Sample records for acoustics temporal patterns

  1. Syllable acoustics, temporal patterns, and call composition vary with behavioral context in Mexican free-tailed bats

    PubMed Central

    Bohn, Kirsten M.; Schmidt-French, Barbara; Ma, Sean T.; Pollak, George D.

    2008-01-01

    Recent research has shown that some bat species have rich vocal repertoires with diverse syllable acoustics. Few studies, however, have compared vocalizations across different behavioral contexts or examined the temporal emission patterns of vocalizations. In this paper, a comprehensive examination of the vocal repertoire of Mexican free-tailed bats, T. brasiliensis, is presented. Syllable acoustics and temporal emission patterns for 16 types of vocalizations including courtship song revealed three main findings. First, although in some cases syllables are unique to specific calls, other syllables are shared among different calls. Second, entire calls associated with one behavior can be embedded into more complex vocalizations used in entirely different behavioral contexts. Third, when different calls are composed of similar syllables, distinctive temporal emission patterns may facilitate call recognition. These results indicate that syllable acoustics alone do not likely provide enough information for call recognition; rather, the acoustic context and temporal emission patterns of vocalizations may affect meaning. PMID:19045674

  2. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    PubMed

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Habitat-associated and temporal patterns of bat activity in a diverse forest landscape of southern New England, USA

    Treesearch

    Robert T. Brooks

    2009-01-01

    With the development and use of acoustic recording technology, surveys have revealed the composition, relative levels of activity, and preliminary habitat use of bat communities at various forest locations. However, detailed examinations of acoustic survey results to investigate temporal patterns of bat activity are rare. Initial active acoustic surveys of bat activity on...

  4. Soundscapes from a Tropical Eastern Pacific reef and a Caribbean Sea reef

    NASA Astrophysics Data System (ADS)

    Staaterman, E.; Rice, A. N.; Mann, D. A.; Paris, C. B.

    2013-06-01

    Underwater soundscapes vary due to the abiotic and biological components of the habitat. We quantitatively characterized the acoustic environments of two coral reef habitats, one in the Tropical Eastern Pacific (Panama) and one in the Caribbean (Florida Keys), over 2-day recording durations in July 2011. We examined the frequency distribution, temporal variability, and biological patterns of sound production and found clear differences. The Pacific reef exhibited clear biological patterns and high temporal variability, such as the onset of snapping shrimp noise at night, as well as a 400-Hz daytime band likely produced by damselfish. In contrast, the Caribbean reef had high sound levels in the lowest frequencies, but lacked clear temporal patterns. We suggest that acoustic measures are an important element to include in reef monitoring programs, as the acoustic environment plays an important role in the ecology of reef organisms at multiple life-history stages.

  5. Temporal patterns in marine mammal sounds from long-term broadband recordings

    NASA Astrophysics Data System (ADS)

    Hildebrand, John A.; Wiggins, Sean; Oleson, Erin; Sirovic, Ana; Munger, Lisa; Soldevilla, Melissa; Burtenshaw, Jessica

    2005-09-01

    Recent advances in the technology for long-term underwater acoustic recording provide new data on the temporal patterns of marine mammal sounds. Autonomous acoustic recordings are now being made with broad frequency bandwidth, at sampling rates up to 200 kHz. These recordings capture sounds from most marine mammal species, including, for instance, the echolocation clicks of odontocetes. Large data storage capacity, up to 1280 Gbytes, allows these recordings to be conducted over long time periods for the study of diel and seasonal calling patterns. Examples will be presented of temporal patterns from long-term recordings collected in four regions: the Bering Sea, offshore southern California, the Gulf of California, and the Southern Ocean. These data provide new insight into marine mammal distribution, seasonality, and behavior.
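
For a sense of scale, a back-of-the-envelope sketch of what the cited figures imply for continuous recording time; the single-channel, 16-bit assumption is ours and is not stated in the record:

```python
# Back-of-the-envelope storage estimate for continuous broadband recording.
# Assumptions (not from the record): one channel, 16-bit PCM samples.
SAMPLE_RATE_HZ = 200_000          # 200-kHz sampling, as cited in the record
BYTES_PER_SAMPLE = 2              # 16-bit PCM (assumption)
STORAGE_BYTES = 1280e9            # 1280 Gbytes of storage, as cited

bytes_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * 86_400
days_of_recording = STORAGE_BYTES / bytes_per_day
print(f"{bytes_per_day / 1e9:.1f} GB/day -> ~{days_of_recording:.0f} days of continuous recording")
# ~34.6 GB/day, i.e. roughly 37 days; duty cycling stretches this to seasonal time scales.
```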

  6. Laser-speckle-visibility acoustic spectroscopy in soft turbid media.

    PubMed

    Wintzenrieth, Frédéric; Cohen-Addad, Sylvie; Le Merrer, Marie; Höhler, Reinhard

    2014-01-01

    We image the evolution in space and time of an acoustic wave propagating along the surface of turbid soft matter by shining coherent light on the sample. The wave locally modulates the speckle interference pattern of the backscattered light, which is recorded using a camera. We show both experimentally and theoretically how the temporal and spatial correlations in this pattern can be analyzed to obtain the acoustic wavelength and attenuation length. The technique is validated using shear waves propagating in aqueous foam. It may be applied to other kinds of acoustic waves in different forms of turbid soft matter such as biological tissues, pastes, or concentrated emulsions.

  7. Laser-speckle-visibility acoustic spectroscopy in soft turbid media

    NASA Astrophysics Data System (ADS)

    Wintzenrieth, Frédéric; Cohen-Addad, Sylvie; Le Merrer, Marie; Höhler, Reinhard

    2014-01-01

    We image the evolution in space and time of an acoustic wave propagating along the surface of turbid soft matter by shining coherent light on the sample. The wave locally modulates the speckle interference pattern of the backscattered light, which is recorded using a camera. We show both experimentally and theoretically how the temporal and spatial correlations in this pattern can be analyzed to obtain the acoustic wavelength and attenuation length. The technique is validated using shear waves propagating in aqueous foam. It may be applied to other kinds of acoustic waves in different forms of turbid soft matter such as biological tissues, pastes, or concentrated emulsions.

  8. Acoustic and temporal partitioning of cicada assemblages in city and mountain environments.

    PubMed

    Shieh, Bao-Sen; Liang, Shih-Hsiung; Chiu, Yuh-Wen

    2015-01-01

    Comparing adaptations to noisy city environments with those to natural mountain environments on the community level can provide significant insights that allow an understanding of the impact of anthropogenic noise on invertebrates that employ loud calling songs for mate attraction, especially when each species has its distinct song, as in the case of cicadas. In this study, we investigated the partitioning strategy of cicada assemblages in city and mountain environments by comparing the acoustic features and calling activity patterns of each species, recorded using automated digital recording systems. Our comparison of activity patterns of seasonal and diel calling revealed that there was no significant temporal partitioning of cicada assemblages in either environment. In addition, there was no correlation between the acoustic distance based on spectral features and temporal segregation. Heterospecific spectral overlap was low in both city and mountain environments, although city and mountain cicada assemblages were subject to significantly different levels of anthropogenic or interspecific noise. Furthermore, for the common species found in both environments, the calling activity patterns at both seasonal and diel time scales were significantly consistent across sites and across environments. We suggest that the temporal calling activity is constrained by endogenous factors for each species and is less flexible in response to external factors, such as anthropogenic noise. As a result, cicada assemblages in city environments with low species diversity do not demonstrate a more significant temporal partitioning than those in mountain environments with high species diversity.

  9. Acoustic and Temporal Partitioning of Cicada Assemblages in City and Mountain Environments

    PubMed Central

    Shieh, Bao-Sen; Liang, Shih-Hsiung; Chiu, Yuh-Wen

    2015-01-01

    Comparing adaptations to noisy city environments with those to natural mountain environments on the community level can provide significant insights that allow an understanding of the impact of anthropogenic noise on invertebrates that employ loud calling songs for mate attraction, especially when each species has its distinct song, as in the case of cicadas. In this study, we investigated the partitioning strategy of cicada assemblages in city and mountain environments by comparing the acoustic features and calling activity patterns of each species, recorded using automated digital recording systems. Our comparison of activity patterns of seasonal and diel calling revealed that there was no significant temporal partitioning of cicada assemblages in either environment. In addition, there was no correlation between the acoustic distance based on spectral features and temporal segregation. Heterospecific spectral overlap was low in both city and mountain environments, although city and mountain cicada assemblages were subject to significantly different levels of anthropogenic or interspecific noise. Furthermore, for the common species found in both environments, the calling activity patterns at both seasonal and diel time scales were significantly consistent across sites and across environments. We suggest that the temporal calling activity is constrained by endogenous factors for each species and is less flexible in response to external factors, such as anthropogenic noise. As a result, cicada assemblages in city environments with low species diversity do not demonstrate a more significant temporal partitioning than those in mountain environments with high species diversity. PMID:25590620

  10. How females of chirping and trilling field crickets integrate the 'what' and 'where' of male acoustic signals during decision making.

    PubMed

    Gabel, Eileen; Gray, David A; Matthias Hennig, R

    2016-11-01

    In crickets, acoustic communication serves mate selection. Female crickets have to perceive and integrate male cues relevant for mate choice while confronted with several different signals in an acoustically diverse background. Overall, female decisions are based on the attractiveness of the temporal pattern (informative about the 'what') and on signal intensity (informative about the 'where') of male calling songs. Here, we investigated how the relevant cues for mate choice are integrated during the decision process by females of five different species of chirping and trilling field crickets. Using a behavioral design, we examined female preferences in no-choice and choice situations for male calling songs differing in pulse rate, modulation depth, intensity, chirp/trill arrangement and temporal shifts. Sensory processing underlying decisions in female field crickets is rather similar across species, as combined evidence suggested that incoming song patterns were analyzed separately by bilaterally paired networks for pattern attractiveness and pattern intensity. A downstream gain control mechanism leads to a weighting of the intensity cue by pattern attractiveness. While remarkable differences between species were observed with respect to specific processing steps, closely related species exhibited more similar preferences than did more distantly related species.

  11. Using Passive and Active Acoustics to Examine Relationships of Cetacean and Prey Densities

    DTIC Science & Technology

    2015-09-30

    modulation or production to the marine soundscape with daily, lunar, and seasonal patterns. We aim to document how presence and intensity of certain...sounds relate to spatio-temporal variability of active acoustic backscatter strength. Additionally, several marine mammal species are predators of deep...scattering layer (DSL) species as well as krill. We intend to investigate if passive acoustic marine mammal detections are related to increased

  12. Effects of spectral and temporal disruption on cortical encoding of gerbil vocalizations

    PubMed Central

    Ter-Mikaelian, Maria; Semple, Malcolm N.

    2013-01-01

    Animal communication sounds contain spectrotemporal fluctuations that provide powerful cues for detection and discrimination. Human perception of speech is influenced both by spectral and temporal acoustic features but is most critically dependent on envelope information. To investigate the neural coding principles underlying the perception of communication sounds, we explored the effect of disrupting the spectral or temporal content of five different gerbil call types on neural responses in the awake gerbil's primary auditory cortex (AI). The vocalizations were impoverished spectrally by reduction to 4 or 16 channels of band-passed noise. For this acoustic manipulation, an average firing rate of the neuron did not carry sufficient information to distinguish between call types. In contrast, the discharge patterns of individual AI neurons reliably categorized vocalizations composed of only four spectral bands with the appropriate natural token. The pooled responses of small populations of AI cells classified spectrally disrupted and natural calls with an accuracy that paralleled human performance on an analogous speech task. To assess whether discharge pattern was robust to temporal perturbations of an individual call, vocalizations were disrupted by time-reversing segments of variable duration. For this acoustic manipulation, cortical neurons were relatively insensitive to short reversal lengths. Consistent with human perception of speech, these results indicate that the stable representation of communication sounds in AI is more dependent on sensitivity to slow temporal envelopes than on spectral detail. PMID:23761696
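
The spectral impoverishment described above is essentially noise vocoding. A generic sketch of that manipulation, under our own assumptions about band edges and filter order (not the authors' exact procedure):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=4, f_lo=100.0, f_hi=8000.0):
    """Reduce a sound to n_channels of envelope-modulated band-passed noise (sketch)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges (assumption)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                       # analysis band of the original call
        env = np.abs(hilbert(band))                      # its temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))  # band-limited noise
        out += env * carrier                             # envelope imposed on the noise
    return out / np.max(np.abs(out))

fs = 44_100
t = np.arange(fs) / fs
call = np.sin(2 * np.pi * 1500 * t) * np.exp(-5 * t)    # toy "vocalization"
vocoded_4ch = noise_vocode(call, fs, n_channels=4)
```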

  13. Acoustic signalling for mate attraction in crickets: Abdominal ganglia control the timing of the calling song pattern.

    PubMed

    Jacob, Pedro F; Hedwig, Berthold

    2016-08-01

    Decoding the neural basis of behaviour requires analysing how the nervous system is organised and how the temporal structure of motor patterns emerges from its activity. The stereotyped calling song of male crickets, which consists of chirps and pulses, is an ideal model for studying this question. We applied selective lesions to the abdominal nervous system of field crickets and performed long-term acoustic recordings of the songs. Specific lesions to connectives or ganglia abolish singing or reliably alter the temporal features of the chirps and pulses. Singing motor control appears to be organised in a modular and hierarchical fashion, in which more posterior ganglia control the timing and structure of the chirp pattern and more anterior ganglia the timing of the pulses. This modular organisation may provide the substrate for song variants underlying calling, courtship and rivalry behaviour and for the species-specific song patterns in extant crickets. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.

  14. Laser speckle visibility acoustic spectroscopy in soft turbid media

    NASA Astrophysics Data System (ADS)

    Wintzenrieth, Frédéric; Cohen-Addad, Sylvie; Le Merrer, Marie; Höhler, Reinhard

    2014-03-01

    We image the evolution in space and time of an acoustic wave propagating along the surface of turbid soft matter by shining coherent light on the sample. The wave locally modulates the speckle interference pattern of the backscattered light, and the speckle visibility [2] is recorded using a camera. We show both experimentally and theoretically how the temporal and spatial correlations in this pattern can be analyzed to obtain the acoustic wavelength and attenuation length. The technique is validated using shear waves propagating in aqueous foam [3]. It may be applied to other kinds of acoustic waves in different forms of turbid soft matter, such as biological tissues, pastes or concentrated emulsions.

  15. Acoustic cue weighting in the singleton vs geminate contrast in Lebanese Arabic: The case of fricative consonants.

    PubMed

    Al-Tamimi, Jalal; Khattab, Ghada

    2015-07-01

    This paper is the first reported investigation of the role of non-temporal acoustic cues in the singleton-geminate contrast in Lebanese Arabic, alongside the more frequently reported temporal cues. The aim is to explore the extent to which singleton and geminate consonants show qualitative differences in a language where phonological length is prominent and where moraic structure governs segment timing and syllable weight. Twenty speakers (ten male, ten female) were recorded producing trochaic disyllables with medial singleton and geminate fricatives preceded by phonologically short and long vowels. The following acoustic measures were applied on the medial fricative and surrounding vowels: absolute duration; intensity; fundamental frequency; spectral peak and shape, dynamic amplitude, and voicing patterns of medial fricatives; and vowel quality and voice quality correlates of surrounding vowels. Discriminant analysis and receiver operating characteristics (ROC) curves were used to assess each acoustic cue's contribution to the singleton-geminate contrast. Classification rates of 89% and ROC curves with an area under the curve rate of 96% confirmed the major role played by temporal cues, with non-temporal cues contributing to the contrast but to a much lesser extent. These results confirm that the underlying contrast for gemination in Arabic is temporal, but highlight [+tense] (fortis) as a secondary feature.
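
A minimal sketch of how a single cue's contribution to the contrast can be scored with an ROC curve, as in the record above; the durations are synthetic and purely illustrative, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Synthetic medial-fricative durations in ms (illustrative values, not study data)
singleton = rng.normal(95, 15, 200)
geminate = rng.normal(190, 25, 200)

durations = np.concatenate([singleton, geminate])
labels = np.concatenate([np.zeros(200), np.ones(200)])   # 1 = geminate

auc = roc_auc_score(labels, durations)                   # area under the ROC curve
fpr, tpr, thresholds = roc_curve(labels, durations)
print(f"AUC for the duration cue alone: {auc:.2f}")      # near 1.0 for a strong temporal cue
```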

  16. Within-individual variation in bullfrog vocalizations: implications for a vocally mediated social recognition system.

    PubMed

    Bee, Mark A

    2004-12-01

    Acoustic signals provide a basis for social recognition in a wide range of animals. Few studies, however, have attempted to relate the patterns of individual variation in signals to behavioral discrimination thresholds used by receivers to discriminate among individuals. North American bullfrogs (Rana catesbeiana) discriminate among familiar and unfamiliar individuals based on individual variation in advertisement calls. The sources, patterns, and magnitudes of variation in eight acoustic properties of multiple-note advertisement calls were examined to understand how patterns of within-individual variation might either constrain, or provide additional cues for, vocal recognition. Six of eight acoustic properties exhibited significant note-to-note variation within multiple-note calls. Despite this source of within-individual variation, all call properties varied significantly among individuals, and multivariate analyses indicated that call notes were individually distinct. Fine-temporal and spectral call properties exhibited less within-individual variation compared to gross-temporal properties and contributed most toward statistically distinguishing among individuals. Among-individual differences in the patterns of within-individual variation in some properties suggest that within-individual variation could also function as a recognition cue. The distributions of among-individual and within-individual differences were used to generate hypotheses about the expected behavioral discrimination thresholds of receivers.

  17. Acoustic-Emergent Phonology in the Amplitude Envelope of Child-Directed Speech

    PubMed Central

    Leong, Victoria; Goswami, Usha

    2015-01-01

    When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes). Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM) patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH) model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS). Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz), syllables (Syllable AM, ~5 Hz) and onset-rime units (Phoneme AM, ~20 Hz). We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words), syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72–82% (freely-read CDS) and 90–98% (rhythmically-regular CDS) stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP) theory. AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across languages. The S-AMPH model reveals a crucial developmental role for stress feet (AMs ~2 Hz). Stress feet underpin different linguistic rhythm typologies, and speech rhythm underpins language acquisition by infants in all languages. PMID:26641472
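
A simplified sketch of the core S-AMPH idea: the broadband amplitude envelope is split into slow modulation-rate bands near the stress, syllable, and onset-rime timescales. The band edges and filters below are our assumptions, not the published model parameters:

```python
import numpy as np
from scipy.signal import butter, decimate, hilbert, sosfiltfilt

def modulation_bands(x, fs, env_fs=200):
    """Split a signal's amplitude envelope into slow AM bands (sketch)."""
    env = np.abs(hilbert(x))                          # broadband amplitude envelope
    env = decimate(decimate(env, 10, ftype="fir"), fs // (10 * env_fs), ftype="fir")
    bands = {"stress_~2Hz": (0.9, 2.5),               # band edges are rough assumptions,
             "syllable_~5Hz": (2.5, 12.0),            # not the published S-AMPH values
             "phoneme_~20Hz": (12.0, 40.0)}
    return {name: sosfiltfilt(butter(2, (lo, hi), btype="bandpass", fs=env_fs, output="sos"), env)
            for name, (lo, hi) in bands.items()}

fs = 16_000
t = np.arange(4 * fs) / fs
# Toy "speech": a 300-Hz carrier amplitude-modulated at ~2 Hz (stress) and ~5 Hz (syllable)
toy = (1 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 300 * t)
am_bands = modulation_bands(toy, fs)                  # dict of three AM waveforms at 200 Hz
```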

  18. Acoustic-Emergent Phonology in the Amplitude Envelope of Child-Directed Speech.

    PubMed

    Leong, Victoria; Goswami, Usha

    2015-01-01

    When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes). Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM) patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH) model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS). Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz), syllables (Syllable AM, ~5 Hz) and onset-rime units (Phoneme AM, ~20 Hz). We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words), syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72-82% (freely-read CDS) and 90-98% (rhythmically-regular CDS) stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP) theory. AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across languages. The S-AMPH model reveals a crucial developmental role for stress feet (AMs ~2 Hz). Stress feet underpin different linguistic rhythm typologies, and speech rhythm underpins language acquisition by infants in all languages.

  19. Sound production patterns from humpback whales in a high latitude foraging area

    NASA Astrophysics Data System (ADS)

    Stimpert, Alison K.; Wiley, David N.; Barton, Kira L.; Johnson, Mark P.; Lammers, Marc O.; Au, Whitlow W. L.

    2005-09-01

    Numerous studies have been conducted on humpback whale song, but substantially fewer have focused on the acoustic properties of non-song sound production (i.e., feeding and social sounds). Non-invasive digital acoustic recording tags (DTAGs) were attached to humpback whales on the western North Atlantic's Great South Channel feeding grounds during July 2004. Acoustic records totaling 48.4 data hours from four of these attachments were aurally analyzed for temporal trends in whale signal production. A custom automatic detection function was also used to identify occurrences of specific signals and evaluate their temporal consistency. Patterns in sound usage varied by stage of foraging dive and by time of day. The amount of time with signals present was greater at the bottom of dives than during surface periods, indicating that sounds are probably related to foraging at depth. For the two tags that recorded at night, signals were present during a greater proportion of daylight hours than night hours. These results will be compared with previously published trends describing diel patterns in male humpback whale song chorusing on the breeding grounds. Data from the continuation of this research during the summer of 2005 will also be included.

  20. Left hemisphere lateralization for lexical and acoustic pitch processing in Cantonese speakers as revealed by mismatch negativity.

    PubMed

    Gu, Feng; Zhang, Caicai; Hu, Axu; Zhao, Guoping

    2013-12-01

    For nontonal language speakers, speech processing is lateralized to the left hemisphere and musical processing is lateralized to the right hemisphere (i.e., function-dependent brain asymmetry). On the other hand, acoustic temporal processing is lateralized to the left hemisphere and spectral/pitch processing is lateralized to the right hemisphere (i.e., acoustic-dependent brain asymmetry). In this study, we examine whether the hemispheric lateralization of lexical pitch and acoustic pitch processing in tonal language speakers is consistent with the patterns of function- and acoustic-dependent brain asymmetry in nontonal language speakers. Pitch contrast in both speech stimuli (syllable /ji/ in Experiment 1) and nonspeech stimuli (harmonic tone in Experiment 1; pure tone in Experiment 2) was presented to native Cantonese speakers in passive oddball paradigms. We found that the mismatch negativity (MMN) elicited by lexical pitch contrast was lateralized to the left hemisphere, which is consistent with the pattern of function-dependent brain asymmetry (i.e., left hemisphere lateralization for speech processing) in nontonal language speakers. However, the MMN elicited by acoustic pitch contrast was also left hemisphere lateralized (harmonic tone in Experiment 1) or showed a tendency for left hemisphere lateralization (pure tone in Experiment 2), which is inconsistent with the pattern of acoustic-dependent brain asymmetry (i.e., right hemisphere lateralization for acoustic pitch processing) in nontonal language speakers. The consistent pattern of function-dependent brain asymmetry and the inconsistent pattern of acoustic-dependent brain asymmetry between tonal and nontonal language speakers can be explained by the hypothesis that the acoustic-dependent brain asymmetry is the consequence of a carryover effect from function-dependent brain asymmetry. Potential evolutionary implication of this hypothesis is discussed. © 2013.

  1. Neural coding strategies in auditory cortex.

    PubMed

    Wang, Xiaoqin

    2007-07-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.

  2. Discrimination of acoustic communication signals by grasshoppers (Chorthippus biguttulus): temporal resolution, temporal integration, and the impact of intrinsic noise.

    PubMed

    Ronacher, Bernhard; Wohlgemuth, Sandra; Vogel, Astrid; Krahe, Rüdiger

    2008-08-01

    A characteristic feature of hearing systems is their ability to resolve both fast and subtle amplitude modulations of acoustic signals. This applies also to grasshoppers, which for mate identification rely mainly on the characteristic temporal patterns of their communication signals. Usually the signals arriving at a receiver are contaminated by various kinds of noise. In addition to extrinsic noise, intrinsic noise caused by stochastic processes within the nervous system contributes to making signal recognition a difficult task. The authors asked to what degree intrinsic noise affects temporal resolution and, particularly, the discrimination of similar acoustic signals. This study aims at exploring the neuronal basis for sexual selection, which depends on exploiting subtle differences between basically similar signals. Applying a metric, by which the similarities of spike trains can be assessed, the authors investigated how well the communication signals of different individuals of the same species could be discriminated and correctly classified based on the responses of auditory neurons. This spike train metric yields clues to the optimal temporal resolution with which spike trains should be evaluated. (c) 2008 APA, all rights reserved

  3. Fine structure of acoustic signals caused by a drop falling onto the surface of water

    NASA Astrophysics Data System (ADS)

    Chashechkin, Yu. D.; Prokhorov, V. E.

    2015-08-01

    The temporal structure of the sound radiated when a drop falls onto a free liquid surface is investigated experimentally by high-resolution, high-speed video recording synchronized with broad-band measurement of the acoustic pressure. Groups of short and relatively prolonged sound packets, with frequency content from 2 to 50 kHz, are isolated, together with the corresponding flow patterns, including the simultaneous formation of resonating bubbles and their interaction with the emerging cavity. The temporal dependence of the governing parameter, the Weber number, is constructed; it is stably reproduced across the series of experiments by a power function with a fractional exponent.

  4. Social Vocalizations of Big Brown Bats Vary with Behavioral Context

    PubMed Central

    Gadziola, Marie A.; Grimsley, Jasmine M. S.; Faure, Paul A.; Wenstrup, Jeffrey J.

    2012-01-01

    Bats are among the most gregarious and vocal mammals, with some species demonstrating a diverse repertoire of syllables under a variety of behavioral contexts. Despite extensive characterization of big brown bat (Eptesicus fuscus) biosonar signals, there have been no detailed studies of adult social vocalizations. We recorded and analyzed social vocalizations and associated behaviors of captive big brown bats under four behavioral contexts: low aggression, medium aggression, high aggression, and appeasement. Even limited to these contexts, big brown bats possess a rich repertoire of social vocalizations, with 18 distinct syllable types automatically classified using a spectrogram cross-correlation procedure. For each behavioral context, we describe vocalizations in terms of syllable acoustics, temporal emission patterns, and typical syllable sequences. Emotion-related acoustic cues are evident within the call structure by context-specific syllable types or variations in the temporal emission pattern. We designed a paradigm that could evoke aggressive vocalizations while monitoring heart rate as an objective measure of internal physiological state. Changes in the magnitude and duration of elevated heart rate scaled to the level of evoked aggression, confirming the behavioral state classifications assessed by vocalizations and behavioral displays. These results reveal a complex acoustic communication system among big brown bats in which acoustic cues and call structure signal the emotional state of a caller. PMID:22970247
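
A bare-bones sketch of spectrogram cross-correlation as a syllable-similarity measure, in the spirit of the classification procedure mentioned above; this is our illustration of the general technique, not the authors' implementation, and the toy syllables and sampling rate are assumptions:

```python
import numpy as np
from scipy.signal import spectrogram, correlate2d

def spectrogram_xcorr(x, y, fs, nperseg=256):
    """Peak of the normalized 2-D cross-correlation between two spectrograms."""
    _, _, Sx = spectrogram(x, fs=fs, nperseg=nperseg)
    _, _, Sy = spectrogram(y, fs=fs, nperseg=nperseg)
    Sx = (Sx - Sx.mean()) / (Sx.std() * Sx.size)       # normalize so identical inputs score ~1
    Sy = (Sy - Sy.mean()) / Sy.std()
    return correlate2d(Sx, Sy, mode="same").max()      # slide in time and frequency, keep peak

fs = 250_000                                           # ultrasonic sampling rate (assumption)
t = np.arange(int(0.02 * fs)) / fs                     # 20-ms toy syllables
syllable_a = np.sin(2 * np.pi * (40_000 - 1.0e6 * t) * t)   # downward frequency sweep
syllable_b = np.sin(2 * np.pi * (42_000 - 1.0e6 * t) * t)   # a slightly shifted sweep
print(spectrogram_xcorr(syllable_a, syllable_a, fs))   # ~1.0 for identical syllables
print(spectrogram_xcorr(syllable_a, syllable_b, fs))   # lower, but still similar
```

Pairwise peak correlations like these can then feed a clustering step that groups syllables into candidate types.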

  5. Effects of subsampling of passive acoustic recordings on acoustic metrics.

    PubMed

    Thomisch, Karolin; Boebel, Olaf; Zitterbart, Daniel P; Samaran, Flore; Van Parijs, Sofie; Van Opzeeland, Ilse

    2015-07-01

    Passive acoustic monitoring is an important tool in marine mammal studies. However, logistics and finances frequently constrain the number and servicing schedules of acoustic recorders, requiring a trade-off between deployment periods and sampling continuity, i.e., the implementation of a subsampling scheme. Optimizing such schemes to each project's specific research questions is desirable. This study investigates the impact of subsampling on the accuracy of two common metrics, acoustic presence and call rate, for different vocalization patterns (regimes) of baleen whales: (1) variable vocal activity, (2) vocalizations organized in song bouts, and (3) vocal activity with diel patterns. To this end, above metrics are compared for continuous and subsampled data subject to different sampling strategies, covering duty cycles between 50% and 2%. The results show that a reduction of the duty cycle impacts negatively on the accuracy of both acoustic presence and call rate estimates. For a given duty cycle, frequent short listening periods improve accuracy of daily acoustic presence estimates over few long listening periods. Overall, subsampling effects are most pronounced for low and/or temporally clustered vocal activity. These findings illustrate the importance of informed decisions when applying subsampling strategies to passive acoustic recordings or analyses for a given target species.
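
A toy simulation of the trade-off described above: for a fixed duty cycle, do a few long listening periods or many short ones recover daily acoustic presence more accurately? All parameters below (duty cycle, bout structure, activity rates) are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
MINUTES_PER_DAY, DAYS, DUTY = 1440, 60, 0.10         # 10% duty cycle (assumption)

# Synthetic detections: calls cluster into three 60-min bouts on "vocally active" days
active_day = rng.random(DAYS) < 0.5
calls = np.zeros((DAYS, MINUTES_PER_DAY), dtype=bool)
for d in np.where(active_day)[0]:
    for start in rng.integers(0, MINUTES_PER_DAY - 60, size=3):
        calls[d, start:start + 60] = True

def presence_recovered(calls, cycle_min):
    """Fraction of truly active days still detected when listening DUTY of each cycle."""
    listening = (np.arange(MINUTES_PER_DAY) % cycle_min) < DUTY * cycle_min
    detected = (calls & listening).any(axis=1)
    return detected[active_day].mean()

for cycle in (10, 60, 360, 1440):                     # 1 min in 10, 6 in 60, 36 in 360, ...
    print(f"cycle {cycle:>4} min: daily presence recovered on "
          f"{presence_recovered(calls, cycle):.0%} of active days")
# Frequent short listening periods recover daily presence better than few long ones,
# especially when vocal activity is temporally clustered.
```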

  6. The influence of gender on auditory and language cortical activation patterns: preliminary data.

    PubMed

    Kocak, Mehmet; Ulmer, John L; Biswal, Bharat B; Aralasmak, Ayse; Daniels, David L; Mark, Leighton P

    2005-10-01

    Intersex cortical and functional asymmetry is an ongoing topic of investigation. In this pilot study, we sought to determine the influence of acoustic scanner noise and sex on auditory and language cortical activation patterns of the dominant hemisphere. Echoplanar functional MR imaging (fMRI; 1.5T) was performed on 12 healthy right-handed subjects (6 men and 6 women). Passive text listening tasks were employed in 2 different background acoustic scanner noise conditions (12 sections/2 seconds TR [6 Hz] and 4 sections/2 seconds TR [2 Hz]), with the first 4 sections in identical locations in the left hemisphere. Cross-correlation analysis was used to construct activation maps in subregions of auditory and language relevant cortex of the dominant (left) hemisphere, and activation areas were calculated by using coefficient thresholds of 0.5, 0.6, and 0.7. Text listening caused robust activation in anatomically defined auditory cortex, and weaker activation in language relevant cortex of all 12 individuals. As a whole, there was no significant difference in regional cortical activation between the 2 background acoustic scanner noise conditions. When sex was considered, men showed a significantly (P < .01) greater change in left hemisphere activation during the high scanner noise rate condition than did women. This effect was significant (P < .05) in the left superior temporal gyrus, the posterior aspect of the left middle temporal gyrus and superior temporal sulcus, and the left inferior frontal gyrus. An increase in the rate of background acoustic scanner noise caused increased activation in auditory and language relevant cortex of the dominant hemisphere in men, compared with women, in whom no such change in activation was observed. Our preliminary data suggest possible methodologic confounds of fMRI research and call for larger investigations to substantiate our findings and further characterize sex-based influences on hemispheric activation patterns.

  7. Acoustic wave propagation and intensity fluctuations in shallow water 2006 experiment

    NASA Astrophysics Data System (ADS)

    Luo, Jing

    Fluctuations of low frequency sound propagation in the presence of nonlinear internal waves during the Shallow Water 2006 experiment are analyzed. Acoustic waves and environmental data, including on-board ship radar images, were collected simultaneously before, during, and after a strong internal solitary wave packet passed through a source-receiver acoustic track. Analysis of the acoustic wave signals shows temporal intensity fluctuations. These fluctuations are affected by the passing internal wave and agree well with the theory of horizontal refraction of acoustic wave propagation in shallow water. The intensity focusing and defocusing that occur in a fixed source-receiver configuration while an internal wave packet approaches and passes the acoustic track are addressed in this thesis. Acoustic ray-mode theory is used to explain the modal evolution of broadband acoustic waves propagating in a shallow water waveguide in the presence of internal waves. Acoustic modal behavior is obtained from the data through modal decomposition algorithms applied to data collected by a vertical line array of hydrophones. Strong interference patterns are observed in the acoustic data, whose main cause is identified as horizontal refraction, referred to as the horizontal Lloyd mirror effect. To analyze this interference pattern, a combined parabolic equation model and a vertical-mode, horizontal-ray model are utilized. A semi-analytic formula for estimating the horizontal Lloyd mirror effect is developed.

  8. Snapping shrimp sound production patterns on Caribbean coral reefs: relationships with celestial cycles and environmental variables

    NASA Astrophysics Data System (ADS)

    Lillis, Ashlee; Mooney, T. Aran

    2018-06-01

    The rich acoustic environment of coral reefs, including the sounds of a variety of fish and invertebrates, is a reflection of the structural complexity and biological diversity of these habitats. Emerging interest in applying passive acoustic monitoring and soundscape analysis to measure coral reef habitat characteristics and track ecological patterns is hindered by a poor understanding of the most common and abundant sound producers on reefs: the snapping shrimp. Here, we sought to address several basic biophysical drivers of reef sound by investigating acoustic activity patterns of snapping shrimp populations on two adjacent coral reefs, applying a detailed snap detection analysis routine to a high-resolution, 2.5-month acoustic dataset from the US Virgin Islands. The reefs exhibited strong diel and lunar periodicity in snap rates and clear spatial differences in snapping levels. Snap rates peaked at dawn and dusk and were higher overall during daytime versus nighttime, a seldom-reported pattern in earlier descriptions of diel snapping shrimp acoustic activity. Small differences between the sites in snap rate rhythms were detected and illustrate how analyses of specific soundscape elements might reveal subtle between-reef variation. Snap rates were highly correlated with environmental variables, including water temperature and light, and were found to be sensitive to changes in oceanographic forcing. This study further establishes snapping shrimp as key players in the coral reef chorus and provides evidence that their acoustic output reflects a combination of environmental conditions, celestial influences, and spatial habitat variation. Effective application of passive acoustic monitoring in coral reef habitats using snap rates or snapping-influenced acoustic metrics will require a mechanistic understanding of the underlying spatial and temporal variation in snapping shrimp sound production across multiple scales.
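
A minimal snap-counting sketch in the spirit of the detection routine described above, using amplitude-threshold peak picking on a synthetic waveform; the thresholds, snap shape, and sampling rate are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
fs = 96_000
t = np.arange(10 * fs) / fs                            # 10 s of toy reef recording
background = 0.02 * rng.standard_normal(t.size)

clicks = np.zeros_like(t)
for snap_time in rng.uniform(0, 9.9, size=40):         # 40 snaps at random times
    i = int(snap_time * fs)
    clicks[i:i + 48] += np.hanning(48)                  # each snap: a brief, loud transient
signal = background + clicks

# Count snaps as peaks well above the background, at least 10 ms apart (assumptions)
peaks, _ = find_peaks(np.abs(signal), height=5 * background.std(), distance=int(0.01 * fs))
print(f"detected {len(peaks)} snaps -> {len(peaks) / 10.0:.1f} snaps per second")
# Binning such counts by hour over weeks yields the diel and lunar snap-rate patterns.
```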

  9. Acoustic estimates of zooplankton and micronekton biomass in cyclones and anticyclones of the northeastern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Ressler, Patrick Henry

    2001-12-01

    In the Gulf of Mexico (GOM), coarse to mesoscale eddies can enhance the supply of limiting nutrients into the euphotic zone, elevating primary production. This leads to 'oases' of enriched standing stocks of zooplankton and micronekton in otherwise oligotrophic deepwater (>200 m bottom depth). A combination of acoustic volume backscattering (Sv) measurements with an acoustic Doppler current profiler (ADCP) and concurrent net sampling of zooplankton and micronekton biomass in GOM eddy fields between October 1996 and November 1998 confirmed that cyclones and flow confluences were areas of locally enhanced Sv and standing stock biomass. Net samples were used both to 'sea-truth' the acoustic measurements and to assess the influence of taxonomic composition on measured Sv. During October 1996 and August 1997, a mesoscale (200-300 km diameter) cyclone-anticyclone pair in the northeastern GOM was surveyed as part of a cetacean (whale and dolphin) and seabird habitat study. Acoustic estimates of biomass in the upper 10-50 m of the water column showed that the cyclone and flow confluence were enriched relative to anticyclonic Loop Current Eddies during both years. Cetacean and seabird survey results reported by other project researchers imply that these eddies provide preferential habitat because they foster locally higher concentrations of higher-trophic-level prey. Sv measurements in November 1997 and 1998 showed that coarse scale eddies (30-150 km diameter) probably enhanced nutrients and Sv in the deepwater GOM within 100 km of the Mississippi delta, an area suspected to be important habitat for cetaceans and seabirds. Finally, Sv data collected during November-December 1997 and October-December 1998 from a mooring at the head of DeSoto Canyon in the northeastern GOM revealed temporal variability at a single location: characteristic temporal decorrelation scales were 1 day (diel vertical migration of zooplankton and micronekton) and 5 days (advective processes). A combination of acoustic and net sampling is a useful way to survey temporal and spatial patterns in zooplankton and micronekton biomass in coarse to mesoscale eddies. Further research should employ such a combination of methods to investigate plankton patterns in eddies and their implications for cetacean and seabird habitat.

  10. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves

    PubMed Central

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J. R.; Krenner, Hubert J.; Wixforth, Achim; Salditt, Tim

    2014-01-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length). PMID:25294979

  11. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves.

    PubMed

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J R; Krenner, Hubert J; Wixforth, Achim; Salditt, Tim

    2014-10-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length).

  12. Evolutionary diversification of the auditory organ sensilla in Neoconocephalus katydids (Orthoptera: Tettigoniidae) correlates with acoustic signal diversification over phylogenetic relatedness and life history.

    PubMed

    Strauß, J; Alt, J A; Ekschmitt, K; Schul, J; Lakes-Harlan, R

    2017-06-01

    Neoconocephalus Tettigoniidae are a model for the evolution of acoustic signals, as male calls have diversified in temporal structure during the radiation of the genus. The call divergence and phylogeny in Neoconocephalus are established, but in tettigoniids in general, accompanying evolutionary changes in hearing organs have not been studied. We investigated anatomical changes of the tympanal hearing organs during the evolutionary radiation and divergence of intraspecific acoustic signals. We compared the neuroanatomy of auditory sensilla (crista acustica) from nine Neoconocephalus species for the number of auditory sensilla and the crista acustica length. These parameters were correlated with differences in temporal call features, body size, life histories and different phylogenetic positions. By this, adaptive responses to shifting frequencies of male calls and changes in their temporal patterns can be evaluated against phylogenetic constraints and allometry. All species showed well-developed auditory sensilla, numbering on average 32-35 across species. Crista acustica length and sensillum numbers correlated with body size, but not with phylogenetic position or life history. Statistically significant correlations also existed with specific call patterns: a higher number of auditory sensilla occurred in species with continuous calls or slow pulse rates, and a longer crista acustica occurred in species with double pulses or slow pulse rates. The auditory sensilla show significant differences between species despite their recent radiation and their morphological and ecological similarities. This indicates responses to natural and sexual selection, including divergence of temporal and spectral signal properties. Phylogenetic constraints are unlikely to limit these changes of the auditory systems. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  13. Advancing-side directivity and retreating-side interactions of model rotor blade-vortex interaction noise

    NASA Technical Reports Server (NTRS)

    Martin, R. M.; Splettstoesser, W. R.; Elliott, J. W.; Schultz, K.-J.

    1988-01-01

    Acoustic data are presented from a 40 percent scale model of the four-bladed BO-105 helicopter main rotor, tested in a large aerodynamic wind tunnel. Rotor blade-vortex interaction (BVI) noise data in the low-speed flight range were acquired using a traversing in-flow microphone array. Acoustic results presented are used to assess the acoustic far field of BVI noise, to map the directivity and temporal characteristics of BVI impulsive noise, and to show the existence of retreating-side BVI signals. The characteristics of the acoustic radiation patterns, which can often be strongly focused, are found to be very dependent on rotor operating condition. The acoustic signals exhibit multiple blade-vortex interactions per blade with broad impulsive content at lower speeds, while at higher speeds they exhibit fewer interactions per blade, with much sharper, higher amplitude acoustic signals. Moderate-amplitude BVI acoustic signals measured under the aft retreating quadrant of the rotor are shown to originate from the retreating side of the rotor.

  14. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.

    PubMed

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participant's discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities.
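
A small sketch of the stimulus constructions described above (1-s cyclic noise whose two halves are identical, plus looped and scrambled variants); the sampling rate and segment handling are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 44_100                                        # sampling rate (assumption)

half = rng.standard_normal(fs // 2)                # 0.5 s of Gaussian noise
cyclic_noise = np.concatenate([half, half])        # 1-s CN: the second half repeats the first
plain_noise = rng.standard_normal(2 * (fs // 2))   # 1-s N: no internal repetition

def scramble(x, fs, bit_ms=10):
    """Chop a sound into bit_ms segments and shuffle their order."""
    bit = int(fs * bit_ms / 1000)
    segments = [x[i:i + bit] for i in range(0, len(x) - bit + 1, bit)]
    order = rng.permutation(len(segments))
    return np.concatenate([segments[k] for k in order])

scrambled_cn = scramble(cyclic_noise, fs, bit_ms=10)   # 10-ms bits, shuffled
looped_cn = np.roll(cyclic_noise, fs // 4)             # "looped": origin shifted by 0.25 s
```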

  15. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns

    PubMed Central

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J.

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participant's discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities. PMID:27932941

  16. Suppressed Alpha Oscillations Predict Intelligibility of Speech and its Acoustic Details

    PubMed Central

    Weisz, Nathan

    2012-01-01

    Modulations of human alpha oscillations (8–13 Hz) accompany many cognitive processes, but their functional role in auditory perception has proven elusive: Do oscillatory dynamics of alpha reflect acoustic details of the speech signal and are they indicative of comprehension success? Acoustically presented words were degraded in acoustic envelope and spectrum in an orthogonal design, and electroencephalogram responses in the frequency domain were analyzed in 24 participants, who rated word comprehensibility after each trial. First, the alpha power suppression during and after a degraded word depended monotonically on spectral and, to a lesser extent, envelope detail. The magnitude of this alpha suppression exhibited an additional and independent influence on later comprehension ratings. Second, source localization of alpha suppression yielded superior parietal, prefrontal, as well as anterior temporal brain areas. Third, multivariate classification of the time–frequency pattern across participants showed that patterns of late posterior alpha power allowed best for above-chance classification of word intelligibility. Results suggest that both magnitude and topography of late alpha suppression in response to single words can indicate a listener's sensitivity to acoustic features and the ability to comprehend speech under adverse listening conditions. PMID:22100354

  17. Age-Related Neural Oscillation Patterns During the Processing of Temporally Manipulated Speech.

    PubMed

    Rufener, Katharina S; Oechslin, Mathias S; Wöstmann, Malte; Dellwo, Volker; Meyer, Martin

    2016-05-01

    This EEG study aims to investigate age-related differences in neural oscillation patterns during the processing of temporally modulated speech. Taking a lifespan perspective, we recorded electroencephalogram (EEG) data from three age samples: young adults, middle-aged adults and older adults. Stimuli consisted of temporally degraded sentences in Swedish, a language unfamiliar to all participants. We found age-related differences in phonetic pattern matching when participants were presented with envelope-degraded sentences, whereas no such age effect was observed in the processing of fine-structure-degraded sentences. Irrespective of age, during speech processing the EEG data revealed a relationship between envelope information and theta band (4-8 Hz) activity. Additionally, an association between fine-structure information and gamma band (30-48 Hz) activity was found. No interaction, however, was found between the acoustic manipulation of the stimuli and age. Importantly, our main finding was paralleled by an overall enhanced power in older adults in high frequencies (gamma: 30-48 Hz). This occurred irrespective of condition. For the most part, this result is in line with the Asymmetric Sampling in Time framework (Poeppel in Speech Commun 41:245-255, 2003), which assumes an isomorphic correspondence between frequency modulations in neurophysiological patterns and acoustic oscillations in spoken language. We conclude that speech-specific neural networks show strong stability over adulthood, despite initial processes of cortical degeneration indicated by enhanced gamma power. The results of our study therefore confirm the concept that sensory and cognitive processes undergo multidirectional trajectories within the context of healthy aging.

  18. The acoustic adaptation hypothesis in a widely distributed South American frog: Southernmost signals propagate better.

    PubMed

    Velásquez, Nelson A; Moreno-Gómez, Felipe N; Brunetti, Enzo; Penna, Mario

    2018-05-03

    Animal communication occurs in environments that affect the properties of signals as they propagate from senders to receivers. We studied the geographic variation of the advertisement calls of male Pleurodema thaul individuals from eight localities in Chile. Furthermore, by means of signal propagation experiments, we tested the hypothesis that local calls are better transmitted and less degraded than foreign calls (i.e. acoustic adaptation hypothesis). Overall, the advertisement calls varied greatly along the distribution of P. thaul in Chile, and it was possible to discriminate localities grouped into northern, central and southern stocks. Propagation distance affected signal amplitude and spectral degradation in all localities, but temporal degradation was only affected by propagation distance in one out of seven localities. Call origin affected signal amplitude in five out of seven localities and affected spectral and temporal degradation in six out of seven localities. In addition, in northern localities, local calls degraded more than foreign calls, and in southern localities the opposite was observed. The lack of a strict optimal relationship between signal characteristics and environment indicates partial concordance with the acoustic adaptation hypothesis. Inter-population differences in selectivity for call patterns may compensate for such environmental constraints on acoustic communication.
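
    The abstract does not give the exact degradation metric, so the following is a generic sketch of one common way to quantify spectral degradation in propagation experiments: one minus the correlation between the normalized magnitude spectra of the emitted call and its re-recorded, propagated version. The function name and FFT length are illustrative choices.

    ```python
    import numpy as np

    def spectral_degradation(emitted, received, n_fft=4096):
        """Generic degradation index: 1 - Pearson correlation between the normalized
        magnitude spectra of the emitted and re-recorded signals (0 = unchanged,
        values near 1 = strongly degraded)."""
        def norm_spectrum(x):
            mag = np.abs(np.fft.rfft(x, n_fft))
            return mag / np.linalg.norm(mag)
        r = np.corrcoef(norm_spectrum(emitted), norm_spectrum(received))[0, 1]
        return 1.0 - r
    ```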

  19. Spatio-temporal dynamics of turbulence trapped in geodesic acoustic modes

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Kobayashi, T.; Itoh, K.; Kasuya, N.; Kosuga, Y.; Fujisawa, A.; Itoh, S.-I.

    2018-01-01

    The spatio-temporal dynamics of turbulence interacting with geodesic acoustic modes (GAMs) are investigated, focusing on the phase-space structure of the turbulence, where the phase space consists of real space and wavenumber space. Based on the wave-kinetic framework, the coupling equation between the GAM and the turbulence is numerically solved, and turbulence trapped by the GAM velocity field is obtained. Due to this trapping effect, the turbulence intensity increases where the second derivative of the GAM velocity (the curvature of the GAM) is negative, whereas in the positive-curvature region the turbulence is suppressed. Since the trapped turbulence propagates with the GAMs, this relationship is sustained spatially and temporally. The dynamics of the turbulence in the wavenumber spectrum are converted into the evolution of the frequency spectrum, and the simulation result is compared with experimental observations in the JFT-2M tokamak, where similar patterns are obtained. The turbulence trapping effect is key to understanding the spatial structure of turbulence in the presence of sheared flows.

  20. Selective attention to temporal features on nested time scales.

    PubMed

    Henry, Molly J; Herrmann, Björn; Obleser, Jonas

    2015-02-01

    Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined temporal features. The current study aimed to identify and characterize the neural network specifically involved in this feature-selective attention to time. We used a novel paradigm where listeners judged either the duration or modulation rate of auditory stimuli, and in which the stimulation, working memory demands, response requirements, and task difficulty were held constant. A first analysis identified all brain regions where individual brain activation patterns were correlated with individual behavioral performance patterns, which thus supported temporal judgments generically. A second analysis then isolated those brain regions that specifically regulated selective attention to temporal features: Neural responses in a bilateral fronto-parietal network including insular cortex and basal ganglia decreased with degree of change of the attended temporal feature. Critically, response patterns in these regions were inverted when the task required selectively ignoring this feature. The results demonstrate how the neural analysis of complex acoustic stimuli with multiple temporal features depends on a fronto-parietal network that simultaneously regulates the selective gain for attended and ignored temporal features. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Recurrence plot analysis of nonstationary data: the understanding of curved patterns.

    PubMed

    Facchini, A; Kantz, H; Tiezzi, E

    2005-08-01

    Recurrence plots of the calls of Nomascus concolor (Western black crested gibbon) and Hylobates lar (White-handed gibbon) show characteristic circular, curved, and hyperbolic patterns superimposed on the main temporal scale of the signal. It is shown that these patterns are related to particular nonstationarities in the signal. Some of them can be reproduced by artificial signals such as frequency-modulated sinusoids and sinusoids with time-divergent frequency. These modulations are too faint to be resolved with similar precision by conventional time-frequency analysis. Therefore, recurrence plots act as a magnifying glass for the detection of multiple temporal scales in slightly modulated signals. The detected phenomena in these acoustic signals can be explained in the biomechanical context by taking into account the role of the muscles controlling the vocal folds.
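
    A recurrence plot marks the pairs of time points at which a signal revisits approximately the same state. A minimal sketch using time-delay embedding follows; the embedding dimension, delay and distance threshold are arbitrary illustrative choices, not the parameters used in the paper.

    ```python
    import numpy as np

    def recurrence_plot(x, dim=3, delay=10, eps=None):
        """Binary recurrence matrix R[i, j] = 1 if the embedded states at times i and j
        are closer than eps (Euclidean distance), else 0."""
        n = len(x) - (dim - 1) * delay
        # time-delay embedding: each row is one reconstructed state vector
        emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        if eps is None:
            eps = 0.1 * dists.max()  # arbitrary threshold: 10% of the maximum distance
        return (dists < eps).astype(int)

    # example: a slowly frequency-modulated sinusoid, the kind of signal discussed above
    t = np.linspace(0.0, 2.0, 2000)
    sig = np.sin(2 * np.pi * (440 + 20 * t) * t)
    R = recurrence_plot(sig[::4])
    ```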

  2. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review

    PubMed Central

    Schomers, Malte R.; Pulvermüller, Friedemann

    2016-01-01

    In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. At first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information onto temporal and frontoparietal regions. The question about a causal role of sensorimotor cortex in speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding. PMID:27708566
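
    Representational similarity analysis (RSA), mentioned above, compares the pattern of pairwise dissimilarities between condition-specific brain responses with the dissimilarities predicted from a stimulus or model description. A generic sketch follows (not the reviewed studies' code); the dissimilarity metrics and the toy data are assumptions.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rsa_score(neural_patterns, model_features):
        """Spearman correlation between the neural and model representational
        dissimilarity matrices (condensed upper triangles).
        Both inputs are (n_conditions, n_features) arrays."""
        neural_rdm = pdist(neural_patterns, metric="correlation")
        model_rdm = pdist(model_features, metric="euclidean")
        return spearmanr(neural_rdm, model_rdm)

    # toy example: 8 phoneme conditions, random "voxel" patterns and acoustic features
    rng = np.random.default_rng(1)
    print(rsa_score(rng.standard_normal((8, 100)), rng.standard_normal((8, 12))))
    ```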

  3. A multimodal spectral approach to characterize rhythm in natural speech.

    PubMed

    Alexandrou, Anna Maria; Saarinen, Timo; Kujala, Jan; Salmelin, Riitta

    2016-01-01

    Human utterances demonstrate temporal patterning, also referred to as rhythm. While simple oromotor behaviors (e.g., chewing) feature a salient periodical structure, conversational speech displays a time-varying quasi-rhythmic pattern. Quantification of periodicity in speech is challenging. Unimodal spectral approaches have highlighted rhythmic aspects of speech. However, speech is a complex multimodal phenomenon that arises from the interplay of articulatory, respiratory, and vocal systems. The present study addressed the question of whether a multimodal spectral approach, in the form of coherence analysis between electromyographic (EMG) and acoustic signals, would allow one to characterize rhythm in natural speech more efficiently than a unimodal analysis. The main experimental task consisted of speech production at three speaking rates; a simple oromotor task served as control. The EMG-acoustic coherence emerged as a sensitive means of tracking speech rhythm, whereas spectral analysis of either EMG or acoustic amplitude envelope alone was less informative. Coherence metrics seem to distinguish and highlight rhythmic structure in natural speech.
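
    Magnitude-squared coherence between an EMG signal and the acoustic amplitude envelope can be computed directly with scipy, as sketched below; the common sampling rate, window length and frequency band of interest are illustrative assumptions rather than the study's settings.

    ```python
    import numpy as np
    from scipy.signal import coherence, hilbert

    def amplitude_envelope(audio):
        """Acoustic amplitude envelope via the analytic signal (Hilbert transform)."""
        return np.abs(hilbert(audio))

    def emg_acoustic_coherence(emg, audio, fs=1000, fmax=10.0):
        """Coherence spectrum between EMG and the acoustic amplitude envelope,
        restricted to the slow (speech-rhythm) frequencies below fmax Hz."""
        f, cxy = coherence(emg, amplitude_envelope(audio), fs=fs, nperseg=2 * fs)
        keep = f <= fmax
        return f[keep], cxy[keep]
    ```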

  4. The Interaction of Temporal and Spectral Acoustic Information with Word Predictability on Speech Intelligibility

    NASA Astrophysics Data System (ADS)

    Shahsavarani, Somayeh Bahar

    High-level, top-down information such as linguistic knowledge is a salient cortical resource that influences speech perception under most listening conditions. But are all listeners able to exploit these resources for speech facilitation to the same extent? It was found that children with cochlear implants showed different patterns of benefit from contextual information in speech perception compared with their normal-hearing peers. Previous studies have discussed the role of non-acoustic factors such as linguistic and cognitive capabilities to account for this discrepancy. Given the fact that the amount of acoustic information encoded and processed by the auditory nerves of listeners with cochlear implants differs from that of normal-hearing listeners, and even varies across individuals with cochlear implants, it is important to study the interaction of specific acoustic properties of the speech signal with contextual cues. This relationship has been mostly neglected in previous research. In this dissertation, we aimed to explore how different acoustic dimensions interact to affect listeners' abilities to combine top-down information with bottom-up information in speech perception, beyond the known effects of linguistic and cognitive capacities shown previously. Specifically, the present study investigated whether there were any distinct context effects based on the resolution of spectral versus slowly-varying temporal information in the perception of spectrally impoverished speech. To that end, two experiments were conducted. In both experiments, a noise-vocoding technique was adopted to generate spectrally-degraded speech to approximate the acoustic cues delivered to listeners with cochlear implants. The frequency resolution was manipulated by varying the number of frequency channels. The temporal resolution was manipulated by low-pass filtering of the amplitude envelope with varying low-pass cutoff frequencies. The stimuli were presented to normal-hearing native speakers of American English. Our results revealed a significant interaction effect between spectral, temporal, and contextual information in the perception of spectrally-degraded speech. This suggests that specific types and degrees of degradation of bottom-up information combine differently in how they allow listeners to utilize contextual resources. These findings emphasize the importance of taking the listener's specific auditory abilities into consideration while studying context effects. These results also introduce a novel perspective for designing interventions for listeners with cochlear implants or other auditory prostheses.
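
    A minimal noise-vocoder sketch in the spirit of the manipulation described above: the signal is split into a variable number of frequency channels, each channel's amplitude envelope is low-pass filtered at a variable cutoff, and the envelopes then modulate band-limited noise carriers. The band edges, filter orders and normalization are illustrative choices, not the dissertation's exact parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(x, fs, n_channels=8, env_cutoff_hz=50.0, fmin=80.0, fmax=6000.0):
        """Spectrally degrade speech: n_channels sets the spectral resolution,
        env_cutoff_hz sets the resolution of the slow temporal envelope."""
        edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced band edges
        env_lp = butter(4, env_cutoff_hz, btype="low", fs=fs, output="sos")
        rng = np.random.default_rng(0)
        out = np.zeros_like(x, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            env = sosfiltfilt(env_lp, np.abs(hilbert(sosfiltfilt(band, x))))
            carrier = sosfiltfilt(band, rng.standard_normal(len(x)))
            out += np.clip(env, 0.0, None) * carrier
        return out / (np.max(np.abs(out)) + 1e-12)
    ```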

  5. Applying acoustic telemetry to understand contaminant exposure and bioaccumulation patterns in mobile fishes.

    PubMed

    Taylor, Matthew D; van der Meulen, Dylan E; Brodie, Stephanie; Cadiou, Gwenaël; Knott, Nathan A

    2018-06-01

    Contamination in urbanised estuaries presents a risk to human health, and to the viability of populations of exploited species. Assessing animal movements in relation to contaminated areas may help to explain patterns in bioaccumulation, and assist in the effective management of health risks associated with consumption of exploited species. Using polychlorinated dibenzodioxin and polychlorinated dibenzofuran (PCDD/Fs) contamination in Sydney Harbour estuary as a case study, we present a study that links movement patterns resolved using acoustic telemetry to the accumulation of contaminants in mobile fish on a multi-species basis. Fifty-four individuals across six exploited species (Sea Mullet Mugil cephalus; Luderick Girella tricuspidata; Yellowfin Bream Acanthopagrus australis; Silver Trevally Pseudocaranx georgianus; Mulloway Argyrosomus japonicus; Yellowtail Kingfish Seriola lalandi) were tagged with acoustic transmitters, and their movements tracked for up to 3 years. There was substantial inter-specific variation in fish distribution along the estuary. The proportion of distribution that overlapped with contaminated areas explained 84-98% of the inter-specific variation in lipid-standardised biota PCDD/F concentration. There was some seasonal variation in distribution along the estuary, but movement patterns indicated that Sea Mullet, Yellowfin Bream, Silver Trevally, and Mulloway were likely to be exposed to contaminated areas during the period of gonadal maturation. Acoustic telemetry allows examination of spatial and temporal patterns in exposure to contamination. When used alongside biota sampling and testing, this offers a powerful approach to assess exposure, bioaccumulation, and potential risks faced by different species, as well as human health risks associated with their consumption. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  6. Minke whale song, spacing, and acoustic communication on the Great Barrier Reef, Australia

    NASA Astrophysics Data System (ADS)

    Gedamke, Jason

    An inquisitive population of minke whales (Balaenoptera acutorostrata) that concentrates on the Great Barrier Reef during its suspected breeding season offered a unique opportunity to conduct a multi-faceted study of a little-known Balaenopteran species' acoustic behavior. Chapter one investigates whether the minke whale is the source of an unusual, complex, and stereotyped recorded sound, the "star-wars" vocalization. A hydrophone array was towed from a vessel to record sounds from circling whales for subsequent localization of sound sources. These acoustic locations were matched with shipboard and in-water observations of the minke whale, demonstrating that the minke whale was the source of this unusual sound. Spectral and temporal features of this sound and the source levels at which it is produced are described. The repetitive "star-wars" vocalization appears similar to the songs of other whale species and has characteristics consistent with reproductive advertisement displays. Chapter two investigates whether song (i.e. the "star-wars" vocalization) has a spacing function, through passive monitoring of singer spatial patterns with a moored five-sonobuoy array. Active song playback experiments to singers were also conducted to further test song function. Through a nearest-neighbor analysis and animated tracks of singer movements, this study demonstrated that singers naturally maintain spatial separation from one another. In response to active song playbacks, singers generally moved away and repeated song more quickly, suggesting that song repetition interval may help regulate spatial interaction and singer separation. These results further indicate that the Great Barrier Reef may be an important reproductive habitat for this species. Chapter three investigates whether song is part of a potentially graded repertoire of acoustic signals. Utilizing both vessel-based recordings and remote recordings from the sonobuoy array, temporal and spectral features, source levels, and associated contextual data of recorded sounds were analyzed. Two categories of sound are described here: (1) patterned song, which was regularly repeated in one of three patterns: slow, fast, and rapid-clustered repetition, and (2) non-patterned "social" sounds recorded from gregarious assemblages of whales. These discrete acoustic signals may comprise a graded system of communication (slow/fast song → rapid-clustered song → social sounds) that is related to the spacing between whales.

  7. Diversity of fish sound types in the Pearl River Estuary, China

    PubMed Central

    Wang, Zhi-Tao; Nowacek, Douglas P.; Akamatsu, Tomonari; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang

    2017-01-01

    Background Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse-peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger’s croaker (J. belangerii). Discussion Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed. PMID:29085746
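
    The pulse-level measurements reported above (pulse duration, peak frequency, inter-pulse-peak interval) can be approximated from a recorded call with simple envelope peak picking, as in the sketch below; the sampling rate, amplitude threshold and minimum peak spacing are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    from scipy.signal import hilbert, find_peaks

    def pulse_measures(call, fs=48000, min_ippi_ms=5.0):
        """Locate pulse peaks in the amplitude envelope and return the inter-pulse-peak
        intervals (IPPI, in ms) and the call's overall peak frequency (in Hz)."""
        env = np.abs(hilbert(call))
        peaks, _ = find_peaks(env, height=0.3 * env.max(),
                              distance=int(min_ippi_ms * 1e-3 * fs))
        ippi_ms = np.diff(peaks) / fs * 1e3
        spectrum = np.abs(np.fft.rfft(call))
        peak_freq = np.fft.rfftfreq(len(call), 1.0 / fs)[np.argmax(spectrum)]
        return ippi_ms, peak_freq
    ```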

  8. Diversity of fish sound types in the Pearl River Estuary, China.

    PubMed

    Wang, Zhi-Tao; Nowacek, Douglas P; Akamatsu, Tomonari; Wang, Ke-Xiong; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang; Wang, Ding

    2017-01-01

    Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse-peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed.

  9. It's about time: Presentation in honor of Ira Hirsh

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    Over his long and illustrious career, Ira Hirsh has returned time and time again to his interest in the temporal aspects of pattern perception. Although Hirsh has studied and published articles and books pertaining to many aspects of the auditory system, such as sound conduction in the ear, cochlear mechanics, masking, auditory localization, psychoacoustic behavior in animals, speech perception, medical and audiological applications, coupling between psychophysics and physiology, and ecological acoustics, it is his work on the auditory timing of simple and complex rhythmic patterns, the backbone of speech and music, that lies at the heart of his more recent research. Here, we will focus on several aspects of temporal processing of simple and complex signals, both within and across sensory systems. Data will be reviewed on temporal order judgments of simple tones, and on simultaneity judgments and intelligibility of unimodal and bimodal complex stimuli where stimulus components are presented either synchronously or asynchronously. Differences in the symmetry and shape of "temporal windows" derived from these data sets will be highlighted.

  10. Cicadas impact bird communication in a noisy tropical rainforest

    PubMed Central

    Hall, Robert; Ray, William; Beck, Angela; Zook, James

    2015-01-01

    Many animals communicate through acoustic signaling, and “acoustic space” may be viewed as a limited resource that organisms compete for. If acoustic signals overlap, the information in them is masked, so there should be selection toward strategies that reduce signal overlap. The extent to which animals are able to partition acoustic space in acoustically diverse habitats such as tropical forests is poorly known. Here, we demonstrate that a single cicada species plays a major role in the frequency and timing of acoustic communication in a neotropical wet forest bird community. Using an automated acoustic monitor, we found that cicadas vary the timing of their signals throughout the day and that the frequency range and timing of bird vocalizations closely track these signals. Birds significantly avoid temporal overlap with cicadas by reducing and often shutting down vocalizations at the onset of cicada signals that utilize the same frequency range. When birds do vocalize at the same time as cicadas, the vocalizations primarily occur at nonoverlapping frequencies with cicada signals. Our results greatly improve our understanding of the community dynamics of acoustic signaling and reveal how patterns in biotic noise shape the frequency and timing of bird vocalizations in tropical forests. PMID:26023277

  11. Hits to the left, flops to the right: different emotions during listening to music are reflected in cortical lateralisation patterns.

    PubMed

    Altenmüller, Eckart; Schürmann, Kristian; Lim, Vanessa K; Parlitz, Dietrich

    2002-01-01

    In order to investigate the neurobiological mechanisms accompanying emotional valence judgements during listening to complex auditory stimuli, cortical direct current (dc)-electroencephalography (EEG) activation patterns were recorded from 16 right-handed students. Students listened to 160 short sequences taken from the repertoires of jazz, rock-pop, classical music and environmental sounds (each n=40). The emotional valence of the perceived stimuli was rated on a 5-step scale after each sequence. Brain activation patterns during listening revealed widespread bilateral fronto-temporal activation, but a highly significant lateralisation effect: positive emotional attributions were accompanied by an increase in left temporal activation, negative attributions by a more bilateral pattern with a preponderance of the right fronto-temporal cortex. Female participants demonstrated greater valence-related differences than males. No differences related to the four stimulus categories could be detected, suggesting that the auditory brain activation patterns were determined more by the affective emotional valence of the stimuli than by differences in their acoustical "fine" structure. The results are consistent with a model of hemispheric specialisation concerning perceived positive or negative emotions proposed by Heilman [Journal of Neuropsychiatry and Clinical Neuroscience 9 (1997) 439].

  12. Acoustic detection of Oryctes rhinoceros (Coleoptera: Scarabaeidae: Dynastinae) and Nasutitermes luzonicus (Isoptera: Termitidae) in palm trees in urban Guam.

    PubMed

    Mankin, R W; Moore, A

    2010-08-01

    Adult and larval Oryctes rhinoceros (L.) (Coleoptera: Scarabaeidae: Dynastinae) were acoustically detected in live and dead palm trees and logs in recently invaded areas of Guam, along with Nasutitermes luzonicus Oshima (Isoptera: Termitidae) and other small, sound-producing invertebrates. The low-frequency, long-duration sound-impulse trains produced by large, active O. rhinoceros and the higher frequency, shorter impulse trains produced by feeding N. luzonicus had distinctive spectral and temporal patterns that facilitated their identification and discrimination from background noise, as well as from roaches, earwigs, and other small sound-producing organisms present in the trees and logs. The distinctiveness of the O. rhinoceros sounds enables the current usage of acoustic detection as a tactic in Guam's ongoing O. rhinoceros eradication program.

  13. Use of acoustic technology to monitor the time course of Rhynchophorus ferrugineus larval mortality in date palms after treatments with Beauveria bassiana

    USDA-ARS?s Scientific Manuscript database

    Spectral and temporal patterns of insect sound impulses were monitored daily for 23-d periods in 8, 10, or 5 small date palm trees containing larvae dipped in 0 (control), 10^4 (low), or 10^8 (high) conidia/ml doses of entomopathogenic fungus, Beauveria bassiana (Bb 203), respectively. Each tree conta...

  14. The Importance of Ambient Sound Level to Characterise Anuran Habitat

    PubMed Central

    Goutte, Sandra; Dubois, Alain; Legendre, Frédéric

    2013-01-01

    Habitat characterisation is a pivotal step of any animal ecology study. The choice of variables used to describe habitats is crucial and needs to be relevant to the ecology and behaviour of the species, in order to reflect biologically meaningful distribution patterns. In many species, acoustic communication is critical to individuals' interactions, and it is expected that ambient acoustic conditions affect their local distribution. Yet, classic animal ecology rarely integrates an acoustic dimension in habitat descriptions. Here we show that ambient sound pressure level (SPL) is a strong predictor of calling site selection in acoustically active frog species. In comparison to six other habitat-related variables (i.e. air and water temperature, depth, width and slope of the stream, and substrate), SPL had the most important explanatory power in microhabitat selection for the 34 sampled species. Ambient noise was particularly useful in differentiating two stream-associated guilds: species dwelling in torrents and those in calmer streams. Guild definitions were strongly supported by SPL, whereas slope, which is commonly used to characterise stream-associated habitat, had weak explanatory power. Moreover, slope measures are not standardized across studies and are difficult to assess at small scales. We argue that including an acoustic descriptor will improve habitat-species analyses for many acoustically active taxa. SPL integrates habitat topology and temporal information (such as weather and time of day) and is a simple and precise measure. We suggest that habitat description in animal ecology should include an acoustic measure such as noise level, because it may explain previously misunderstood distribution patterns. PMID:24205070
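
    Ambient sound pressure level can be summarized from a calibrated recording as an RMS level in decibels, as in the short sketch below. The 20 µPa reference is the standard for airborne sound; the calibration constant converting recorder units to pascals is a placeholder.

    ```python
    import numpy as np

    P_REF_PA = 20e-6  # standard reference pressure for airborne sound, in pascals

    def ambient_spl_db(samples, calibration_pa_per_unit=1.0):
        """RMS sound pressure level in dB re 20 µPa.
        calibration_pa_per_unit converts recorder units to pascals (placeholder value)."""
        pressure = np.asarray(samples, dtype=float) * calibration_pa_per_unit
        rms = np.sqrt(np.mean(pressure ** 2))
        return 20.0 * np.log10(rms / P_REF_PA)
    ```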

  15. Seasonal and diel patterns in cetacean use and foraging at a potential marine renewable energy site.

    PubMed

    Nuuttila, Hanna K; Bertelli, Chiara M; Mendzil, Anouska; Dearle, Nessa

    2018-04-01

    Marine renewable energy (MRE) developments often coincide with sites frequented by small cetaceans. To understand habitat use and assess potential impacts from development, echolocation clicks were recorded with acoustic click loggers (C-PODs) in Swansea Bay, Wales (UK). Generalized Additive Models (GAMs) were applied to assess the effects of covariates including month, hour, tidal range and temperature. Analysis of inter-click intervals allowed the identification of potential foraging events as well as patterns of presence and absence. Data revealed year-round presence of porpoise, with distinct seasonal and diel patterns. Occasional acoustic encounters of dolphins were also recorded. This study provides further evidence of the need for assessing temporal trends in cetacean presence and habitat use in areas considered for development. These findings could assist MRE companies to monitor and mitigate disturbance from construction, operation and decommissioning activities by avoiding times when porpoise presence and foraging activity are highest in the area. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Emotions in Speech

    NASA Astrophysics Data System (ADS)

    Sobin, Christina Ann

    This study was undertaken to examine the acoustical encoding of fear, anger, sadness and joy in voice. Twenty emotion-induction stories were read by 31 subjects who produced a total of 620 emotion-laden standard sentences. Subjects rated their emotions, and the acoustics of each sentence were analyzed. Twelve judges were employed to rate the emotion of each sentence, and their ratings were used to select "prototype" sentences for each emotion. The acoustical characteristics distinguishing each emotion were calculated. Rate, amount of time spent talking and pausing, and number of gaps, in addition to amplitude, frequency and their variances, uniquely distinguished among fear, anger, sadness and joy. Results of past studies were confirmed, and additional differentiation among the emotions was achieved. Judges' confusion matrices were analyzed in order to assess the relationship of detectability and discriminability to acoustic characteristics. It was found that the detectability and/or discriminability of fear, anger, sadness and joy, to varying degrees, paralleled the amount of acoustical overlap among them. A further test of the acoustic findings suggested that mean values of acoustic variables may accurately describe the acoustic cues to sadness and joy, but perhaps not to fear and anger. Thus, additional acoustic parameters, such as the temporal pattern of the acoustic measures, may inform raters. It is suggested that time-based profiles of amplitude and frequency may offer a plausible addition to future research endeavors.

  17. Intraspecific scaling in frog calls: the interplay of temperature, body size and metabolic condition.

    PubMed

    Ziegler, Lucia; Arim, Matías; Bozinovic, Francisco

    2016-07-01

    Understanding physiological and environmental determinants of strategies of reproductive allocation is a pivotal aim in biology. Because of their high metabolic cost, properties of sexual acoustic signals may correlate with body size, temperature, and an individual's energetic state. A quantitative theory of acoustic communication, based on the metabolic scaling with temperature and mass, was recently proposed, adding to the well-reported empirical patterns. It provides quantitative predictions for frequencies, call rate, and durations. Here, we analysed the mass, temperature, and body condition scaling of spectral and temporal attributes of the advertisement call of the treefrog Hypsiboas pulchellus. Mass dependence of call frequency followed metabolic expectations (f ~ M^(-0.25), where f is frequency and M is mass), although non-metabolic allometry could also account for the observed pattern. Temporal variables scaled inversely with mass, contradicting metabolic expectations (d ~ M^(0.25), where d is duration) and instead supporting the empirical patterns reported to date. Temperature was positively associated with call rate and negatively with temporal variables, which is congruent with metabolic predictions. We found no significant association between temperature and frequencies, adding to the bulk of empirical evidence. Finally, a result of particular relevance was that body condition consistently determined call characteristics, in interaction with temperature or mass. Our intraspecific study highlights that even if the proximate determinants of call variability are rather well understood, the mechanisms through which they operate are proving to be more complex than previously thought. The determinants of call characteristics emerge as a key topic of research in behavioural and physiological biology, with several clear points under debate which need to be analysed on theoretical and empirical grounds.
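
    As a worked illustration of the metabolic expectation quoted above (our own numerical example, not the paper's), the predicted effect of doubling body mass on dominant call frequency under f ~ M^(-1/4) is:

    ```latex
    f \propto M^{-1/4}
    \quad\Longrightarrow\quad
    \frac{f_{2M}}{f_{M}} = \left(\frac{2M}{M}\right)^{-1/4} = 2^{-1/4} \approx 0.84
    ```

    That is, all else being equal, a male twice as heavy would be expected to call at a dominant frequency roughly 16% lower.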

  18. In situ measurement of geoacoustic sediment properties: An example from the ONR Mine Burial Program, Martha's Vineyard Coastal Observatory

    NASA Astrophysics Data System (ADS)

    Kraft, Barbara J.; Mayer, Larry A.; Simpkin, Peter G.; Goff, John A.

    2003-04-01

    In support of the Office of Naval Research's Mine Burial Program (MBP), in situ acoustic and resistivity measurements were obtained using ISSAP, a device developed and built by the Center for Coastal and Ocean Mapping. One of the field areas selected for the MBP experiments is the WHOI coastal observatory based off Martha's Vineyard. This area is an active natural laboratory that will provide an ideal environment for testing and observing mine migration and burial patterns due to temporal seabed processes. Seawater and surficial sediment measurements of compressional wave sound speed, attenuation, and resistivity were obtained at 87 stations. The ISSAP instrument used four transducer probes arranged in a square pattern, giving acoustic path lengths of 30 and 20 cm, with a maximum insertion depth of 15 cm. Transducers operated at a frequency of 65 kHz. The received acoustic signal was sampled at a frequency of 5 MHz. A measurement cycle was completed by transmitting 10 pulses on each of the five paths and repeating three times, for a total of 150 measurements. Resistivity measurements were obtained from two probes mounted on ISSAP following completion of the acoustic measurements. [Research supported by ONR Grant Nos. N00014-00-1-0821 and N00014-02-1-0138.]

  19. Pen-chant: Acoustic emissions of handwriting and drawing

    NASA Astrophysics Data System (ADS)

    Seniuk, Andrew G.

    The sounds generated by a writing instrument ('pen-chant') provide a rich and underutilized source of information for pattern recognition. We examine the feasibility of recognition of handwritten cursive text, exclusively through an analysis of acoustic emissions. We design and implement a family of recognizers using a template matching approach, with templates and similarity measures derived variously from: the smoothed amplitude signal with fixed resolution, a discrete sequence of magnitudes obtained from peaks in the smoothed amplitude signal, and an ordered tree obtained from a scale space signal representation. Test results are presented for recognition of isolated lowercase cursive characters and for whole words. We also present qualitative results for recognizing gestures such as circling, scratch-out, check-marks, and hatching. Our first set of results, using samples provided by the author, yields recognition rates of over 70% (alphabet) and 90% (26 words), with a confidence of +/-8%, based solely on acoustic emissions. Our second set of results uses data gathered from nine writers. These results demonstrate that acoustic emissions are a rich source of information, usable, on their own or in conjunction with image-based features, to solve pattern recognition problems. In future work, this approach can be applied to writer identification, handwriting and gesture-based computer input technology, emotion recognition, and temporal analysis of sketches.

  20. Patterns of acoustic variation in Cicada barbara Stål (Hemiptera, Cicadoidea) from the Iberian Peninsula and Morocco.

    PubMed

    Pinto-Juma, G A; Seabra, S G; Quartau, J A

    2008-02-01

    Field recordings of the calling song and of an amplitude modulated signal produced by males of Cicada barbara from North Africa and the Iberian Peninsula were analysed in order to assess the geographical acoustic variation and the potential usefulness of acoustic data in the discrimination of subspecies and populations. Sound recordings were digitized and the frequency and temporal properties of the calls of each cicada were analysed. In all regions studied, peak frequency, quartiles 25, 50 and 75% and syllable rate showed low coefficients of variation suggesting inherent static properties. All frequency variables were correlated with the latitude, decreasing from south to north. In addition, most acoustic variables of the calling song showed significant differences between regions, and PCA and DFA analyses supported a partitioning within this species between Iberian Peninsula+Ceuta and Morocco, corroborating mtDNA data on the same species. Therefore, the subspecific division of C. barbara into C. barbara barbara from Morocco and C. barbara lusitanica from Portugal, Spain and Ceuta finds support from the present acoustic analyses, a result which is also reinforced by molecular markers.

  1. Propagation modeling for sperm whale acoustic clicks in the northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Sidorovskaia, Natalia A.; Udovydchenkov, Ilya A.; Rypina, Irina I.; Ioup, George E.; Ioup, Juliette W.; Caruthers, Jerald W.; Newcomb, Joal; Fisher, Robert

    2004-05-01

    Simulations of acoustic broadband (500-6000 Hz) pulse propagation in the northern Gulf of Mexico, based on environmental data collected as a part of the Littoral Acoustic Demonstration Center (LADC) experiments in the summers of 2001 and 2002, are presented. The results of the modeling support the hypothesis that consistent spectrogram interference patterns observed in the LADC marine mammal phonation data cannot be explained by propagation effects for temporal analysis windows corresponding to the duration of an animal click, and may be due to the uniqueness of an individual animal's phonation apparatus. The utilization of simulation data for the development of an animal tracking algorithm based on the acoustic recordings of a single bottom-moored hydrophone is discussed. The identification of the bottom- and surface-reflected clicks from the same animal is attempted. The critical ranges for listening to a deep-water foraging animal with a surface receiving system are estimated. [Research supported by ONR.]

  2. Study of acoustic correlates associate with emotional speech

    NASA Astrophysics Data System (ADS)

    Yildirim, Serdar; Lee, Sungbok; Lee, Chul Min; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Ebrahim; Narayanan, Shrikanth

    2004-10-01

    This study investigates the acoustic characteristics of four different emotions expressed in speech. The aim is to obtain detailed acoustic knowledge on how a speech signal is modulated by changes from a neutral to a certain emotional state. Such knowledge is necessary for automatic emotion recognition and classification and for emotional speech synthesis. Speech data obtained from two semi-professional actresses are analyzed and compared. Each subject produces 211 sentences with four different emotions: neutral, sad, angry, happy. We analyze changes in temporal and acoustic parameters such as magnitude and variability of segmental duration, fundamental frequency and the first three formant frequencies as a function of emotion. Acoustic differences among the emotions are also explored with mutual information computation, multidimensional scaling and acoustic likelihood comparison with normal speech. Results indicate that speech associated with anger and happiness is characterized by longer duration, shorter interword silence, and higher pitch and rms energy with wider ranges. Sadness is distinguished from other emotions by lower rms energy and longer interword silence. Interestingly, the difference in formant pattern between [happiness/anger] and [neutral/sadness] is better reflected in back vowels such as /a/ (father) than in front vowels. Detailed results on intra- and interspeaker variability will be reported.

  3. Reversing pathologically increased EEG power by acoustic coordinated reset neuromodulation

    PubMed Central

    Adamchic, Ilya; Toth, Timea; Hauptmann, Christian; Tass, Peter Alexander

    2014-01-01

    Acoustic Coordinated Reset (CR) neuromodulation is a patterned stimulation with tones adjusted to the patient's dominant tinnitus frequency, which aims at desynchronizing pathological neuronal synchronization. In a recent proof-of-concept study, CR therapy, delivered 4–6 h/day for more than 12 weeks, induced a significant clinical improvement along with a significant long-lasting decrease of pathological oscillatory power in the low-frequency as well as the γ band, and an increase of α power in a network of tinnitus-related brain areas. As yet, it remains unclear whether CR shifts brain activity toward physiological levels or whether it induces clinically beneficial, but nonetheless abnormal, electroencephalographic (EEG) patterns, for example excessively decreased δ and/or γ. Here, we compared the patients' spontaneous EEG data at baseline as well as after 12 weeks of CR therapy with the spontaneous EEG of healthy controls by means of Brain Electrical Source Analysis source montage and standardized low-resolution brain electromagnetic tomography techniques. The relationship between changes in EEG power and clinical scores was investigated using a partial least squares approach. In this way, we show that acoustic CR neuromodulation leads to a normalization of the oscillatory power in the tinnitus-related network of brain areas, most prominently in temporal regions. A positive association was found between the changes in tinnitus severity and the normalization of δ and γ power in the temporal, parietal, and cingulate cortical regions. Our findings demonstrate a widespread CR-induced normalization of EEG power, significantly associated with a reduction of tinnitus severity. PMID:23907785

  4. Conveying Movement in Music and Prosody

    PubMed Central

    Hedger, Stephen C.; Nusbaum, Howard C.; Hoeckner, Berthold

    2013-01-01

    We investigated whether acoustic variation of musical properties can analogically convey descriptive information about an object. Specifically, we tested whether information from the temporal structure in music interacts with perception of a visual image to form an analog perceptual representation as a natural part of music perception. In Experiment 1, listeners heard music with an accelerating or decelerating temporal pattern, and then saw a picture of a still or moving object and decided whether it was animate or inanimate – a task unrelated to the patterning of the music. Object classification was faster when musical motion matched visually depicted motion. In Experiment 2, participants heard spoken sentences that were accompanied by accelerating or decelerating music, and then were presented with a picture of a still or moving object. When motion information in the music matched motion information in the picture, participants were similarly faster to respond. Fast and slow temporal patterns without acceleration and deceleration, however, did not make participants faster when they saw a picture depicting congruent motion information (Experiment 3), suggesting that understanding temporal structure information in music may depend on specific metaphors about motion in music. Taken together, these results suggest that visuo-spatial referential information can be analogically conveyed and represented by music and can be integrated with speech or influence the understanding of speech. PMID:24146920

  5. Spectrotemporal Modulation Detection and Speech Perception by Cochlear Implant Users

    PubMed Central

    Won, Jong Ho; Moon, Il Joon; Jin, Sunhwa; Park, Heesung; Woo, Jihwan; Cho, Yang-Sun; Chung, Won-Ho; Hong, Sung Hwa

    2015-01-01

    Spectrotemporal modulation (STM) detection performance was examined for cochlear implant (CI) users. The test involved discriminating between an unmodulated steady noise and a modulated stimulus. The modulated stimulus presents frequency modulation patterns that change in frequency over time. In order to examine STM detection performance for different modulation conditions, two different temporal modulation rates (5 and 10 Hz) and three different spectral modulation densities (0.5, 1.0, and 2.0 cycles/octave) were employed, producing a total of 6 different STM stimulus conditions. In order to explore how electric hearing constrains STM sensitivity for CI users differently from acoustic hearing, normal-hearing (NH) and hearing-impaired (HI) listeners were also tested on the same tasks. STM detection performance was best in NH subjects, followed by HI subjects. On average, CI subjects showed the poorest performance, but some CI subjects showed high levels of STM detection performance that were comparable to acoustic hearing. Significant correlations were found between STM detection performance and speech identification performance in quiet and in noise. In order to understand the relative contribution of spectral and temporal modulation cues to speech perception abilities for CI users, spectral and temporal modulation detection was performed separately and related to STM detection and speech perception performance. The results suggest that slow spectral modulation rather than slow temporal modulation may be important for determining speech perception capabilities for CI users. Lastly, test–retest reliability for STM detection was good, with no evidence of learning. The present study demonstrates that STM detection may be a useful tool to evaluate the ability of CI sound processing strategies to deliver clinically pertinent acoustic modulation information. PMID:26485715
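
    Spectrotemporal modulation stimuli of the kind described above are commonly built as "moving ripples": a broadband carrier whose spectral envelope is sinusoidally modulated along the log-frequency axis and drifts over time at a given rate. The sketch below generates such a ripple for a chosen density (cycles/octave) and rate (Hz); the carrier count, bandwidth, modulation depth and sampling rate are illustrative assumptions.

    ```python
    import numpy as np

    def moving_ripple(duration_s=1.0, fs=44100, density_c_per_oct=1.0, rate_hz=5.0,
                      f0=250.0, n_octaves=5, n_carriers=200, depth=0.9):
        """Moving-ripple stimulus: many log-spaced tone carriers whose amplitudes follow
        a sinusoidal spectrotemporal envelope (density in cycles/octave, rate in Hz)."""
        t = np.arange(int(duration_s * fs)) / fs
        rng = np.random.default_rng(0)
        octs = np.linspace(0.0, n_octaves, n_carriers)  # carrier positions in octaves above f0
        sig = np.zeros_like(t)
        for o in octs:
            f = f0 * 2.0 ** o
            env = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t + density_c_per_oct * o))
            sig += env * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
        return sig / np.max(np.abs(sig))

    # one of the six STM conditions: 2.0 cycles/octave at a 10-Hz modulation rate
    ripple = moving_ripple(density_c_per_oct=2.0, rate_hz=10.0)
    ```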

  6. Can a model of overlapping gestures account for scanning speech patterns?

    PubMed

    Tjaden, K

    1999-06-01

    A simple acoustic model of overlapping, sliding gestures was used to evaluate whether coproduction was reduced for neurologic speakers with scanning speech patterns. F2 onset frequency was used as an acoustic measure of coproduction or gesture overlap. The effects of speaking rate (habitual versus fast) and utterance position (initial versus medial) on F2 frequency, and presumably gesture overlap, were examined. Regression analyses also were used to evaluate the extent to which across-repetition temporal variability in F2 trajectories could be explained as variation in coproduction for consonants and vowels. The lower F2 onset frequencies for disordered speakers suggested that gesture overlap was reduced for neurologic individuals with scanning speech. Speaking rate change did not influence F2 onset frequencies, and presumably gesture overlap, for healthy or disordered speakers. F2 onset frequency differences for utterance-initial and -medial repetitions were interpreted to suggest reduced coproduction for the utterance-initial position. The utterance-position effects on F2 onset frequency, however, likely were complicated by position-related differences in articulatory scaling. The results of the regression analysis indicated that gesture sliding accounts, in part, for temporal variability in F2 trajectories. Taken together, the results of this study provide support for the idea that speech production theory for healthy talkers helps to account for disordered speech production.

  7. Effect of acoustic similarity on short-term auditory memory in the monkey

    PubMed Central

    Scott, Brian H.; Mishkin, Mortimer; Yin, Pingbo

    2013-01-01

    Recent evidence suggests that the monkey’s short-term memory in audition depends on a passively retained sensory trace as opposed to a trace reactivated from long-term memory for use in working memory. Reliance on a passive sensory trace could render memory particularly susceptible to confusion between sounds that are similar in some acoustic dimension. If so, then in delayed matching-to-sample, the monkey’s performance should be predicted by the similarity in the salient acoustic dimension between the sample and subsequent test stimulus, even at very short delays. To test this prediction and isolate the acoustic features relevant to short-term memory, we examined the pattern of errors made by two rhesus monkeys performing a serial, auditory delayed match-to-sample task with interstimulus intervals of 1 s. The analysis revealed that false-alarm errors did indeed result from similarity-based confusion between the sample and the subsequent nonmatch stimuli. Manipulation of the stimuli showed that removal of spectral cues was more disruptive to matching behavior than removal of temporal cues. In addition, the effect of acoustic similarity on false-alarm response was stronger at the first nonmatch stimulus than at the second one. This pattern of errors would be expected if the first nonmatch stimulus overwrote the sample’s trace, and suggests that the passively retained trace is not only vulnerable to similarity-based confusion but is also highly susceptible to overwriting. PMID:23376550

  8. Effect of acoustic similarity on short-term auditory memory in the monkey.

    PubMed

    Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo

    2013-04-01

    Recent evidence suggests that the monkey's short-term memory in audition depends on a passively retained sensory trace as opposed to a trace reactivated from long-term memory for use in working memory. Reliance on a passive sensory trace could render memory particularly susceptible to confusion between sounds that are similar in some acoustic dimension. If so, then in delayed matching-to-sample, the monkey's performance should be predicted by the similarity in the salient acoustic dimension between the sample and subsequent test stimulus, even at very short delays. To test this prediction and isolate the acoustic features relevant to short-term memory, we examined the pattern of errors made by two rhesus monkeys performing a serial, auditory delayed match-to-sample task with interstimulus intervals of 1 s. The analysis revealed that false-alarm errors did indeed result from similarity-based confusion between the sample and the subsequent nonmatch stimuli. Manipulation of the stimuli showed that removal of spectral cues was more disruptive to matching behavior than removal of temporal cues. In addition, the effect of acoustic similarity on false-alarm response was stronger at the first nonmatch stimulus than at the second one. This pattern of errors would be expected if the first nonmatch stimulus overwrote the sample's trace, and suggests that the passively retained trace is not only vulnerable to similarity-based confusion but is also highly susceptible to overwriting. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Impaired extraction of speech rhythm from temporal modulation patterns in speech in developmental dyslexia

    PubMed Central

    Leong, Victoria; Goswami, Usha

    2014-01-01

    Dyslexia is associated with impaired neural representation of the sound structure of words (phonology). The “phonological deficit” in dyslexia may arise in part from impaired speech rhythm perception, thought to depend on neural oscillatory phase-locking to slow amplitude modulation (AM) patterns in the speech envelope. Speech contains AM patterns at multiple temporal rates, and these different AM rates are associated with phonological units of different grain sizes, e.g., related to stress, syllables or phonemes. Here, we assess the ability of adults with dyslexia to use speech AMs to identify rhythm patterns (RPs). We study 3 important temporal rates: “Stress” (~2 Hz), “Syllable” (~4 Hz) and “Sub-beat” (reduced syllables, ~14 Hz). 21 dyslexics and 21 controls listened to nursery rhyme sentences that had been tone-vocoded using either single AM rates from the speech envelope (Stress only, Syllable only, Sub-beat only) or pairs of AM rates (Stress + Syllable, Syllable + Sub-beat). They were asked to use the acoustic rhythm of the stimulus to identify the original nursery rhyme sentence. The data showed that dyslexics were significantly poorer at detecting rhythm compared to controls when they had to utilize multi-rate temporal information from pairs of AMs (Stress + Syllable or Syllable + Sub-beat). These data suggest that dyslexia is associated with a reduced ability to utilize AMs <20 Hz for rhythm recognition. This perceptual deficit in utilizing AM patterns in speech could be underpinned by less efficient neuronal phase alignment and cross-frequency neuronal oscillatory synchronization in dyslexia. Dyslexics' perceptual difficulties in capturing the full spectro-temporal complexity of speech over multiple timescales could contribute to the development of impaired phonological representations for words, the cognitive hallmark of dyslexia across languages. PMID:24605099
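
    The AM rates discussed above can be isolated from a speech recording by extracting the wideband amplitude envelope and band-pass filtering it around the stress (~2 Hz), syllable (~4 Hz) and sub-beat (~14 Hz) rates. The band edges and filter order in the sketch below are illustrative choices rather than the study's exact parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    # assumed band edges (Hz) around the three AM rates discussed in the abstract
    AM_BANDS_HZ = {"stress": (1.0, 3.0), "syllable": (3.0, 7.0), "sub_beat": (10.0, 20.0)}

    def am_components(speech, fs):
        """Return the speech amplitude envelope filtered into the three AM-rate bands."""
        envelope = np.abs(hilbert(speech))
        components = {}
        for name, (lo, hi) in AM_BANDS_HZ.items():
            sos = butter(3, [lo, hi], btype="band", fs=fs, output="sos")
            components[name] = sosfiltfilt(sos, envelope)
        return components
    ```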

  10. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound.

    PubMed

    Menze, Sebastian; Zitterbart, Daniel P; van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

    This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton.

  11. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound

    NASA Astrophysics Data System (ADS)

    Menze, Sebastian; Zitterbart, Daniel P.; van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

    This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton.

  12. Central pattern generator for vocalization: is there a vertebrate morphotype?

    PubMed

    Bass, Andrew H

    2014-10-01

    Animals that generate acoustic signals for social communication are faced with two essential tasks: generate a temporally precise signal and inform the auditory system about the occurrence of one's own sonic signal. Recent studies of sound producing fishes delineate a hindbrain network comprised of anatomically distinct compartments coding equally distinct neurophysiological properties that allow an organism to meet these behavioral demands. A set of neural characters comprising a vocal-sonic central pattern generator (CPG) morphotype is proposed for fishes and tetrapods that shares evolutionary developmental origins with pectoral appendage motor systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Central pattern generator for vocalization: Is there a vertebrate morphotype?

    PubMed Central

    Bass, Andrew H.

    2014-01-01

    Animals that generate acoustic signals for social communication are faced with two essential tasks: generate a temporally precise signal and inform the auditory system about the occurrence of one’s own sonic signal. Recent studies of sound producing fishes delineate a hindbrain network comprised of anatomically distinct compartments coding equally distinct neurophysiological properties that allow an organism to meet these behavioral demands. A set of neural characters comprising a vocal-sonic central pattern generator (CPG) morphotype is proposed for fishes and tetrapods that shares evolutionary developmental origins with pectoral appendage motor systems. PMID:25050813

  14. Multi-voxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    PubMed Central

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350

  15. Spatial and Temporal Variations in the Occurrence and Foraging Activity of Coastal Dolphins in Menai Bay, Zanzibar, Tanzania.

    PubMed

    Temple, Andrew J; Tregenza, Nick; Amir, Omar A; Jiddawi, Narriman; Berggren, Per

    2016-01-01

    Understanding temporal patterns in distribution, occurrence and behaviour is vital for the effective conservation of cetaceans. This study used cetacean click detectors (C-PODs) to investigate spatial and temporal variation in occurrence and foraging activity of the Indo-Pacific bottlenose (Tursiops aduncus) and Indian Ocean humpback (Sousa plumbea) dolphins resident in the Menai Bay Conservation Area (MBCA), Zanzibar, Tanzania. Occurrence was measured using detection positive minutes. Inter-click intervals were used to identify terminal buzz vocalisations, allowing for analysis of foraging activity. Data were analysed in relation to spatial (location) and temporal (monsoon season, diel phase and tidal phase) variables. Results showed significantly increased occurrence and foraging activity of dolphins in southern areas and during hours of darkness. Higher occurrence at night was not explained by diel variation in echolocation rate and so was considered representative of occurrence patterns. Both tidal phase and monsoon season influenced occurrence but results varied among sites, with no general patterns found. Foraging activity was greatest during hours of darkness, High water and Flood tidal phases. Comparisons of echolocation data among sites suggested differences in the broadband click spectra of MBCA dolphins, possibly indicative of species differences. These dolphin populations are threatened by unsustainable fisheries bycatch and tourism activities. The spatial and temporal patterns identified in this study have implications for future conservation and management actions with regards to these two threats. Further, the results indicate future potential for using passive acoustics to identify and monitor the occurrence of these two species in areas where they co-exist.
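
    The detection metrics named in this record can be summarised compactly. Below is a minimal Python sketch, not the authors' pipeline, of how click-detector output might be reduced to detection positive minutes and how inter-click intervals could flag candidate terminal buzzes; the 10 ms buzz threshold, column names and synthetic click times are illustrative assumptions only.

```python
# Sketch: detection positive minutes (DPM) and ICI-based buzz flagging.
# The 10 ms buzz threshold is an assumed value, not one from the study.
import numpy as np
import pandas as pd

def detection_positive_minutes(click_times_s):
    """Count minutes that contain at least one click detection."""
    minutes = np.unique((np.asarray(click_times_s) // 60).astype(int))
    return len(minutes)

def flag_buzz_clicks(click_times_s, ici_threshold_s=0.01):
    """Mark clicks whose preceding inter-click interval falls below the buzz threshold."""
    t = np.sort(np.asarray(click_times_s, dtype=float))
    ici = np.diff(t, prepend=np.nan)
    return pd.DataFrame({"time_s": t, "ici_s": ici, "buzz": ici < ici_threshold_s})

# Example with synthetic click times (s): a slow click train followed by a buzz.
clicks = np.concatenate([np.arange(0, 10, 0.5), 10 + np.cumsum(np.full(50, 0.005))])
print(detection_positive_minutes(clicks), flag_buzz_clicks(clicks)["buzz"].sum())
```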

  16. Characteristics of spectro-temporal modulation frequency selectivity in humans.

    PubMed

    Oetjen, Arne; Verhey, Jesko L

    2017-03-01

    There is increasing evidence that the auditory system shows frequency selectivity for spectro-temporal modulations. A recent study by the authors has shown spectro-temporal modulation masking patterns that were in agreement with the hypothesis of spectro-temporal modulation filters in the human auditory system [Oetjen and Verhey (2015). J. Acoust. Soc. Am. 137(2), 714-723]. In the present study, that experimental data and additional data were used to model this spectro-temporal frequency selectivity. The additional data were collected to investigate to what extent the spectro-temporal modulation-frequency selectivity results from a combination of a purely temporal amplitude-modulation filter and a purely spectral amplitude-modulation filter. In contrast to the previous study, thresholds were measured for masker and target modulations with opposite directions, i.e., an upward pointing target modulation and a downward pointing masker modulation. The comparison of this data set with previous corresponding data with the same direction of target and masker modulations indicates that a specific spectro-temporal modulation filter is required to simulate all aspects of spectro-temporal modulation frequency selectivity. A model using a modified Gabor filter with a purely temporal and a purely spectral filter predicts the spectro-temporal modulation masking data.
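
    A spectro-temporal modulation filter of the kind referred to here can be illustrated with a two-dimensional Gabor kernel applied to a spectrogram. The sketch below is a generic construction under assumed rate, scale and sampling parameters; it does not reproduce the modified Gabor filter fitted in the study, and which sign corresponds to "upward" versus "downward" ripples is left as an assumption.

```python
# Sketch: a generic 2D Gabor-type spectro-temporal modulation filter applied to a
# (time x log-frequency) spectrogram. All parameter values are illustrative.
import numpy as np
from scipy.signal import fftconvolve

def gabor_strf(rate_hz, scale_cyc_per_oct, t_step_s, f_step_oct,
               size=(33, 33), direction=+1):
    """2D Gabor kernel: 'rate' is the temporal, 'scale' the spectral modulation.
    direction = +1 or -1 selects one of the two opposite ripple orientations."""
    nt, nf = size
    t = (np.arange(nt) - nt // 2) * t_step_s
    f = (np.arange(nf) - nf // 2) * f_step_oct
    T, F = np.meshgrid(t, f, indexing="ij")
    envelope = np.exp(-(T / (2 * t.max())) ** 2 - (F / (2 * f.max())) ** 2)
    carrier = np.cos(2 * np.pi * (rate_hz * T + direction * scale_cyc_per_oct * F))
    return envelope * carrier

# Apply the filter to a toy spectrogram (rows: time frames, cols: log-frequency bins).
spec = np.random.rand(200, 64)
kernel = gabor_strf(rate_hz=4.0, scale_cyc_per_oct=1.0, t_step_s=0.01, f_step_oct=0.1)
filtered = fftconvolve(spec, kernel, mode="same")  # response of one modulation channel
```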

  17. Compressive Strength Estimation of Marble Specimens using Acoustic Emission Hits in Time and Natural Time Domains: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Hloupis, George; Stavrakas, Ilias; Vallianatos, Filippos; Triantis, Dimos

    2013-04-01

    The current study deals with preliminary results of characteristic patterns derived from acoustic emissions during compressional stress. Two loading cycles were applied to a specimen of 4 cm x 4 cm x 10 cm Dionysos marble while acoustic emissions (AE) were recorded using one acoustic sensor coupled at the expected direction of the main crack (at the center of the specimen). The resulting time series comprised the number of counts per AE hit under increasing and constant load. Processing took place in two domains: in the conventional time domain (t), using multiresolution wavelet analysis to study the temporal variation of the wavelet coefficients' standard deviation (SDEV) [1], and in the natural time domain (χ), using the variance (κ1) of the natural-time-transformed time series [2,3]. Results in both cases indicate that identification of the region where the increasing stress (σ) exceeds 40% of the ultimate compressional strength (σ*) is possible. More specifically, in the conventional time domain, the temporal evolution of SDEV presents a sharp change around σ* during the first loading cycle and below σ* during the second loading cycle. In the natural time domain, the κ1 value clearly oscillates around 0.07 at natural time indexes corresponding to σ* during the first loading cycle. Merging both results leads to the preliminary observation that the time when the compressional stress exceeds σ* can be identified. References [1] Telesca, L., Hloupis, G., Nikolintaga, I., Vallianatos, F., "Temporal patterns in southern Aegean seismicity revealed by the multiresolution wavelet analysis", Communications in Nonlinear Science and Numerical Simulation, vol. 12, issue 8, pp. 1418-1426, 2007. [2] P. A. Varotsos, N. V. Sarlis, and E. S. Skordas, "Natural Time Analysis: The New View of Time. Precursory Seismic Electric Signals, Earthquakes and other Complex Time-Series", Springer-Verlag, Berlin, Heidelberg, 2011. [3] N. V. Sarlis, P. A. Varotsos, and E. S. Skordas, "Flux avalanches in YBa2Cu3O7-x films and rice piles: natural time domain analysis", Physical Review B, 73, 054504, 2006. Acknowledgements This work was supported by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project entitled "Integrated understanding of Seismicity, using innovative Methodologies of Fracture mechanics along with Earthquake and non-extensive statistical physics - Application to the geodynamic system of the Hellenic Arc. SEISMO FEAR HELLARC".
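
    The natural time variance κ1 used above has a standard definition (refs. [2,3]): events are indexed by χ_k = k/N, weighted by normalised energies p_k = Q_k / ΣQ_n, and κ1 is the variance of χ under p. A minimal sketch, assuming the counts per AE hit play the role of the event energies Q_k, is given below; window lengths and any criterion for detecting the 0.07 level are not taken from the study.

```python
# Sketch: natural-time variance kappa_1 for a series of AE "energies" (here, counts
# per hit), following the standard definition: kappa_1 = <chi^2>_p - <chi>_p^2.
import numpy as np

def kappa1(Q):
    Q = np.asarray(Q, dtype=float)
    N = len(Q)
    chi = np.arange(1, N + 1) / N   # natural time of each event
    p = Q / Q.sum()                 # normalised "energy" of each event
    return np.sum(p * chi**2) - np.sum(p * chi)**2

# Example: kappa_1 evaluated over a growing window of AE hits (synthetic counts).
counts_per_hit = np.random.poisson(5, size=500) + 1
k1_evolution = [kappa1(counts_per_hit[:n]) for n in range(10, 501, 10)]
```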

  18. Decision making and preferences for acoustic signals in choice situations by female crickets.

    PubMed

    Gabel, Eileen; Kuntze, Janine; Hennig, R Matthias

    2015-08-01

    Multiple attributes usually have to be assessed when choosing a mate. Efficient choice of the best mate is complicated if the available cues are not positively correlated, as is often the case during acoustic communication. Because of varying distances of signalers, a female may be confronted with signals of diverse quality at different intensities. Here, we examined how available cues are weighted for a decision by female crickets. Two songs with different temporal patterns and/or sound intensities were presented in a choice paradigm and compared with female responses from a no-choice test. When both patterns were presented at equal intensity, preference functions became wider in choice situations compared with a no-choice paradigm. When the stimuli in two-choice tests were presented at different intensities, this effect was counteracted as preference functions became narrower compared with choice tests using stimuli of equal intensity. The weighting of intensity differences depended on pattern quality and was therefore non-linear. A simple computational model based on pattern and intensity cues reliably predicted female decisions. A comparison of processing schemes suggested that the computations for pattern recognition and directionality are performed in a network with parallel topology. However, the computational flow of information corresponded to serial processing. © 2015. Published by The Company of Biologists Ltd.

  19. Peripheral mechanisms for vocal production in birds - differences and similarities to human speech and singing.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-10-01

    Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. All rights reserved.

  20. Temporal patterns in the soundscape of the shallow waters of a Mediterranean marine protected area.

    PubMed

    Buscaino, Giuseppa; Ceraulo, Maria; Pieretti, Nadia; Corrias, Valentina; Farina, Almo; Filiciotto, Francesco; Maccarrone, Vincenzo; Grammauta, Rosario; Caruso, Francesco; Alonge, Giuseppe; Mazzola, Salvatore

    2016-09-28

    The study of marine soundscapes is an emerging field of research that contributes important information about biological compositions and environmental conditions. The seasonal and circadian soundscape trends of a marine protected area (MPA) in the Mediterranean Sea have been studied for one year using an autonomous acoustic recorder. Frequencies less than 1 kHz are dominated by noise generated by waves and are louder during the winter; conversely, higher frequencies (4-96 kHz) are dominated by snapping shrimp, which increase their acoustic activity at night during the summer. Fish choruses, below 2 kHz, characterize the soundscape at sunset during the summer. Because there are 13 vessel passages per hour on average, causing acoustic interference with fish choruses 46% of the time, this MPA cannot be considered to be protected from noise. On the basis of the high seasonal variability of the soundscape components, this study proposes a one-year acoustic monitoring protocol using the soundscape methodology approach and discusses the concept of MPA size.
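
    The band-level comparison described here (wave noise below 1 kHz versus snapping shrimp at 4-96 kHz) rests on computing band-limited sound pressure levels from the recordings. The following is a minimal sketch under assumed band edges, filter order and calibration; it is not the processing chain used in the study.

```python
# Sketch: band-limited SPL (dB re 1 uPa) from a calibrated pressure time series,
# separating a low-frequency wave-noise band from a >4 kHz snapping-shrimp band.
import numpy as np
from scipy.signal import butter, sosfilt

def band_spl(x, fs, f_lo, f_hi=None, p_ref=1e-6):
    """RMS level in [f_lo, f_hi] (band-pass) or above f_lo (high-pass if f_hi is None)."""
    if f_hi is None:
        sos = butter(4, f_lo, btype="highpass", fs=fs, output="sos")
    else:
        sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    return 20 * np.log10(np.sqrt(np.mean(y**2)) / p_ref)

fs = 192_000
x = np.random.randn(fs * 10) * 1e-3        # placeholder: 10 s of calibrated pressure (Pa)
wave_level = band_spl(x, fs, 100, 1_000)   # low-frequency, wave-dominated band
shrimp_level = band_spl(x, fs, 4_000)      # >4 kHz band dominated by snapping shrimp
```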

  1. Temporal patterns in the soundscape of the shallow waters of a Mediterranean marine protected area

    NASA Astrophysics Data System (ADS)

    Buscaino, Giuseppa; Ceraulo, Maria; Pieretti, Nadia; Corrias, Valentina; Farina, Almo; Filiciotto, Francesco; Maccarrone, Vincenzo; Grammauta, Rosario; Caruso, Francesco; Alonge, Giuseppe; Mazzola, Salvatore

    2016-09-01

    The study of marine soundscapes is an emerging field of research that contributes important information about biological compositions and environmental conditions. The seasonal and circadian soundscape trends of a marine protected area (MPA) in the Mediterranean Sea have been studied for one year using an autonomous acoustic recorder. Frequencies less than 1 kHz are dominated by noise generated by waves and are louder during the winter; conversely, higher frequencies (4-96 kHz) are dominated by snapping shrimp, which increase their acoustic activity at night during the summer. Fish choruses, below 2 kHz, characterize the soundscape at sunset during the summer. Because there are 13 vessel passages per hour on average, causing acoustic interference with fish choruses 46% of the time, this MPA cannot be considered to be protected from noise. On the basis of the high seasonal variability of the soundscape components, this study proposes a one-year acoustic monitoring protocol using the soundscape methodology approach and discusses the concept of MPA size.

  2. Temporal patterns in the soundscape of the shallow waters of a Mediterranean marine protected area

    PubMed Central

    Buscaino, Giuseppa; Ceraulo, Maria; Pieretti, Nadia; Corrias, Valentina; Farina, Almo; Filiciotto, Francesco; Maccarrone, Vincenzo; Grammauta, Rosario; Caruso, Francesco; Alonge, Giuseppe; Mazzola, Salvatore

    2016-01-01

    The study of marine soundscapes is an emerging field of research that contributes important information about biological compositions and environmental conditions. The seasonal and circadian soundscape trends of a marine protected area (MPA) in the Mediterranean Sea have been studied for one year using an autonomous acoustic recorder. Frequencies less than 1 kHz are dominated by noise generated by waves and are louder during the winter; conversely, higher frequencies (4–96 kHz) are dominated by snapping shrimp, which increase their acoustic activity at night during the summer. Fish choruses, below 2 kHz, characterize the soundscape at sunset during the summer. Because there are 13 vessel passages per hour on average, causing acoustic interference with fish choruses 46% of the time, this MPA cannot be considered to be protected from noise. On the basis of the high seasonal variability of the soundscape components, this study proposes a one-year acoustic monitoring protocol using the soundscape methodology approach and discusses the concept of MPA size. PMID:27677956

  3. Social, contextual, and individual factors affecting the occurrence and acoustic structure of drumming bouts in wild chimpanzees (Pan troglodytes).

    PubMed

    Babiszewska, Magdalena; Schel, Anne Marijke; Wilke, Claudia; Slocombe, Katie E

    2015-01-01

    The production of structured and repetitive sounds by striking objects is a behavior found not only in humans, but also in a variety of animal species, including chimpanzees (Pan troglodytes). In this study we examined individual and social factors that may influence the frequency with which individuals engage in drumming behavior when producing long distance pant hoot vocalizations, and analyzed the temporal structure of those drumming bouts. Male chimpanzees from Budongo Forest, Uganda, drummed significantly more frequently during travel than feeding or resting, and older individuals were significantly more likely to produce drumming bouts than younger ones. In contrast, we found no evidence that the presence of estrus females, high-ranking males and preferred social partners in the caller's vicinity had an effect on the frequency with which an individual accompanied their pant hoot vocalization with drumming. Through acoustic analyses, we demonstrated that drumming sequences produced with pant hoots may have contained information on individual identity and that, qualitatively, there was individual variation in the complexity of the temporal patterns produced. We conclude that drumming patterns may act as individually distinctive long-distance signals that, together with pant hoot vocalizations, function to coordinate the movement and spacing of dispersed individuals within a community, rather than as signals to group members in the immediate audience. © 2014 Wiley Periodicals, Inc.

  4. Validation and Simulation of ARES I Scale Model Acoustic Test -1- Pathfinder Development

    NASA Technical Reports Server (NTRS)

    Putnam, G. C.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity measurements useful for validation, including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. To take advantage of this data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. In this first of a series of papers, results from ASMAT simulations with the rocket in a held-down configuration and without water suppression are then compared to acoustic data collected from similar live-fire tests to assess the accuracy of the simulations. Detailed evaluations of the mesh features, mesh length scales relative to acoustic signals, Courant-Friedrichs-Lewy numbers, and spatial residual sources have been performed to support this assessment. Results of acoustic comparisons have shown good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured including the plume shock structure, the igniter pulse transient, and the ignition overpressure. Finally, acoustic propagation patterns illustrated a previously unconsidered issue of tower placement in line with the high-intensity overpressure propagation path.

  5. Spectral and temporal resolutions of information-bearing acoustic changes for understanding vocoded sentences

    PubMed Central

    Stilp, Christian E.; Goupell, Matthew J.

    2015-01-01

    Short-time spectral changes in the speech signal are important for understanding noise-vocoded sentences. These information-bearing acoustic changes, measured using cochlea-scaled entropy in cochlear implant simulations [CSECI; Stilp et al. (2013). J. Acoust. Soc. Am. 133(2), EL136–EL141; Stilp (2014). J. Acoust. Soc. Am. 135(3), 1518–1529], may offer better understanding of speech perception by cochlear implant (CI) users. However, perceptual importance of CSECI for normal-hearing listeners was tested at only one spectral resolution and one temporal resolution, limiting generalizability of results to CI users. Here, experiments investigated the importance of these informational changes for understanding noise-vocoded sentences at different spectral resolutions (4–24 spectral channels; Experiment 1), temporal resolutions (4–64 Hz cutoff for low-pass filters that extracted amplitude envelopes; Experiment 2), or when both parameters varied (6–12 channels, 8–32 Hz; Experiment 3). Sentence intelligibility was reduced more by replacing high-CSECI intervals with noise than replacing low-CSECI intervals, but only when sentences had sufficient spectral and/or temporal resolution. High-CSECI intervals were more important for speech understanding as spectral resolution worsened and temporal resolution improved. Trade-offs between CSECI and intermediate spectral and temporal resolutions were minimal. These results suggest that signal processing strategies that emphasize information-bearing acoustic changes in speech may improve speech perception for CI users. PMID:25698018

  6. Concurrent temporal channels for auditory processing: Oscillatory neural entrainment reveals segregation of function at different scales

    PubMed Central

    Tian, Xing; Rowland, Jess; Poeppel, David

    2017-01-01

    Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at 2 timescales corresponding to the neurophysiological theta band (4–7 Hz) and gamma band ranges (31–45 Hz) but, contrary to expectation, not at the timescale corresponding to alpha (8–12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structures at 3 timescales (approximately 190-, approximately 100-, and approximately 30-ms modulation periods) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in 2 distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a 2-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations. PMID:29095816
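
    The intertrial phase coherence reported here is commonly computed as the magnitude of the trial-averaged unit phasor of the band-limited instantaneous phase. A minimal sketch follows, with the theta band edges taken from the abstract and the filter order and synthetic data as illustrative assumptions.

```python
# Sketch: inter-trial phase coherence (ITPC) in a chosen band, per time sample.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def itpc(trials, fs, f_lo, f_hi):
    """trials: array (n_trials, n_samples). Returns ITPC per time sample in [0, 1]."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trials, axis=-1)
    phase = np.angle(hilbert(filtered, axis=-1))
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

fs, n_trials, n_samples = 1000, 60, 2000
t = np.arange(n_samples) / fs
# Synthetic trials with a phase-locked 5 Hz component plus noise.
trials = np.sin(2 * np.pi * 5 * t) + 0.8 * np.random.randn(n_trials, n_samples)
theta_itpc = itpc(trials, fs, 4, 7)   # close to 1 where responses are phase-locked
```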

  7. Pulsed focused ultrasound-induced displacements in confined in vitro blood clots.

    PubMed

    Wright, Cameron C; Hynynen, Kullervo; Goertz, David E

    2012-03-01

    Ultrasound has been shown to potentiate the effects of tissue plasminogen activator to improve clot lysis in a range of in vitro and in vivo studies as well as in clinical trials. One possible mechanism of action is acoustic radiation force-induced clot displacements. In this study, we investigate the temporal and spatial dynamics of clot displacements and strain initiated by focused ultrasound pulses. Displacements were produced by a 1.51 MHz f-number 1 transducer over a range of acoustic powers (1-85 W) in clots constrained within an agar vessel phantom channel. Displacements were tracked during and after a 5.45 ms therapy pulse using a 20 MHz high-frequency ultrasound imaging probe. Peak thrombus displacements were found to be linear as a function of acoustic power up to 60 W before leveling off near 128 μm for the highest transmit powers. The time to peak displacement and the recovery time of blood clots were largely independent of acoustic power, with measured values near 2 ms. A linear relationship between peak axial strain and transmit power was observed, reaching a peak value of 11% at 35 W. The peak strain occurred ~0.75 mm from the focal zone for all powers investigated in both lateral and axial directions. These results indicate that substantial displacements can be induced by focused ultrasound in confined blood clots, and that the spatial and temporal displacement patterns are complex and highly dependent on exposure conditions, which has implications for future work investigating their link to clot lysis and for developing approaches to exploit these effects.

  8. High-speed imaging, acoustic features, and aeroacoustic computations of jet noise from Strombolian (and Vulcanian) explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Sesterhenn, J.; Scarlato, P.; Stampka, K.; Del Bello, E.; Pena Fernandez, J. J.; Gaudin, D.

    2014-05-01

    High-speed imaging of explosive eruptions at Stromboli (Italy), Fuego (Guatemala), and Yasur (Vanuatu) volcanoes allowed visualization of pressure waves from seconds-long explosions. From the explosion jets, waves radiate with variable geometry, timing, and apparent direction and velocity. Both the explosion jets and their wave fields are replicated well by numerical simulations of supersonic jets impulsively released from a pressurized vessel. The scaled acoustic signal from one explosion at Stromboli displays a frequency pattern with an excellent match to those from the simulated jets. We conclude that both the observed waves and the audible sound from the explosions are jet noise, i.e., the typical acoustic field radiating from high-velocity jets. Volcanic jet noise was previously quantified only in the infrasonic emissions from large, sub-Plinian to Plinian eruptions. Our combined approach allows us to define the spatial and temporal evolution of audible jet noise from supersonic jets in small-scale volcanic eruptions.

  9. Acoustic habitat of an oceanic archipelago in the Southwestern Atlantic

    NASA Astrophysics Data System (ADS)

    Bittencourt, Lis; Barbosa, Mariana; Secchi, Eduardo; Lailson-Brito, José; Azevedo, Alexandre

    2016-09-01

    Underwater soundscapes can be highly variable, and in natural conditions are often dominated by biological signals and physical features of the environment. Few studies, however, have focused on the soundscapes of oceanic islands. Islands in the middle of ocean basins can provide a good example of what a largely untouched marine soundscape sounds like. Autonomous acoustic recordings were carried out in two different seasons in Trindade-Martin Vaz Archipelago, Southwestern Atlantic, providing nearly continuous data for both periods. Sound levels varied daily and between seasons. During summer, higher frequencies were noisier than lower frequencies, with snapping shrimp being the dominant sound source. During winter, lower frequencies were noisier than higher frequencies due to constant humpback whale singing. Biological signal detection had a marked temporal pattern, playing an important role in the soundscape. Over 1000 humpback whale sounds were detected hourly during winter. Fish vocalizations were detected mostly at night during both summer and winter. The results show an acoustic habitat dominated by biological sound sources and highlight the importance of the island to humpback whales in winter.

  10. Perception and the temporal properties of speech

    NASA Astrophysics Data System (ADS)

    Gordon, Peter C.

    1991-11-01

    Four experiments addressing the role of attention in phonetic perception are reported. The first experiment shows that the relative importance of two cues to the voicing distinction changes when subjects must perform an arithmetic distractor task at the same time as identifying a speech stimulus. The voice onset time cue loses phonetic significance when subjects are distracted, while the F0 onset frequency cue does not. The second experiment shows a similar pattern for two cues to the distinction between the vowels /i/ (as in 'beat') and /I/ (as in 'bit'). Together these experiments indicate that careful attention to speech perception is necessary for strong acoustic cues to achieve their full phonetic impact, while weaker acoustic cues achieve their full phonetic impact without close attention. Experiment 3 shows that this pattern is obtained when the distractor task places little demand on verbal short term memory. Experiment 4 provides a large data set for testing formal models of the role of attention in speech perception. Attention is shown to influence the signal to noise ratio in phonetic encoding. This principle is instantiated in a network model in which the role of attention is to reduce noise in the phonetic encoding of acoustic cues. Implications of this work for understanding speech perception and general theories of the role of attention in perception are discussed.

  11. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound

    PubMed Central

    Menze, Sebastian; Zitterbart, Daniel P.; van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

    This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton. PMID:28280544

  12. Comparative Use of a Caribbean Mesophotic Coral Ecosystem and Association with Fish Spawning Aggregations by Three Species of Shark.

    PubMed

    Pickard, Alexandria E; Vaudo, Jeremy J; Wetherbee, Bradley M; Nemeth, Richard S; Blondeau, Jeremiah B; Kadison, Elizabeth A; Shivji, Mahmood S

    2016-01-01

    Understanding of species interactions within mesophotic coral ecosystems (MCEs; ~ 30-150 m) lags well behind that for shallow coral reefs. MCEs are often sites of fish spawning aggregations (FSAs) for a variety of species, including many groupers. Such reproductive fish aggregations represent temporal concentrations of potential prey that may be drivers of habitat use by predatory species, including sharks. We investigated movements of three species of sharks within a MCE and in relation to FSAs located on the shelf edge south of St. Thomas, United States Virgin Islands. Movements of 17 tiger (Galeocerdo cuvier), seven lemon (Negaprion brevirostris), and six Caribbean reef (Carcharhinus perezi) sharks tagged with acoustic transmitters were monitored within the MCE using an array of acoustic receivers spanning an area of 1,060 km2 over a five year period. Receivers were concentrated around prominent grouper FSAs to monitor movements of sharks in relation to these temporally transient aggregations. Over 130,000 detections of telemetered sharks were recorded, with four sharks tracked in excess of 3 years. All three shark species were present within the MCE over long periods of time and detected frequently at FSAs, but patterns of MCE use and orientation towards FSAs varied both spatially and temporally among species. Lemon sharks moved over a large expanse of the MCE, but concentrated their activities around FSAs during grouper spawning and were present within the MCE significantly more during grouper spawning season. Caribbean reef sharks were present within a restricted portion of the MCE for prolonged periods of time, but were also absent for long periods. Tiger sharks were detected throughout the extent of the acoustic array, with the MCE representing only a portion of their habitat use, although a high degree of individual variation was observed. Our findings indicate that although patterns of use varied, all three species of sharks repeatedly utilized the MCE, and as upper trophic level predators they are likely involved in a range of interactions with other members of MCEs.

  13. Comparative Use of a Caribbean Mesophotic Coral Ecosystem and Association with Fish Spawning Aggregations by Three Species of Shark

    PubMed Central

    Pickard, Alexandria E.; Vaudo, Jeremy J.; Wetherbee, Bradley M.; Nemeth, Richard S.; Blondeau, Jeremiah B.; Kadison, Elizabeth A.; Shivji, Mahmood S.

    2016-01-01

    Understanding of species interactions within mesophotic coral ecosystems (MCEs; ~ 30–150 m) lags well behind that for shallow coral reefs. MCEs are often sites of fish spawning aggregations (FSAs) for a variety of species, including many groupers. Such reproductive fish aggregations represent temporal concentrations of potential prey that may be drivers of habitat use by predatory species, including sharks. We investigated movements of three species of sharks within a MCE and in relation to FSAs located on the shelf edge south of St. Thomas, United States Virgin Islands. Movements of 17 tiger (Galeocerdo cuvier), seven lemon (Negaprion brevirostris), and six Caribbean reef (Carcharhinus perezi) sharks tagged with acoustic transmitters were monitored within the MCE using an array of acoustic receivers spanning an area of 1,060 km2 over a five year period. Receivers were concentrated around prominent grouper FSAs to monitor movements of sharks in relation to these temporally transient aggregations. Over 130,000 detections of telemetered sharks were recorded, with four sharks tracked in excess of 3 years. All three shark species were present within the MCE over long periods of time and detected frequently at FSAs, but patterns of MCE use and orientation towards FSAs varied both spatially and temporally among species. Lemon sharks moved over a large expanse of the MCE, but concentrated their activities around FSAs during grouper spawning and were present within the MCE significantly more during grouper spawning season. Caribbean reef sharks were present within a restricted portion of the MCE for prolonged periods of time, but were also absent for long periods. Tiger sharks were detected throughout the extent of the acoustic array, with the MCE representing only a portion of their habitat use, although a high degree of individual variation was observed. Our findings indicate that although patterns of use varied, all three species of sharks repeatedly utilized the MCE, and as upper trophic level predators they are likely involved in a range of interactions with other members of MCEs. PMID:27144275

  14. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging “periodicity-tagged” segregation of competing speech in rooms

    PubMed Central

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single-vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double-vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation; specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights to perceptual organization of complex acoustic scenes under realistically challenging listening conditions. PMID:25628545

  15. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization.

    PubMed

    Bednar, Adam; Boland, Francis M; Lalor, Edmund C

    2017-03-01

    The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
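
    The multivariate classification approach mentioned above can be illustrated with a generic decoding pipeline over epoched EEG. The sketch below (flattened spatio-temporal features, a linear discriminant classifier, 5-fold cross-validation) is an assumed, simplified stand-in; it is not the preprocessing or classifier configuration reported in the paper.

```python
# Sketch: decode Left vs. Right stimulus location from epoched EEG with a generic
# linear classifier and cross-validation. Data here are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

n_epochs, n_channels, n_times = 200, 64, 128
X = np.random.randn(n_epochs, n_channels, n_times)   # placeholder epoched EEG
y = np.random.randint(0, 2, n_epochs)                 # 0 = Left, 1 = Right stimuli

# Flatten each epoch into a spatio-temporal feature vector and cross-validate.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X.reshape(n_epochs, -1), y, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```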

  16. Bayesian Ising approximation for learning dictionaries of multispike timing patterns in premotor neurons

    NASA Astrophysics Data System (ADS)

    Hernandez Lahme, Damian; Sober, Samuel; Nemenman, Ilya

    Important questions in computational neuroscience are whether, how much, and how information is encoded in the precise timing of neural action potentials. We recently demonstrated that, in the premotor cortex during vocal control in songbirds, spike timing is far more informative about upcoming behavior than is spike rate (Tang et al, 2014). However, identification of complete dictionaries that relate spike timing patterns with the controlled behavior remains an elusive problem. Here we present a computational approach to deciphering such codes for individual neurons in the songbird premotor area RA, an analog of mammalian primary motor cortex. Specifically, we analyze which multispike patterns of neural activity predict features of the upcoming vocalization, and hence are important codewords. We use a recently introduced Bayesian Ising Approximation, which properly accounts for the fact that many codewords overlap and hence are not independent. Our results show which complex, temporally precise multispike combinations are used by individual neurons to control acoustic features of the produced song, and that these codewords are different across individual neurons and across different acoustic features. This work was supported, in part, by JSMF Grant 220020321, NSF Grant 1208126, NIH Grant NS084844 and NIH Grant 1 R01 EB022872.

  17. Opposite patterns of hemisphere dominance for early auditory processing of lexical tones and consonants

    PubMed Central

    Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin

    2006-01-01

    In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers as revealed by the functional MRI or positron emission tomography studies, which likely measure the temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually lateralized to the right hemisphere. We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and infrequently varied either its lexical tone or initial consonant using an odd-ball paradigm to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger preattentive response, as revealed by whole-head electric recordings of the mismatch negativity, in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced an opposite pattern. Given the distinct acoustic features between a lexical tone and a consonant, this opposite lateralization pattern suggests the dependence of hemisphere dominance mainly on acoustic cues before speech input is mapped into a semantic representation in the processing stream. PMID:17159136

  18. Mesoscale variations in acoustic signals induced by atmospheric gravity waves.

    PubMed

    Chunchuzov, Igor; Kulichkov, Sergey; Perepelkin, Vitaly; Ziemann, Astrid; Arnold, Klaus; Kniffka, Anke

    2009-02-01

    The results of acoustic tomographic monitoring of the coherent structures in the lower atmosphere and the effects of these structures on acoustic signal parameters are analyzed in the present study. From the measurements of acoustic travel time fluctuations (periods 1 min-1 h) with distant receivers, the temporal fluctuations of the effective sound speed and wind speed are retrieved along different ray paths connecting an acoustic pulse source and several receivers. By using a coherence analysis of the fluctuations near spatially distanced ray turning points, the internal wave-associated fluctuations are filtered and their spatial characteristics (coherences, horizontal phase velocities, and spatial scales) are estimated. The capability of acoustic tomography in estimating wind shear near ground is shown. A possible mechanism describing the temporal modulation of the near-ground wind field by ducted internal waves in the troposphere is proposed.

  19. Joint inversion for transponder localization and sound-speed profile temporal variation in high-precision acoustic surveys.

    PubMed

    Li, Zhao; Dosso, Stan E; Sun, Dajun

    2016-07-01

    This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.
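
    The core idea of estimating segment-wise sound-speed offsets jointly with the transponder position can be illustrated with an ordinary least-squares stand-in for the Bayesian inversion. In the sketch below, straight-ray travel times t = range / (c0 + Δc_segment) are inverted for one transponder and one depth-independent offset per time segment; the geometry, nominal sound speed and noise level are invented for illustration.

```python
# Sketch: jointly fit a transponder position and per-segment sound-speed offsets
# from straight-ray travel times (simplified least-squares stand-in, not the
# paper's Bayesian inversion).
import numpy as np
from scipy.optimize import least_squares

c0 = 1500.0                                         # nominal sound speed (m/s)
ship = np.column_stack([np.linspace(-500, 500, 40),
                        np.linspace(-300, 300, 40),
                        np.zeros(40)])              # ship positions (m)
segment = np.repeat(np.arange(4), 10)               # 4 time segments of 10 pings
true_xpd = np.array([60.0, -40.0, 1000.0])          # "true" transponder position
true_dc = np.array([1.5, 0.5, -1.0, -2.0])          # per-segment sound-speed offsets
ranges = np.linalg.norm(ship - true_xpd, axis=1)
t_obs = ranges / (c0 + true_dc[segment]) + 1e-5 * np.random.randn(len(ranges))

def residuals(params):
    xpd, dc = params[:3], params[3:]
    r = np.linalg.norm(ship - xpd, axis=1)
    return r / (c0 + dc[segment]) - t_obs

fit = least_squares(residuals, x0=np.concatenate([[0.0, 0.0, 900.0], np.zeros(4)]))
est_position, est_dc = fit.x[:3], fit.x[3:]
```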

  20. Temporal characteristics of Punjabi word-medial singletons and geminates.

    PubMed

    Hussain, Qandeel

    2015-10-01

    Many studies have investigated the temporal characteristics of the word-medial singletons and geminates in Indo-Aryan languages. However, little is known about the acoustic cues distinguishing between the word-medial singletons and geminates of Punjabi. The present study examines the temporal characteristics of Punjabi word-medial singleton and geminate stops in a C1V1C2V2 template. The results from five Punjabi speakers showed that, unlike previous studies of Indo-Aryan languages, the durations of C2 and V2 are the most important acoustic correlates of singleton and geminate stops in Punjabi. These findings therefore point towards the cross-linguistic differences in the acoustic correlates of singletons and geminates.

  1. Spatio-temporal variation in click production rates of beaked whales: Implications for passive acoustic density estimation.

    PubMed

    Warren, Victoria E; Marques, Tiago A; Harris, Danielle; Thomas, Len; Tyack, Peter L; Aguilar de Soto, Natacha; Hickmott, Leigh S; Johnson, Mark P

    2017-03-01

    Passive acoustic monitoring has become an increasingly prevalent tool for estimating density of marine mammals, such as beaked whales, which vocalize often but are difficult to survey visually. Counts of acoustic cues (e.g., vocalizations), when corrected for detection probability, can be translated into animal density estimates by applying an individual cue production rate multiplier. It is essential to understand variation in these rates to avoid biased estimates. The most direct way to measure cue production rate is with animal-mounted acoustic recorders. This study utilized data from sound recording tags deployed on Blainville's (Mesoplodon densirostris, 19 deployments) and Cuvier's (Ziphius cavirostris, 16 deployments) beaked whales, in two locations per species, to explore spatial and temporal variation in click production rates. No spatial or temporal variation was detected within the average click production rate of Blainville's beaked whales when calculated over dive cycles (including silent periods between dives); however, spatial variation was detected when averaged only over vocal periods. Cuvier's beaked whales exhibited significant spatial and temporal variation in click production rates within vocal periods and when silent periods were included. This evidence of variation emphasizes the need to utilize appropriate cue production rates when estimating density from passive acoustic data.
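
    The cue-rate multiplier logic described in this record follows the standard cue-counting estimator: animal density equals cue density divided by the individual cue production rate. A minimal sketch with entirely illustrative numbers is given below; it shows directly why an inappropriate cue rate biases the resulting density estimate.

```python
# Sketch: cue-counting density estimate. Every number here is illustrative; none
# come from the tag deployments described in the record.
def cue_density_estimate(n_cues, false_pos_frac, area_km2, time_h,
                         p_detect, cue_rate_per_h):
    """Animals per km^2 from a passive acoustic cue count."""
    effective_cues = n_cues * (1.0 - false_pos_frac)      # remove assumed false positives
    cue_density = effective_cues / (area_km2 * time_h * p_detect)
    return cue_density / cue_rate_per_h                    # divide by individual cue rate

# Example: 12,000 clicks detected over 30 days within a 20 km^2 monitored area.
d_hat = cue_density_estimate(n_cues=12_000, false_pos_frac=0.05, area_km2=20.0,
                             time_h=30 * 24, p_detect=0.3, cue_rate_per_h=900.0)
print(f"estimated density: {d_hat:.4f} animals per km^2")
```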

  2. Seasonal and Geographic Variation of Southern Blue Whale Subspecies in the Indian Ocean

    PubMed Central

    Samaran, Flore; Stafford, Kathleen M.; Branch, Trevor A.; Gedamke, Jason; Royer, Jean-Yves; Dziak, Robert P.; Guinet, Christophe

    2013-01-01

    Understanding the seasonal movements and distribution patterns of migratory species over ocean basin scales is vital for appropriate conservation and management measures. However, assessing populations over remote regions is challenging, particularly if they are rare. Blue whales (Balaenoptera musculus spp) are an endangered species found in the Southern and Indian Oceans. Here two recognized subspecies of blue whales and, based on passive acoustic monitoring, four “acoustic populations” occur. Three of these are pygmy blue whale (B.m. brevicauda) populations while the fourth is the Antarctic blue whale (B.m. intermedia). Past whaling catches have dramatically reduced their numbers but recent acoustic recordings show that these oceans are still important habitat for blue whales. Presently little is known about the seasonal movements and degree of overlap of these four populations, particularly in the central Indian Ocean. We examined the geographic and seasonal occurrence of different blue whale acoustic populations using one year of passive acoustic recording from three sites located at different latitudes in the Indian Ocean. The vocalizations of the different blue whale subspecies and acoustic populations were recorded seasonally in different regions. For some call types and locations, there was spatial and temporal overlap, particularly between Antarctic and different pygmy blue whale acoustic populations. Except on the southernmost hydrophone, all three pygmy blue whale acoustic populations were found at different sites or during different seasons, which further suggests that these populations are generally geographically distinct. This unusual blue whale diversity in sub-Antarctic and sub-tropical waters indicates the importance of the area for blue whales in these former whaling grounds. PMID:23967221

  3. Seasonal and geographic variation of southern blue whale subspecies in the Indian Ocean.

    PubMed

    Samaran, Flore; Stafford, Kathleen M; Branch, Trevor A; Gedamke, Jason; Royer, Jean-Yves; Dziak, Robert P; Guinet, Christophe

    2013-01-01

    Understanding the seasonal movements and distribution patterns of migratory species over ocean basin scales is vital for appropriate conservation and management measures. However, assessing populations over remote regions is challenging, particularly if they are rare. Blue whales (Balaenoptera musculus spp) are an endangered species found in the Southern and Indian Oceans. Here two recognized subspecies of blue whales and, based on passive acoustic monitoring, four "acoustic populations" occur. Three of these are pygmy blue whale (B.m. brevicauda) populations while the fourth is the Antarctic blue whale (B.m. intermedia). Past whaling catches have dramatically reduced their numbers but recent acoustic recordings show that these oceans are still important habitat for blue whales. Presently little is known about the seasonal movements and degree of overlap of these four populations, particularly in the central Indian Ocean. We examined the geographic and seasonal occurrence of different blue whale acoustic populations using one year of passive acoustic recording from three sites located at different latitudes in the Indian Ocean. The vocalizations of the different blue whale subspecies and acoustic populations were recorded seasonally in different regions. For some call types and locations, there was spatial and temporal overlap, particularly between Antarctic and different pygmy blue whale acoustic populations. Except on the southernmost hydrophone, all three pygmy blue whale acoustic populations were found at different sites or during different seasons, which further suggests that these populations are generally geographically distinct. This unusual blue whale diversity in sub-Antarctic and sub-tropical waters indicates the importance of the area for blue whales in these former whaling grounds.

  4. Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions

    PubMed Central

    Leech, Robert; Holt, Lori L.; Devlin, Joseph T.; Dick, Frederic

    2009-01-01

    Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized domain-specific adaptations for processing speech, or they may be driven by the significant expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial non-linguistic sounds. Before and after training, we used functional MRI to measure how expertise with these sounds modulated temporal lobe activation. Participants’ ability to explicitly categorize the non-speech sounds predicted the change in pre- to post-training activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space. PMID:19386919

  5. Trackline and point detection probabilities for acoustic surveys of Cuvier's and Blainville's beaked whales.

    PubMed

    Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha

    2013-09-01

    Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.
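
    The instantaneous g(0) described here is, at its core, the fraction of time an animal spends producing regular echolocation clicks while within detection range. A minimal sketch of that bookkeeping on hypothetical tag-derived click-bout intervals (the bout times and record length below are illustrative, not values from the study):

```python
# Estimate instantaneous g(0) as the fraction of tag-record time spent clicking.
# Bout times (seconds from tag-on) are hypothetical illustrations.
click_bouts = [(600.0, 2100.0), (4200.0, 5700.0), (7800.0, 9000.0)]  # (start, end)
record_duration = 6.0 * 3600.0  # 6 h of tag data, illustrative

clicking_time = sum(end - start for start, end in click_bouts)
g0_instant = clicking_time / record_duration
print(f"Instantaneous g(0) ~ {g0_instant:.2f}")
```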

  6. Amplitude modulation detection by human listeners in sound fields.

    PubMed

    Zahorik, Pavel; Kim, Duck O; Kuwada, Shigeyuki; Anderson, Paul W; Brandewie, Eugene; Srinivasan, Nirmal

    2011-10-01

    The temporal modulation transfer function (TMTF) approach allows techniques from linear systems analysis to be used to predict how the auditory system will respond to arbitrary patterns of amplitude modulation (AM). Although this approach forms the basis for a standard method of predicting speech intelligibility based on estimates of the acoustical modulation transfer function (MTF) between source and receiver, human sensitivity to AM as characterized by the TMTF has not been extensively studied under realistic listening conditions, such as in reverberant sound fields. Here, TMTFs (octave bands from 2-512 Hz) were obtained in 3 listening conditions simulated using virtual auditory space techniques: diotic, anechoic sound field, and reverberant room sound field. TMTFs were then related to acoustical MTFs estimated using two different methods in each of the listening conditions. Both diotic and anechoic data were found to be in good agreement with classic results, but AM thresholds in the reverberant room were lower than predictions based on acoustical MTFs. This result suggests that simple linear systems techniques may not be appropriate for predicting TMTFs from acoustical MTFs in reverberant sound fields, and may be suggestive of mechanisms that functionally enhance modulation during reverberant listening.
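
    The acoustical MTF referred to here can be estimated in several ways; the sketch below shows one standard estimator (Schroeder's method, which computes the modulation transfer from the squared impulse response) applied to a synthetic reverberant impulse response. It is a generic illustration, not necessarily one of the two estimation methods used in the study.

```python
import numpy as np

def mtf_from_impulse_response(h, fs, mod_freqs):
    """Schroeder MTF: m(F) = |FT{h^2}(F)| / sum(h^2)."""
    h2 = h.astype(float) ** 2
    t = np.arange(len(h2)) / fs
    denom = h2.sum()
    return np.array([abs(np.sum(h2 * np.exp(-2j * np.pi * F * t))) / denom
                     for F in mod_freqs])

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rt60 = 0.6  # illustrative reverberation time, seconds
h = np.exp(-6.91 * t / rt60) * np.random.randn(t.size)  # toy reverberant impulse response
mod_freqs = 2.0 ** np.arange(1, 10)  # 2..512 Hz, octave spacing as in the study
print(dict(zip(mod_freqs, np.round(mtf_from_impulse_response(h, fs, mod_freqs), 3))))
```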

  7. Acoustic structure of the five perceptual dimensions of timbre in orchestral instrument tones

    PubMed Central

    Elliott, Taffeta M.; Hamilton, Liberty S.; Theunissen, Frédéric E.

    2013-01-01

    Attempts to relate the perceptual dimensions of timbre to quantitative acoustical dimensions have been tenuous, leading to claims that timbre is an emergent property, if measurable at all. Here, a three-pronged analysis shows that the timbre space of sustained instrument tones occupies 5 dimensions and that a specific combination of acoustic properties uniquely determines gestalt perception of timbre. Firstly, multidimensional scaling (MDS) of dissimilarity judgments generated a perceptual timbre space in which 5 dimensions were cross-validated and selected by traditional model comparisons. Secondly, subjects rated tones on semantic scales. A discriminant function analysis (DFA) accounting for variance of these semantic ratings across instruments and between subjects also yielded 5 significant dimensions with similar stimulus ordination. The dimensions of timbre space were then interpreted semantically by rotational and reflectional projection of the MDS solution into two DFA dimensions. Thirdly, to relate this final space to acoustical structure, the perceptual MDS coordinates of each sound were regressed with its joint spectrotemporal modulation power spectrum. Sound structures correlated significantly with distances in perceptual timbre space. Contrary to previous studies, most perceptual timbre dimensions are not the result of purely temporal or spectral features but instead depend on signature spectrotemporal patterns. PMID:23297911

  8. Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field.

    PubMed

    Holtzman, Benjamin K; Paté, Arthur; Paisley, John; Waldhauser, Felix; Repetto, Douglas

    2018-05-01

    The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity.
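
    The abstract does not name the specific clustering pipeline, so the sketch below uses k-means on shape-normalized event spectra purely as a generic stand-in for "clustering of earthquake spectra"; the data are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical stand-in data: one amplitude spectrum (50 frequency bins) per event.
n_events, n_bins = 1000, 50
spectra = rng.lognormal(mean=0.0, sigma=1.0, size=(n_events, n_bins))

# Normalize each spectrum so clustering responds to spectral *shape*, not magnitude.
spectra /= spectra.sum(axis=1, keepdims=True)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(spectra)

# With real catalog metadata one would then examine how cluster membership varies
# with event time (e.g., against injection rates), rather than with location.
print(np.bincount(labels))
```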

  9. Phase Synchronization and Desynchronization of Structural Response Induced by Turbulent and External Sound

    NASA Technical Reports Server (NTRS)

    Maestrello, Lucio

    2002-01-01

    Acoustic and turbulent boundary layer flow loadings over a flexible structure are used to study the spatial-temporal dynamics of the response of the structure. The stability of the spatial synchronization and desynchronization by an active external force is investigated with an array of coupled transducers on the structure. In the synchronous state, the structural phase is locked, which leads to the formation of spatial patterns while the amplitude peaks exhibit chaotic behaviors. Large amplitude, spatially symmetric loading is superimposed on broadband, but in the desynchronized state, the spectrum broadens and the phase space is lost. The resulting pattern bears a striking resemblance to phase turbulence. The transition is achieved by using a low power external actuator to trigger broadband behaviors from the knowledge of the external acoustic load inducing synchronization. The changes are made favorably and efficiently to alter the frequency distribution of power, not the total power level. Before synchronization effects are seen, the panel response to the turbulent boundary layer loading is discontinuously spatio-temporally correlated. The stability develops from different competing wavelengths; the spatial scale is significantly shorter than when forced with the superimposed external sound. When the external sound level decreases and the synchronized phases are lost, changes in the character of the spectra can be linked to the occurrence of spatial phase transition. These changes can develop broadband response. Synchronized responses of fuselage structure panels have been observed in subsonic and supersonic aircraft; results from two flight tests are discussed.

  10. Detection of a Novel Mechanism of Acousto-Optic Modulation of Incoherent Light

    PubMed Central

    Jarrett, Christopher W.; Caskey, Charles F.; Gore, John C.

    2014-01-01

    A novel form of acoustic modulation of light from an incoherent source has been detected in water as well as in turbid media. We demonstrate that patterns of modulated light intensity appear to propagate as the optical shadow of the density variations caused by ultrasound within an illuminated ultrasonic focal zone. This pattern differs from previous reports of acousto-optical interactions that produce diffraction effects that rely on phase shifts and changes in light directions caused by the acoustic modulation. Moreover, previous studies of acousto-optic interactions have mainly reported the effects of sound on coherent light sources via photon tagging, and/or the production of diffraction phenomena from phase effects that give rise to discrete sidebands. We aimed to assess whether the effects of ultrasound modulation of the intensity of light from an incoherent light source could be detected directly, and how the acoustically modulated (AOM) light signal depended on experimental parameters. Our observations suggest that ultrasound at moderate intensities can induce sufficiently large density variations within a uniform medium to cause measurable modulation of the intensity of an incoherent light source by absorption. Light passing through a region of high intensity ultrasound then produces a pattern that is the projection of the density variations within the region of their interaction. The patterns exhibit distinct maxima and minima that are observed at locations much different from those predicted by Raman-Nath, Bragg, or other diffraction theory. The observed patterns scaled appropriately with the geometrical magnification and sound wavelength. We conclude that these observed patterns are simple projections of the ultrasound induced density changes which cause spatial and temporal variations of the optical absorption within the illuminated sound field. These effects potentially provide a novel method for visualizing sound fields and may assist the interpretation of other hybrid imaging methods. PMID:25105880

  11. Potential for application of an acoustic camera in particle tracking velocimetry.

    PubMed

    Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim

    2008-11-01

    We explored the potential and limitations for applying an acoustic camera as the imaging instrument of particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributed to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases reduce with the increase in the mean particle velocity and approach minimum as the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.
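
    Particle tracking velocimetry reduces, at its simplest, to finite-difference velocities from matched particle centroids divided by the frame interval, which is why the camera's temporal resolution matters. The sketch below illustrates that estimator on a hypothetical rotating tracer and shows how a coarse frame interval biases the recovered speed; it is not the processing chain used in the paper.

```python
import numpy as np

def ptv_velocities(positions, frame_interval):
    """Finite-difference velocity estimates from one particle's matched centroids."""
    return np.diff(positions, axis=0) / frame_interval

def sample_track(dt, duration=10.0):
    """Hypothetical rotating tracer: radius 0.5 m, angular speed 1 rad/s (true speed 0.5 m/s)."""
    t = np.arange(0.0, duration, dt)
    return np.c_[0.5 * np.cos(t), 0.5 * np.sin(t)]

for dt in (0.02, 0.5):  # optical-camera-like vs acoustic-camera-like frame intervals
    speeds = np.linalg.norm(ptv_velocities(sample_track(dt), dt), axis=1)
    print(f"dt = {dt:4.2f} s: mean speed {speeds.mean():.3f} m/s (true 0.500 m/s)")
```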

  12. Transient Auditory Storage of Acoustic Details Is Associated with Release of Speech from Informational Masking in Reverberant Conditions

    ERIC Educational Resources Information Center

    Huang, Ying; Huang, Qiang; Chen, Xun; Wu, Xihong; Li, Liang

    2009-01-01

    Perceptual integration of the sound directly emanating from the source with reflections needs both temporal storage and correlation computation of acoustic details. We examined whether the temporal storage is frequency dependent and associated with speech unmasking. In Experiment 1, a break in correlation (BIC) between interaurally correlated…

  13. Transcranial cavitation-mediated ultrasound therapy at sub-MHz frequency via temporal interference modulation

    NASA Astrophysics Data System (ADS)

    Sun, Tao; Sutton, Jonathan T.; Power, Chanikarn; Zhang, Yongzhi; Miller, Eric L.; McDannold, Nathan J.

    2017-10-01

    Sub-megahertz transmission is not usually adopted in pre-clinical small animal experiments for focused ultrasound (FUS) brain therapy due to the large focal size. However, low frequency FUS is vital for preclinical evaluations due to the frequency-dependence of cavitation behavior. To maximize clinical relevance, a dual-aperture FUS system was designed for low-frequency (274.3 kHz) cavitation-mediated FUS therapy. Combining two spherically curved transducers provides significantly improved focusing in the axial direction while yielding an interference pattern with strong side lobes, leading to inhomogeneously distributed cavitation activities. By operating the two transducers at slightly offset frequencies to modulate this interference pattern over the period of sonication, the acoustic energy was redistributed and resulted in a spatially homogenous treatment profile. Simulation and pressure field measurements in water were performed to assess the beam profiles. In addition, the system performance was demonstrated in vivo in rats via drug delivery through microbubble-mediated blood-brain barrier disruption. This design resulted in a homogenous treatment profile that was fully contained within the rat brain at a clinically relevant acoustic frequency.
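
    The key idea, slightly offsetting the two drive frequencies so that the interference fringes sweep across the focus and the time-averaged exposure becomes spatially smooth, can be illustrated with a one-dimensional toy model; the frequencies, geometry, and sampling below are illustrative only and do not reproduce the dual-aperture system.

```python
import numpy as np

f1, f2 = 274_300.0, 274_800.0      # Hz; two slightly offset drive frequencies (illustrative)
c = 1500.0                         # m/s, sound speed in water/tissue
w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
k1, k2 = w1 / c, w2 / c

x = np.linspace(-0.01, 0.01, 1000)  # 2 cm line through the focal region
t = np.linspace(0.0, 0.1, 2000)     # 100 ms of sonication

def intensity(ti):
    # Two counter-propagating waves from the two apertures (1-D toy geometry).
    return (np.sin(k1 * x - w1 * ti) + np.sin(k2 * x + w2 * ti)) ** 2

snapshot = intensity(t[0])                                 # frozen fringes at one instant
time_avg = np.mean([intensity(ti) for ti in t], axis=0)    # fringes swept by the beat

print(f"spatial variation (std/mean): snapshot {snapshot.std()/snapshot.mean():.2f}, "
      f"time-averaged {time_avg.std()/time_avg.mean():.2f}")
```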

  14. Investigation into the Effect of Acoustic Radiation Force and Acoustic Streaming on Particle Patterning in Acoustic Standing Wave Fields

    PubMed Central

    Yang, Yanye; Ni, Zhengyang; Guo, Xiasheng; Luo, Linjiao; Tu, Juan; Zhang, Dong

    2017-01-01

    Acoustic standing waves have been widely used in trapping, patterning, and manipulating particles, whereas one barrier remains: the lack of understanding of the force conditions on particles, which mainly comprise acoustic radiation force (ARF) and acoustic streaming (AS). In this paper, force conditions on micrometer-sized polystyrene microspheres in acoustic standing wave fields were investigated. The COMSOL® Multiphysics particle tracing module was used to numerically simulate force conditions on various particles as a function of time. The velocity of particle movement was experimentally measured using particle imaging velocimetry (PIV). Through experiments and numerical simulation, the roles of ARF and AS in trapping and patterning were analyzed. ARF is shown to dominate the trapping and patterning of large particles, while the impact of AS increases rapidly with decreasing particle size. For medium-sized particles, combining ARF and AS can produce patterns different from those obtained with ARF alone. The findings of the present study will aid the design of acoustically driven microfluidic devices and increase the diversity of particle patterning. PMID:28753955
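
    The size dependence reported here follows from simple scaling: the primary acoustic radiation force on a small sphere grows with particle volume (proportional to r³), whereas the Stokes drag exerted by the streaming flow grows only linearly with radius, so streaming dominates below some crossover size. A rough order-of-magnitude sketch under textbook assumptions (Gor'kov radiation force in a 1-D standing wave, an assumed streaming speed); all parameter values are illustrative and not taken from the paper:

```python
import numpy as np

# Medium (water) and polystyrene properties; textbook values.
rho0, c0, mu = 1000.0, 1480.0, 1.0e-3            # kg/m^3, m/s, Pa.s
rho_p, c_p = 1050.0, 2350.0                      # polystyrene density and sound speed
f, p_a = 2.0e6, 1.0e5                            # 2 MHz drive, 100 kPa pressure amplitude
u_stream = 1.0e-4                                # assumed streaming speed, m/s (illustrative)

k = 2 * np.pi * f / c0
E_ac = p_a ** 2 / (4 * rho0 * c0 ** 2)           # acoustic energy density
kappa_t = (rho0 * c0 ** 2) / (rho_p * c_p ** 2)  # compressibility ratio kappa_p / kappa_0
rho_t = rho_p / rho0
contrast = (5 * rho_t - 2) / (2 * rho_t + 1) / 3 - kappa_t / 3  # Gor'kov contrast factor

for r in (10e-6, 1e-6, 0.3e-6):                  # particle radii in metres
    f_rad = 4 * np.pi * contrast * k * r ** 3 * E_ac   # peak radiation force, ~ r^3
    f_drag = 6 * np.pi * mu * r * u_stream             # Stokes drag from streaming, ~ r
    print(f"r = {r*1e6:5.1f} um: F_rad ~ {f_rad:.2e} N, F_streaming-drag ~ {f_drag:.2e} N")
```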

  15. Acoustic tweezers: patterning cells and microparticles using standing surface acoustic waves (SSAW).

    PubMed

    Shi, Jinjie; Ahmed, Daniel; Mao, Xiaole; Lin, Sz-Chin Steven; Lawit, Aitan; Huang, Tony Jun

    2009-10-21

    Here we present an active patterning technique named "acoustic tweezers" that utilizes standing surface acoustic wave (SSAW) to manipulate and pattern cells and microparticles. This technique is capable of patterning cells and microparticles regardless of shape, size, charge or polarity. Its power intensity, approximately 5×10⁵ times lower than that of optical tweezers, compares favorably with those of other active patterning methods. Flow cytometry studies have revealed it to be non-invasive. The aforementioned advantages, along with this technique's simple design and ability to be miniaturized, render the "acoustic tweezers" technique a promising tool for various applications in biology, chemistry, engineering, and materials science.

  16. Phase-Locked Responses to Speech in Human Auditory Cortex are Enhanced During Comprehension

    PubMed Central

    Peelle, Jonathan E.; Gross, Joachim; Davis, Matthew H.

    2013-01-01

    A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners’ ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction. PMID:22610394
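
    Cerebro-acoustic phase locking of this kind is commonly quantified by band-passing both the speech envelope and the neural signal (here 4-7 Hz), extracting instantaneous phase with the Hilbert transform, and averaging the phase differences on the unit circle (a phase-locking value). A minimal sketch on synthetic signals; this is a generic implementation, not the authors' MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(4.0, 7.0)):
    """PLV between two signals after band-pass filtering to `band` (Hz)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

fs = 250.0
t = np.arange(0, 20, 1 / fs)
envelope = np.sin(2 * np.pi * 5.0 * t)                                  # toy 5 Hz speech envelope
neural = np.sin(2 * np.pi * 5.0 * t - 0.8) + np.random.randn(t.size)   # entrained response + noise
print(f"PLV ~ {phase_locking_value(envelope, neural, fs):.2f}")
```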

  17. Phase-locked responses to speech in human auditory cortex are enhanced during comprehension.

    PubMed

    Peelle, Jonathan E; Gross, Joachim; Davis, Matthew H

    2013-06-01

    A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners' ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.

  18. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    PubMed

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performances of the classifiers were tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulus modulated activity in bilateral MTG (speech), lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Autocorrelation factors and intelligibility of Japanese monosyllables in individuals with sensorineural hearing loss.

    PubMed

    Shimokura, Ryota; Akasaka, Sakie; Nishimura, Tadashi; Hosoi, Hiroshi; Matsui, Toshie

    2017-02-01

    Some Japanese monosyllables contain consonants that are not easily discernible for individuals with sensorineural hearing loss. However, the acoustic features that make these monosyllables difficult to discern have not been clearly identified. Here, this study used the autocorrelation function (ACF), which can capture temporal features of signals, to clarify the factors influencing speech intelligibility. For each monosyllable, five factors extracted from the ACF [Φ(0): total energy; τ1 and φ1: delay time and amplitude of the maximum peak; τe: effective duration; Wφ(0): spectral centroid], voice onset time, speech intelligibility index, and loudness level were compared with the percentage of correctly perceived articulations (144 ears) obtained for 50 Japanese vowel and consonant-vowel monosyllables produced by one female speaker. Results showed that the median effective duration [(τe)med] was strongly correlated with the percentage of correctly perceived articulations of the consonants (r = 0.87, p < 0.01). (τe)med values were computed from running ACFs as the time lag at which the magnitude of the logarithmic-ACF envelope had decayed to -10 dB. Effective duration is a measure of temporal pattern persistence, i.e., the duration over which the waveform maintains a stable pattern. The authors postulate that low recognition ability is related to degraded perception of temporal fluctuation patterns.
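
    The effective duration is the lag at which the envelope of the normalized ACF, expressed in decibels, has decayed to -10 dB. The sketch below implements that definition for a single signal frame, using a simple running-maximum envelope in place of the published envelope extrapolation; it is a generic illustration, not the authors' code.

```python
import numpy as np

def effective_duration(frame, fs, decay_db=-10.0):
    """Lag (s) at which the normalized ACF envelope first decays to `decay_db`."""
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    acf /= acf[0]                                          # normalize so phi(0) = 1
    env = np.maximum.accumulate(np.abs(acf)[::-1])[::-1]   # non-increasing upper envelope
    env_db = 10.0 * np.log10(np.maximum(env, 1e-12))
    below = np.flatnonzero(env_db <= decay_db)
    return below[0] / fs if below.size else len(acf) / fs

fs = 16000
t = np.arange(0, 0.08, 1 / fs)
# Toy frames: a decaying harmonic keeps a stable waveform longer than noise,
# so it yields a longer effective duration.
vowel_like = np.exp(-t / 0.04) * np.sin(2 * np.pi * 200 * t)
noise_like = np.random.randn(t.size)
print(effective_duration(vowel_like, fs), effective_duration(noise_like, fs))
```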

  20. By the Light of the Moon: North Pacific Dolphins Optimize Foraging with the Lunar Cycle

    NASA Astrophysics Data System (ADS)

    Simonis, Anne Elizabeth

    The influence of the lunar cycle on dolphin foraging behavior was investigated in the productive, southern California Current Ecosystem and the oligotrophic Hawaiian Archipelago. Passive acoustic recordings from 2009 to 2015 were analyzed to document the presence of echolocation from four dolphin species that demonstrate distinct foraging preferences and diving abilities. Visual observations of dolphins, cloud coverage, commercial landings of market squid (Doryteuthis opalescens) and acoustic backscatter of fish were also considered in the Southern California Bight. The temporal variability of echolocation is described from daily to annual timescales, with emphasis on the lunar cycle as an established behavioral driver for potential dolphin prey. For dolphins that foraged at night, the presence of echolocation was reduced during nights of the full moon and during times of night that the moon was present in the night sky. In the Southern California Bight, echolocation activity was reduced for both shallow- diving common dolphins (Delphinus delphis) and deeper-diving Risso's dolphins (Grampus griseus) during times of increased illumination. Seasonal differences in acoustic behavior for both species suggest a geographic shift in dolphin populations, shoaling scattering layers or prey switching behavior during warm months, whereby dolphins target prey that do not vertically migrate. In the Hawaiian Archipelago, deep-diving short-finned pilot whales (Globicephala macrorhynchus) and shallow-diving false killer whales (Pseudorca crassidens) also showed reduced echolocation behavior during periods of increased lunar illumination. In contrast to nocturnal foraging in the northwestern Hawaiian Islands, false killer whales in the main Hawaiian Islands mainly foraged during the day and the lunar cycle showed little influence on their nocturnal acoustic behavior. Different temporal patterns in false killer whale acoustic behavior between the main and northwestern Hawaiian Islands can likely be attributed to the presence of distinct populations or social clusters with dissimilar foraging strategies. Consistent observations of reduced acoustic activity during times of increased lunar illumination show that the lunar cycle is an important predictor for nocturnal dolphin foraging behavior. The result of this research advances the scientific understanding of how dolphins optimize their foraging behavior in response to the changing distribution and abundance of their prey.

  1. A physiologically-inspired model reproducing the speech intelligibility benefit in cochlear implant listeners with residual acoustic hearing.

    PubMed

    Zamaninezhad, Ladan; Hohmann, Volker; Büchner, Andreas; Schädler, Marc René; Jürgens, Tim

    2017-02-01

    This study introduces a speech intelligibility model for cochlear implant users with ipsilateral preserved acoustic hearing that aims at simulating the observed speech-in-noise intelligibility benefit when receiving simultaneous electric and acoustic stimulation (EA-benefit). The model simulates the auditory nerve spiking in response to electric and/or acoustic stimulation. The temporally and spatially integrated spiking patterns were used as the final internal representation of noisy speech. Speech reception thresholds (SRTs) in stationary noise were predicted for a sentence test using an automatic speech recognition framework. The model was employed to systematically investigate the effect of three physiologically relevant model factors on simulated SRTs: (1) the spatial spread of the electric field which co-varies with the number of electrically stimulated auditory nerves, (2) the "internal" noise simulating the deprivation of auditory system, and (3) the upper bound frequency limit of acoustic hearing. The model results show that the simulated SRTs increase monotonically with increasing spatial spread for fixed internal noise, and also increase with increasing the internal noise strength for a fixed spatial spread. The predicted EA-benefit does not follow such a systematic trend and depends on the specific combination of the model parameters. Beyond 300 Hz, the upper bound limit for preserved acoustic hearing is less influential on speech intelligibility of EA-listeners in stationary noise. The proposed model-predicted EA-benefits are within the range of EA-benefits shown by 18 out of 21 actual cochlear implant listeners with preserved acoustic hearing. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Validation and Simulation of Ares I Scale Model Acoustic Test - 2 - Simulations at 5 Foot Elevation for Evaluation of Launch Mount Effects

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Putman, Gabriel C.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity measurements useful for validation including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. Expanding from initial simulations of the ASMAT setup in a held down configuration, simulations have been performed using the Loci/CHEM computational fluid dynamics software for ASMAT tests of the vehicle at 5 ft. elevation (100 ft. real vehicle elevation) with worst case drift in the direction of the launch tower. These tests have been performed without water suppression and have compared the acoustic emissions for launch structures with and without launch mounts. In addition, simulation results have also been compared to acoustic and imagery data collected from similar live-fire tests to assess the accuracy of the simulations. Simulations have shown a marked change in the pattern of emissions after removal of the launch mount with a reduction in the overall acoustic environment experienced by the vehicle and the formation of highly directed acoustic waves moving across the platform deck. Comparisons of simulation results to live-fire test data showed good amplitude and temporal correlation and imagery comparisons over the visible and infrared wavelengths showed qualitative capture of all plume and pressure wave evolution features.

  3. Spawning site selection and contingent behavior in Common Snook, Centropomus undecimalis.

    PubMed

    Lowerre-Barbieri, Susan; Villegas-Ríos, David; Walters, Sarah; Bickford, Joel; Cooper, Wade; Muller, Robert; Trotter, Alexis

    2014-01-01

    Reproductive behavior affects spatial population structure and our ability to manage for sustainability in marine and diadromous fishes. In this study, we used fishery-independent capture-based sampling to evaluate where Common Snook occurred in Tampa Bay and if it changed with spawning season, and passive acoustic telemetry to assess fine-scale behavior at an inlet spawning site (2007-2009). Snook concentrated in three areas during the spawning season, only one of which fell within the expected spawning habitat. Although in lower numbers, they remained in these areas throughout the winter months. Acoustically-tagged snook (n = 31) showed two seasonal patterns at the spawning site: Most fish occurred during the spawning season but several fish displayed more extended residency, supporting the capture-based findings that Common Snook exhibit facultative catadromy. Spawning site selection for iteroparous, multiple-batch spawning fishes occurs at the lifetime, annual, or intra-annual temporal scales. In this study we show colonization of a new spawning site, indicating that lifetime spawning site fidelity of Common Snook is not fixed at this fine spatial scale. However, individuals did exhibit annual and intra-seasonal spawning site fidelity to this new site over the three years studied. The number of fish at the spawning site increased in June and July (peak spawning months) and on new and full lunar phases, indicating within-population variability in spawning and movement patterns. Intra-seasonal patterns of detection also differed significantly with sex. Common Snook exhibited divergent migration tactics and habitat use at the annual and estuarine scales, with contingents using different overwintering habitat. Migration tactics also varied at the spawning site at the intra-seasonal scale and with sex. These results have important implications for understanding how reproductive behavior affects spatio-temporal patterns of fish abundance and their resilience to disturbance events and fishing pressure.

  4. Passive Acoustic Monitoring the Diel, Lunar, Seasonal and Tidal Patterns in the Biosonar Activity of the Indo-Pacific Humpback Dolphins (Sousa chinensis) in the Pearl River Estuary, China.

    PubMed

    Wang, Zhi-Tao; Nachtigall, Paul E; Akamatsu, Tomonari; Wang, Ke-Xiong; Wu, Yu-Ping; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang; Wang, Ding

    2015-01-01

    A growing demand for sustainable energy has led to an increase in the construction of offshore windfarms. The Guishan windmill farm will be constructed in the Pearl River Estuary, China, which sustains the world's largest known population of Indo-Pacific humpback dolphins (Sousa chinensis). Dolphin conservation is an urgent issue in this region. Using passive acoustic monitoring, baseline data on the distribution of this species in the Pearl River Estuary during the pre-construction period were collected. Dolphin biosonar detections and their diel, lunar, seasonal and tidal patterns were examined using a Generalized Linear Model. Significantly higher echolocation detection rates were recorded at night than during the day, in winter-spring than in summer-autumn, and at high tide than at flood tide. At night, echolocation detections were also significantly higher during the new moon. The diel, lunar and seasonal patterns of echolocation encounter duration also varied significantly. These patterns could be due to the spatial-temporal variability of dolphin prey and illumination conditions. This baseline information will be useful for driving further effective action on the conservation of this species and for facilitating later assessments of the effects of the offshore windfarm on the dolphins by comparing the baseline to post-construction and post-mitigation conditions.
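
    A generalized linear model of the kind described, detections per monitoring interval modeled against diel phase, season, lunar phase, and tidal state, can be sketched with statsmodels as below; the variable coding, Poisson family, and synthetic data are illustrative choices, not a reproduction of the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500  # hypothetical hourly monitoring intervals
df = pd.DataFrame({
    "detections": rng.poisson(3, n),                      # click-train detections per interval
    "diel": rng.choice(["day", "night"], n),
    "season": rng.choice(["winter_spring", "summer_autumn"], n),
    "lunar": rng.choice(["new", "first_quarter", "full", "last_quarter"], n),
    "tide": rng.choice(["flood", "high", "ebb", "low"], n),
})

fit = smf.glm("detections ~ diel + season + lunar + tide",
              data=df, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
```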

  5. Passive Acoustic Monitoring the Diel, Lunar, Seasonal and Tidal Patterns in the Biosonar Activity of the Indo-Pacific Humpback Dolphins (Sousa chinensis) in the Pearl River Estuary, China

    PubMed Central

    Wang, Zhi-Tao; Nachtigall, Paul E.; Akamatsu, Tomonari; Wang, Ke-Xiong; Wu, Yu-Ping; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang; Wang, Ding

    2015-01-01

    A growing demand for sustainable energy has led to an increase in the construction of offshore windfarms. The Guishan windmill farm will be constructed in the Pearl River Estuary, China, which sustains the world's largest known population of Indo-Pacific humpback dolphins (Sousa chinensis). Dolphin conservation is an urgent issue in this region. Using passive acoustic monitoring, baseline data on the distribution of this species in the Pearl River Estuary during the pre-construction period were collected. Dolphin biosonar detections and their diel, lunar, seasonal and tidal patterns were examined using a Generalized Linear Model. Significantly higher echolocation detection rates were recorded at night than during the day, in winter-spring than in summer-autumn, and at high tide than at flood tide. At night, echolocation detections were also significantly higher during the new moon. The diel, lunar and seasonal patterns of echolocation encounter duration also varied significantly. These patterns could be due to the spatial-temporal variability of dolphin prey and illumination conditions. This baseline information will be useful for driving further effective action on the conservation of this species and for facilitating later assessments of the effects of the offshore windfarm on the dolphins by comparing the baseline to post-construction and post-mitigation conditions. PMID:26580966

  6. Spatial and temporal trends in fin whale vocalizations recorded in the NE Pacific Ocean between 2003-2013

    PubMed Central

    Weirathmueller, Michelle J.; Stafford, Kathleen M.; Wilcock, William S. D.; Hilmo, Rose S.; Dziak, Robert P.; Tréhu, Anne M.

    2017-01-01

    In order to study the long-term stability of fin whale (Balaenoptera physalus) singing behavior, the frequency and inter-pulse interval of fin whale 20 Hz vocalizations were observed over 10 years from 2003–2013 from bottom mounted hydrophones and seismometers in the northeast Pacific Ocean. The instrument locations extended from 40°N to 48°N and 130°W to 125°W with water depths ranging from 1500–4000 m. The inter-pulse interval (IPI) of fin whale song sequences was observed to increase at a rate of 0.54 seconds/year over the decade of observation. During the same time period, peak frequency decreased at a rate of 0.17 Hz/year. Two primary call patterns were observed. During the earlier years, the more commonly observed pattern had a single frequency and single IPI. In later years, a doublet pattern emerged, with two dominant frequencies and IPIs. Many call sequences in the intervening years appeared to represent a transitional state between the two patterns. The overall trend was consistent across the entire geographical span, although some regional differences exist. Understanding changes in acoustic behavior over long time periods is needed to help establish whether acoustic characteristics can be used to help determine population identity in a widely distributed, difficult to study species such as the fin whale. PMID:29073230
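
    The reported rates (IPI increasing by roughly 0.54 s per year, peak frequency falling by roughly 0.17 Hz per year) are linear trends fitted to the yearly measurements. A minimal sketch of such a fit on synthetic yearly values built around those published rates (the intercepts and noise levels are illustrative):

```python
import numpy as np

years = np.arange(2003, 2014, dtype=float)
rng = np.random.default_rng(2)
# Synthetic yearly means built around the published trends (illustrative starting values).
ipi = 24.0 + 0.54 * (years - 2003) + rng.normal(0, 0.2, years.size)          # seconds
peak_freq = 22.0 - 0.17 * (years - 2003) + rng.normal(0, 0.05, years.size)   # Hz

ipi_slope = np.polyfit(years, ipi, 1)[0]
freq_slope = np.polyfit(years, peak_freq, 1)[0]
print(f"IPI trend ~ {ipi_slope:+.2f} s/yr, peak-frequency trend ~ {freq_slope:+.2f} Hz/yr")
```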

  7. Spatial and temporal trends in fin whale vocalizations recorded in the NE Pacific Ocean between 2003-2013.

    PubMed

    Weirathmueller, Michelle J; Stafford, Kathleen M; Wilcock, William S D; Hilmo, Rose S; Dziak, Robert P; Tréhu, Anne M

    2017-01-01

    In order to study the long-term stability of fin whale (Balaenoptera physalus) singing behavior, the frequency and inter-pulse interval of fin whale 20 Hz vocalizations were observed over 10 years from 2003-2013 from bottom mounted hydrophones and seismometers in the northeast Pacific Ocean. The instrument locations extended from 40°N to 48°N and 130°W to 125°W with water depths ranging from 1500-4000 m. The inter-pulse interval (IPI) of fin whale song sequences was observed to increase at a rate of 0.54 seconds/year over the decade of observation. During the same time period, peak frequency decreased at a rate of 0.17 Hz/year. Two primary call patterns were observed. During the earlier years, the more commonly observed pattern had a single frequency and single IPI. In later years, a doublet pattern emerged, with two dominant frequencies and IPIs. Many call sequences in the intervening years appeared to represent a transitional state between the two patterns. The overall trend was consistent across the entire geographical span, although some regional differences exist. Understanding changes in acoustic behavior over long time periods is needed to help establish whether acoustic characteristics can be used to help determine population identity in a widely distributed, difficult to study species such as the fin whale.

  8. Dynamic speech representations in the human temporal lobe.

    PubMed

    Leonard, Matthew K; Chang, Edward F

    2014-09-01

    Speech perception requires rapid integration of acoustic input with context-dependent knowledge. Recent methodological advances have allowed researchers to identify underlying information representations in primary and secondary auditory cortex and to examine how context modulates these representations. We review recent studies that focus on contextual modulations of neural activity in the superior temporal gyrus (STG), a major hub for spectrotemporal encoding. Recent findings suggest a highly interactive flow of information processing through the auditory ventral stream, including influences of higher-level linguistic and metalinguistic knowledge, even within individual areas. Such mechanisms may give rise to more abstract representations, such as those for words. We discuss the importance of characterizing representations of context-dependent and dynamic patterns of neural activity in the approach to speech perception research. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. The Extended Concept Of Symmetropy And Its Application To Earthquakes And Acoustic Emissions

    NASA Astrophysics Data System (ADS)

    Nanjo, K.; Yodogawa, E.

    2003-12-01

    Symmetropy is an entropy-based measure that quantifies the heterogeneity of symmetry, and hence the asymmetry, of a pattern (Yodogawa, 1982; Nanjo et al., 2000, 2001, 2002 in press). In previous studies, symmetropy was estimated for the spatial distributions of acoustic emissions generated before the ultimate failure of a rock specimen in laboratory experiments, and for the spatial distributions of earthquakes in a seismic source model with self-organized criticality (SOC). In each of these estimations, the region over which symmetropy was computed was taken to coincide with the rock specimen generating the acoustic emissions or with the SOC seismic source model from which the earthquakes emerged. For local seismicity in the Earth's crust, such as aftershocks, foreshocks and earthquake swarms, it is difficult to delineate objectively the region that characterizes the activity, so the original concept of symmetropy cannot be applied directly and must be modified. Here we introduce symmetropy into the nonlinear geosciences and extend it so that it can be applied to local seismicity such as aftershocks, foreshocks and earthquake swarms. We apply the extended concept to the spatial distributions of acoustic emissions from a previous laboratory experiment in which the failure process of a brittle granite sample was stabilized by controlling axial stress to maintain a constant rate of acoustic emissions, yielding a detailed view of fracture nucleation and growth. We also apply it to the temporal variations of the spatial distributions of aftershocks and foreshocks of main shocks, using observed earthquake data in and around Japan. Our results demonstrate the applicability of the extended concept of symmetropy to earthquakes and acoustic emissions. The concept of symmetropy, or its extension, may also be adapted to pattern recognition in many fields of science, particularly the nonlinear geosciences and the sciences of complexity. References: Yodogawa, 1982, Percept. Psychophys., v. 32, p. 230-240; Nanjo et al., 2000, Forma, v. 15, p. 95-101; Nanjo et al., 2001, Forma, v. 16, p. 213-224; Nanjo et al., 2002 (in press), Symmetry: Art and Science, v. 2.

  10. The role of temporal call structure in species recognition of male Allobates talamancae (Cope, 1875): (Anura: Dendrobatidae).

    PubMed

    Kollarits, Dennis; Wappl, Christian; Ringler, Max

    2017-01-30

    Acoustic species recognition in anurans depends on spectral and temporal characteristics of the advertisement call. The recognition space of a species is shaped by the likelihood of heterospecific acoustic interference. The dendrobatid frogs Allobates talamancae (Cope, 1875) and Silverstoneia flotator (Dunn, 1931) occur syntopically in south-west Costa Rica. A previous study showed that these two species avoid acoustic interference by spectral stratification. In this study, the role of the temporal call structure in the advertisement call of A. talamancae was analyzed, in particular the internote-interval duration in providing species-specific temporal cues. In playback trials, artificial advertisement calls with internote-intervals deviating up to ±90% from the population mean internote-interval were broadcast to vocally active territorial males. The phonotactic reactions of the males indicated that, unlike in closely related species, internote-interval duration is not a call property essential for species recognition in A. talamancae. However, temporal call structure may be used for species recognition when the likelihood of heterospecific interference is high. Also, the close-encounter courtship call of male A. talamancae is described.

  11. High temporal resolution of extreme rainfall rate variability and the acoustic classification of rainfall

    NASA Astrophysics Data System (ADS)

    Nystuen, Jeffrey A.; Amitai, Eyal

    2003-04-01

    The underwater sound generated by raindrop splashes on a water surface is loud and unique allowing detection, classification and quantification of rainfall. One of the advantages of the acoustic measurement is that the listening area, an effective catchment area, is proportional to the depth of the hydrophone and can be orders of magnitude greater than other in situ rain gauges. This feature allows high temporal resolution of the rainfall measurement. A series of rain events with extremely high rainfall rates, over 100 mm/hr, is examined acoustically. Rapid onset and cessation of rainfall intensity are detected within the convective cells of these storms with maximum 5-s resolution values exceeding 1000 mm/hr. The probability distribution functions (pdf) for rainfall rate occurrence and water volume using the longer temporal resolutions typical of other instruments do not include these extreme values. The variance of sound intensity within different acoustic frequency bands can be used as an aid to classify rainfall type. Objective acoustic classification algorithms are proposed. Within each rainfall classification the relationship between sound intensity and rainfall rate is nearly linear. The reflectivity factor, Z, also has a linear relationship with rainfall rate, R, for each rainfall classification.
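
    Within each acoustic rainfall class the relationship between sound intensity (in linear units, not dB level) and rainfall rate is reported to be nearly linear, so rain rate can be read off from a class-specific calibration. The sketch below shows that conversion for hypothetical 5-s band sound levels; the calibration coefficients are placeholders, not the published regressions.

```python
import numpy as np

# Placeholder class-specific slopes: rain rate (mm/hr) per unit of linear sound intensity.
CALIBRATION = {"drizzle": 2e-5, "stratiform": 1e-4, "convective": 6e-4}

def rain_rate(spl_db, rain_class):
    """Convert a 5-s band sound pressure level (dB) to rainfall rate (mm/hr)."""
    intensity = 10.0 ** (spl_db / 10.0)   # dB level -> linear intensity
    return CALIBRATION[rain_class] * intensity

# Hypothetical 5-s samples during a convective cell.
for spl in (52.0, 58.0, 63.0):
    print(f"{spl:.0f} dB -> {rain_rate(spl, 'convective'):8.1f} mm/hr")
```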

  12. Detection of spatio-temporal change of ocean acoustic velocity for observing seafloor crustal deformation applying seismological methods

    NASA Astrophysics Data System (ADS)

    Eto, S.; Nagai, S.; Tadokoro, K.

    2011-12-01

    Our group has developed a system for observing seafloor crustal deformation that combines acoustic ranging with kinematic GPS positioning. One effective way to reduce the estimation error of the submarine benchmark positions in this system is to model the variation of ocean acoustic velocity. Because our simple acquisition procedure for acoustic ranging data makes it difficult to estimate a 3-dimensional acoustic velocity structure including its temporal change, we estimated various 1-dimensional velocity-depth models under constraints. We then applied the joint hypocenter determination method from seismology [Kissling et al., 1994] to the acoustic ranging data. Two constraints were imposed in the inversion: 1) the acoustic velocity in the deeper part is fixed, because it is usually stable in both space and time, and 2) each inverted velocity model must decrease with depth. We detected two remarkable spatio-temporal changes in acoustic velocity: 1) variations of travel-time residuals at the same points over short time intervals, and 2) large differences between the residuals at neighboring points derived from travel times to different benchmarks. The first result cannot be explained by changes in atmospheric conditions alone, including heating by sunlight. To examine the residual variations described in the second result, we performed forward modeling of acoustic ranging data using velocity models to which velocity anomalies were added, calculating travel times with a pseudo-bending ray tracing method [Um and Thurber, 1987] to assess the effect of a velocity anomaly on the travel-time differences. Comparison of the observed residuals with the travel-time differences from the forward modeling indicates that velocity anomaly bodies at shallow depth can produce these anomalous residuals, which may indicate moving water masses. We therefore need to include velocity anomalies in the acoustic velocity structure model used in acoustic ranging analysis and/or to develop a new system with a large number of sea-surface stations to detect them, which may reduce the error in seafloor benchmark positions.

  13. Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field

    PubMed Central

    Paisley, John

    2018-01-01

    The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity. PMID:29806015

  14. Ecological Insights from Pelagic Habitats Acquired Using Active Acoustic Techniques.

    PubMed

    Benoit-Bird, Kelly J; Lawson, Gareth L

    2016-01-01

    Marine pelagic ecosystems present fascinating opportunities for ecological investigation but pose important methodological challenges for sampling. Active acoustic techniques involve producing sound and receiving signals from organisms and other water column sources, offering the benefit of high spatial and temporal resolution and, via integration into different platforms, the ability to make measurements spanning a range of spatial and temporal scales. As a consequence, a variety of questions concerning the ecology of pelagic systems lend themselves to active acoustics, ranging from organism-level investigations and physiological responses to the environment to ecosystem-level studies and climate. As technologies and data analysis methods have matured, the use of acoustics in ecological studies has grown rapidly. We explore the continued role of active acoustics in addressing questions concerning life in the ocean, highlight creative applications to key ecological themes ranging from physiology and behavior to biogeography and climate, and discuss emerging avenues where acoustics can help determine how pelagic ecosystems function.

  15. Examination of time-reversal acoustics in shallow water and applications to noncoherent underwater communications.

    PubMed

    Smith, Kevin B; Abrantes, Antonio A M; Larraza, Andres

    2003-06-01

    The shallow water acoustic communication channel is characterized by strong signal degradation caused by multipath propagation and high spatial and temporal variability of the channel conditions. At the receiver, multipath propagation causes intersymbol interference and is considered the most important of the channel distortions. This paper examines the application of time-reversal acoustic (TRA) arrays, i.e., phase-conjugated arrays (PCAs), that generate a spatio-temporal focus of acoustic energy at the receiver location, eliminating distortions introduced by channel propagation. This technique is self-adaptive and automatically compensates for environmental effects and array imperfections without the need to explicitly characterize the environment. An attempt is made to characterize the influences of a PCA design on its focusing properties with particular attention given to applications in noncoherent underwater acoustic communication systems. Due to the PCA spatial diversity focusing properties, PC arrays may have an important role in an acoustic local area network. Each array is able to simultaneously transmit different messages that will focus only at the destination receiver node.
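
    The self-adaptive focusing of a phase-conjugate array can be illustrated in discrete time: record the probe signal after it has passed through the multipath channel, time-reverse it, and retransmit; propagation back through the same channel yields, per element, the autocorrelation of the channel response, which concentrates energy into a sharp focus. A toy single-element sketch (a real PCA sums this over many array elements, further suppressing the sidelobes):

```python
import numpy as np

# Toy multipath channel: a few discrete arrivals with assorted delays and amplitudes.
h = np.zeros(200)
h[[0, 37, 62, 110, 154]] = [1.0, 0.6, -0.4, 0.3, 0.2]

probe = np.zeros(50)
probe[0] = 1.0                            # impulsive probe sent from the receiver location

received = np.convolve(probe, h)          # what the array element records
retransmit = received[::-1]               # time reversal (phase conjugation in frequency)
at_receiver = np.convolve(retransmit, h)  # propagate back through the same channel

# The retransmitted field collapses to a sharp peak ~ the autocorrelation of h.
mag = np.abs(at_receiver)
peak_idx = int(np.argmax(mag))
sidelobe = np.max(np.delete(mag, peak_idx))
print(f"focal peak {mag[peak_idx]:.2f}, largest sidelobe {sidelobe:.2f}")
```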

  16. Use of principal velocity patterns in the analysis of structural acoustic optimization.

    PubMed

    Johnson, Wayne M; Cunefare, Kenneth A

    2007-02-01

    This work presents an application of principal velocity patterns in the analysis of the structural acoustic design optimization of an eight-ply composite cylindrical shell. The approach consists of performing structural acoustic optimizations of a composite cylindrical shell subject to external harmonic monopole excitation. The ply angles are used as the design variables in the optimization. The results of the ply angle design variable formulation are interpreted using the singular value decomposition of the interior acoustic potential energy. The decomposition of the acoustic potential energy provides surface velocity patterns associated with lower levels of interior noise. These surface velocity patterns are shown to correspond to those from the structural acoustic optimization results. Thus, it is demonstrated that the capacity to design multi-ply composite cylinders for quiet interiors is determined by how well the cylinder can be designed to exhibit particular surface velocity patterns associated with lower noise levels.
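
    The analysis step described, taking a singular value decomposition related to the interior acoustic potential energy to obtain surface velocity patterns associated with low interior noise, can be sketched schematically; the matrix below is a random stand-in for the actual structural-acoustic coupling matrix, so only the linear algebra is illustrated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in coupling matrix: rows = interior acoustic response observations,
# columns = surface velocity degrees of freedom (a real model would supply this).
A = rng.standard_normal((120, 40))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Right singular vectors with the *smallest* singular values are surface velocity
# patterns that couple weakly into interior acoustic potential energy; designs
# (e.g., ply-angle choices) that push the response toward these patterns favor
# quieter interiors.
quiet_patterns = Vt[-3:]
print(s[:3], s[-3:], quiet_patterns.shape)
```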

  17. Multistability in auditory stream segregation: a predictive coding view

    PubMed Central

    Winkler, István; Denham, Susan; Mill, Robert; Bőhm, Tamás M.; Bendixen, Alexandra

    2012-01-01

    Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggest that some, perhaps many of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm. PMID:22371621

  18. The Neural Representation of Consonant-Vowel Transitions in Adults Who Wear Hearing Aids

    PubMed Central

    Tremblay, Kelly L.; Kalstein, Laura; Billings, Curtis J.; Souza, Pamela E.

    2006-01-01

    Hearing aids help compensate for disorders of the ear by amplifying sound; however, their effectiveness also depends on the central auditory system's ability to represent and integrate spectral and temporal information delivered by the hearing aid. The authors report that the neural detection of time-varying acoustic cues contained in speech can be recorded in adult hearing aid users using the acoustic change complex (ACC). Seven adults (50–76 years) with mild to severe sensorineural hearing loss participated in the study. When presented with 2 identifiable consonant-vowel (CV) syllables (“shee” and “see”), the neural detection of CV transitions (as indicated by the presence of a P1-N1-P2 response) was different for each speech sound. More specifically, the latency of the evoked neural response coincided in time with the onset of the vowel, similar to the latency patterns the authors previously reported in normal-hearing listeners. PMID:16959736

  19. Discriminating Simulated Vocal Tremor Source Using Amplitude Modulation Spectra

    PubMed Central

    Carbonell, Kathy M.; Lester, Rosemary A.; Story, Brad H.; Lotto, Andrew J.

    2014-01-01

    Objectives/Hypothesis Sources of vocal tremor are difficult to categorize perceptually and acoustically. This paper describes a preliminary attempt to discriminate vocal tremor sources through the use of spectral measures of the amplitude envelope. The hypothesis is that different vocal tremor sources are associated with distinct patterns of acoustic amplitude modulations. Study Design Statistical categorization methods (discriminant function analysis) were used to discriminate signals from simulated vocal tremor with different sources using only acoustic measures derived from the amplitude envelopes. Methods Simulations of vocal tremor were created by modulating parameters of a vocal fold model corresponding to oscillations of respiratory driving pressure (respiratory tremor), degree of vocal fold adduction (adductory tremor) and fundamental frequency of vocal fold vibration (F0 tremor). The acoustic measures were based on spectral analyses of the amplitude envelope computed across the entire signal and within select frequency bands. Results The signals could be categorized (with accuracy well above chance) in terms of the simulated tremor source using only measures of the amplitude envelope spectrum even when multiple sources of tremor were included. Conclusions These results supply initial support for an amplitude-envelope based approach to identify the source of vocal tremor and provide further evidence for the rich information about talker characteristics present in the temporal structure of the amplitude envelope. PMID:25532813
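
    The following sketch illustrates the general recipe the abstract describes, on synthetic signals: extract the amplitude envelope, summarize its modulation spectrum in a few bands, and train a discriminant classifier. Band limits, the toy signal model, and the use of scikit-learn's LinearDiscriminantAnalysis are assumptions for illustration, not the authors' implementation.

      import numpy as np
      from scipy.signal import hilbert
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      FS = 44100  # sampling rate (assumed)

      def envelope_features(signal, fs=FS, bands=((2, 5), (5, 10), (10, 20))):
          env = np.abs(hilbert(signal))                  # amplitude envelope
          env = env - env.mean()
          spec = np.abs(np.fft.rfft(env)) ** 2           # envelope (modulation) spectrum
          freqs = np.fft.rfftfreq(len(env), 1 / fs)
          return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

      def toy_signal(mod_hz, dur=1.0, fs=FS):
          # toy "voice": 150 Hz tone with amplitude modulation at mod_hz
          t = np.arange(int(dur * fs)) / fs
          return (1 + 0.3 * np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * 150 * t)

      rng = np.random.default_rng(1)
      X, y = [], []
      for label, mod in enumerate([3.0, 6.0, 12.0]):     # stand-ins for different tremor sources
          for _ in range(20):
              sig = toy_signal(mod + rng.normal(0, 0.3)) + rng.normal(0, 0.05, int(FS))
              X.append(envelope_features(sig))
              y.append(label)

      clf = LinearDiscriminantAnalysis().fit(X, y)       # discriminant analysis on envelope features
      print("training accuracy:", clf.score(X, y))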

  20. Acoustic Processing of Temporally Modulated Sounds in Infants: Evidence from a Combined Near-Infrared Spectroscopy and EEG Study

    PubMed Central

    Telkemeyer, Silke; Rossi, Sonja; Nierhaus, Till; Steinbrink, Jens; Obrig, Hellmuth; Wartenburger, Isabell

    2010-01-01

    Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory-evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations, indicating a sensitivity to temporal acoustic variations. Oscillatory responses reveal an effect of development, that is, 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasing fine-grained perception for spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for lateralization of speech perception. The results show that concurrent assessment of vascular-based imaging and electrophysiological responses has great potential in the research on language acquisition. PMID:21716574

  1. Acoustically regulated optical emission dynamics from quantum dot-like emission centers in GaN/InGaN nanowire heterostructures

    NASA Astrophysics Data System (ADS)

    Lazić, S.; Chernysheva, E.; Hernández-Mínguez, A.; Santos, P. V.; van der Meulen, H. P.

    2018-03-01

    We report on experimental studies of the effects induced by surface acoustic waves on the optical emission dynamics of GaN/InGaN nanowire quantum dots. We employ stroboscopic optical excitation with either time-integrated or time-resolved photoluminescence detection. In the absence of the acoustic wave, the emission spectra reveal signatures originated from the recombination of neutral exciton and biexciton confined in the probed nanowire quantum dot. When the nanowire is perturbed by the propagating acoustic wave, the embedded quantum dot is periodically strained and its excitonic transitions are modulated by the acousto-mechanical coupling. Depending on the recombination lifetime of the involved optical transitions, we can resolve acoustically driven radiative processes over time scales defined by the acoustic cycle. At high acoustic amplitudes, we also observe distortions in the transmitted acoustic waveform, which are reflected in the time-dependent spectral response of our sensor quantum dot. In addition, the correlated intensity oscillations observed during temporal decay of the exciton and biexciton emission suggest an effect of the acoustic piezoelectric fields on the quantum dot charge population. The present results are relevant for the dynamic spectral and temporal control of photon emission in III-nitride semiconductor heterostructures.

  2. Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2017-01-01

    Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788

  3. Temporal separation of two fin whale call types across the eastern North Pacific.

    PubMed

    Sirović, Ana; Williams, Lauren N; Kerosky, Sara M; Wiggins, Sean M; Hildebrand, John A

    2013-01-01

    Fin whales (Balaenoptera physalus) produce a variety of low-frequency, short-duration, frequency-modulated calls. The differences in temporal patterns between two fin whale call types are described from long-term passive acoustic data collected intermittently between 2005 and 2011 at three locations across the eastern North Pacific: the Bering Sea, off Southern California, and in Canal de Ballenas in the northern Gulf of California. Fin whale calls were detected at all sites year-round, during all periods with recordings. At all three locations, 40-Hz calls peaked in June, preceding a peak in 20-Hz calls by 3-5 months. Monitoring both call types may provide a more accurate insight into the seasonal presence of fin whales across the eastern North Pacific than can be obtained from a single call type. The 40-Hz call may be associated with a foraging function, and temporal separation between 40- and 20-Hz calls may indicate the separation between predominantly feeding behavior and other social interactions.

  4. Possible Role of Mother-Daughter Vocal Interactions on the Development of Species-Specific Song in Gibbons

    PubMed Central

    Koda, Hiroki; Lemasson, Alban; Oyakawa, Chisako; Rizaldi; Pamungkas, Joko; Masataka, Nobuo

    2013-01-01

    Mother-infant vocal interactions play a crucial role in the development of human language. However, comparatively little is known about the maternal role during vocal development in nonhuman primates. Here, we report the first evidence of mother-daughter vocal interactions contributing to vocal development in gibbons, a singing and monogamous ape species. Gibbons are well known for their species-specific duets sung between mates, yet little is known about the role of intergenerational duets in gibbon song development. We observed singing interactions between free-ranging mothers and their sub-adult daughters prior to emigration. Daughters sang simultaneously with their mothers at different rates. First, we observed significant acoustic variation between daughters. Co-singing rates between mother and daughter were negatively correlated with the temporal precision of the song’s synchronization. In addition, songs of daughters who co-sang less with their mothers were acoustically more similar to the maternal song than any other adult female’s song. All of these variables have been reported to be influenced by the social relationships of pairs. Those correlations may therefore be mediated by the mother-daughter social relationship, which changes over the daughter’s development. We hypothesized that daughters who co-sing less often, synchronize well, and converge acoustically on the maternal pattern are at a more advanced stage of social independence prior to emigration. Second, we observed acoustic matching between mothers and daughters when co-singing, suggesting short-term vocal flexibility. Third, we found that mothers adjusted songs to a more stereotyped pattern when co-singing than when singing alone. This vocal adjustment was stronger for mothers with daughters who co-sang less. These results indicate the presence of socially mediated vocal flexibility in gibbon sub-adults and adults, and that mother-daughter co-singing interactions may enhance vocal development. More comparative work, notably longitudinal and experimental, is now needed to clarify maternal roles during song development. PMID:23951160

  5. Possible role of mother-daughter vocal interactions on the development of species-specific song in gibbons.

    PubMed

    Koda, Hiroki; Lemasson, Alban; Oyakawa, Chisako; Rizaldi; Pamungkas, Joko; Masataka, Nobuo

    2013-01-01

    Mother-infant vocal interactions play a crucial role in the development of human language. However, comparatively little is known about the maternal role during vocal development in nonhuman primates. Here, we report the first evidence of mother-daughter vocal interactions contributing to vocal development in gibbons, a singing and monogamous ape species. Gibbons are well known for their species-specific duets sung between mates, yet little is known about the role of intergenerational duets in gibbon song development. We observed singing interactions between free-ranging mothers and their sub-adult daughters prior to emigration. Daughters sang simultaneously with their mothers at different rates. First, we observed significant acoustic variation between daughters. Co-singing rates between mother and daughter were negatively correlated with the temporal precision of the song's synchronization. In addition, songs of daughters who co-sang less with their mothers were acoustically more similar to the maternal song than any other adult female's song. All of these variables have been reported to be influenced by the social relationships of pairs. Those correlations may therefore be mediated by the mother-daughter social relationship, which changes over the daughter's development. We hypothesized that daughters who co-sing less often, synchronize well, and converge acoustically on the maternal pattern are at a more advanced stage of social independence prior to emigration. Second, we observed acoustic matching between mothers and daughters when co-singing, suggesting short-term vocal flexibility. Third, we found that mothers adjusted songs to a more stereotyped pattern when co-singing than when singing alone. This vocal adjustment was stronger for mothers with daughters who co-sang less. These results indicate the presence of socially mediated vocal flexibility in gibbon sub-adults and adults, and that mother-daughter co-singing interactions may enhance vocal development. More comparative work, notably longitudinal and experimental, is now needed to clarify maternal roles during song development.

  6. Classifying acoustic signals into phoneme categories: average and dyslexic readers make use of complex dynamical patterns and multifractal scaling properties of the speech signal

    PubMed Central

    2015-01-01

    Several competing aetiologies of developmental dyslexia suggest that the problems with acquiring literacy skills are causally entailed by low-level auditory and/or speech perception processes. The purpose of this study is to evaluate the diverging claims about the specific deficient perceptual processes under conditions of strong inference. Theoretically relevant acoustic features were extracted from a set of artificial speech stimuli that lie on a /bAk/-/dAk/ continuum. The features were tested on their ability to enable a simple classifier (Quadratic Discriminant Analysis) to reproduce the observed classification performance of average and dyslexic readers in a speech perception experiment. The ‘classical’ features examined were based on component process accounts of developmental dyslexia such as the supposed deficit in Envelope Rise Time detection and the deficit in the detection of rapid changes in the distribution of energy in the frequency spectrum (formant transitions). Studies examining these temporal processing deficit hypotheses do not employ measures that quantify the temporal dynamics of stimuli. It is shown that measures based on quantification of the dynamics of complex, interaction-dominant systems (Recurrence Quantification Analysis and the multifractal spectrum) enable QDA to classify the stimuli in close agreement with the classifications observed in dyslexic and average-reading participants. It seems unlikely that participants used any of the features that are traditionally associated with accounts of (impaired) speech perception. The nature of the variables quantifying the temporal dynamics of the speech stimuli implies that the classification of speech stimuli cannot be regarded as a linear aggregate of component processes that each parse the acoustic signal independently of one another, as is assumed by the ‘classical’ aetiologies of developmental dyslexia. The results suggest that the differences in speech perception performance between average and dyslexic readers represent a scaled continuum rather than being caused by a specific deficient component. PMID:25834769
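
    One of the dynamics-based measures mentioned above, the recurrence rate from Recurrence Quantification Analysis, can be sketched in a few lines; the embedding parameters and distance threshold below are arbitrary choices, and the determinism and multifractal measures used in the study are not reproduced.

      import numpy as np

      def delay_embed(x, dim=3, tau=4):
          """Time-delay embedding of a 1-D signal."""
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

      def recurrence_rate(x, dim=3, tau=4, radius=None):
          """Fraction of point pairs in the embedded signal closer than `radius`."""
          emb = delay_embed(np.asarray(x, float), dim, tau)
          d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
          if radius is None:
              radius = 0.1 * d.max()            # simple data-driven threshold (assumption)
          rec = d < radius
          np.fill_diagonal(rec, False)          # exclude self-recurrences
          return rec.mean()

      t = np.linspace(0, 1, 400)
      periodic = np.sin(2 * np.pi * 10 * t)
      noisy = np.random.default_rng(2).standard_normal(400)
      print(recurrence_rate(periodic), recurrence_rate(noisy))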

  7. Year-round spatiotemporal distribution of harbour porpoises within and around the Maryland wind energy area

    PubMed Central

    O’Brien, Michael; Lyubchich, Vyacheslav; Roberts, Jason J.; Halpin, Patrick N.; Rice, Aaron N.; Bailey, Helen

    2017-01-01

    Offshore windfarms provide renewable energy, but activities during the construction phase can affect marine mammals. To understand how the construction of an offshore windfarm in the Maryland Wind Energy Area (WEA) off Maryland, USA, might impact harbour porpoises (Phocoena phocoena), it is essential to determine their poorly understood year-round distribution. Although habitat-based models can help predict the occurrence of species in areas with limited or no sampling, they require validation to determine the accuracy of the predictions. Incorporating more than 18 months of harbour porpoise detection data from passive acoustic monitoring, generalized auto-regressive moving average and generalized additive models were used to investigate harbour porpoise occurrence within and around the Maryland WEA in relation to temporal and environmental variables. Acoustic detection metrics were compared to habitat-based density estimates derived from aerial and boat-based sightings to validate the model predictions. Harbour porpoises occurred significantly more frequently during January to May, and foraged significantly more often in the evenings to early mornings at sites within and outside the Maryland WEA. Harbour porpoise occurrence peaked at sea surface temperatures of 5°C and chlorophyll a concentrations of 4.5 to 7.4 mg m-3. The acoustic detections were significantly correlated with the predicted densities, except at the most inshore site. This study provides insight into previously unknown fine-scale spatial and temporal patterns in distribution of harbour porpoises offshore of Maryland. The results can be used to help inform future monitoring and mitigate the impacts of windfarm construction and other human activities. PMID:28467455

  8. Year-round spatiotemporal distribution of harbour porpoises within and around the Maryland wind energy area.

    PubMed

    Wingfield, Jessica E; O'Brien, Michael; Lyubchich, Vyacheslav; Roberts, Jason J; Halpin, Patrick N; Rice, Aaron N; Bailey, Helen

    2017-01-01

    Offshore windfarms provide renewable energy, but activities during the construction phase can affect marine mammals. To understand how the construction of an offshore windfarm in the Maryland Wind Energy Area (WEA) off Maryland, USA, might impact harbour porpoises (Phocoena phocoena), it is essential to determine their poorly understood year-round distribution. Although habitat-based models can help predict the occurrence of species in areas with limited or no sampling, they require validation to determine the accuracy of the predictions. Incorporating more than 18 months of harbour porpoise detection data from passive acoustic monitoring, generalized auto-regressive moving average and generalized additive models were used to investigate harbour porpoise occurrence within and around the Maryland WEA in relation to temporal and environmental variables. Acoustic detection metrics were compared to habitat-based density estimates derived from aerial and boat-based sightings to validate the model predictions. Harbour porpoises occurred significantly more frequently during January to May, and foraged significantly more often in the evenings to early mornings at sites within and outside the Maryland WEA. Harbour porpoise occurrence peaked at sea surface temperatures of 5°C and chlorophyll a concentrations of 4.5 to 7.4 mg m-3. The acoustic detections were significantly correlated with the predicted densities, except at the most inshore site. This study provides insight into previously unknown fine-scale spatial and temporal patterns in distribution of harbour porpoises offshore of Maryland. The results can be used to help inform future monitoring and mitigate the impacts of windfarm construction and other human activities.
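
    A minimal sketch of the GAM-style occurrence modelling described in the two records above, using simulated data and hypothetical covariate names; it approximates smooth terms with basis splines in a Poisson GLM and ignores the autocorrelation structure that the study's GARMA models handle.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Fabricated hourly detection data with temporal and environmental covariates.
      rng = np.random.default_rng(3)
      n = 1000
      df = pd.DataFrame({
          "sst": rng.uniform(2, 25, n),          # sea surface temperature (degC)
          "chla": rng.uniform(0.5, 10, n),       # chlorophyll a (mg m-3)
          "hour": rng.integers(0, 24, n),        # hour of day
      })
      rate = np.exp(0.5 - 0.02 * (df["sst"] - 5) ** 2
                    + 0.1 * np.cos(2 * np.pi * df["hour"] / 24))
      df["detections"] = rng.poisson(rate.to_numpy())

      # Smooth terms via basis splines; diel cycle via harmonic terms.
      model = smf.glm(
          "detections ~ bs(sst, df=4) + bs(chla, df=4)"
          " + np.cos(2*np.pi*hour/24) + np.sin(2*np.pi*hour/24)",
          data=df,
          family=sm.families.Poisson(),
      ).fit()
      print(model.params.round(2))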

  9. Evaluation of the acoustic coordinated reset (CR®) neuromodulation therapy for tinnitus: study protocol for a double-blind randomized placebo-controlled trial.

    PubMed

    Hoare, Derek J; Pierzycki, Robert H; Thomas, Holly; McAlpine, David; Hall, Deborah A

    2013-07-10

    Current theories of tinnitus assume that the phantom sound is generated either through increased spontaneous activity of neurons in the auditory brain, or through pathological temporal firing patterns of the spontaneous neuronal discharge, or a combination of both factors. With this in mind, Tass and colleagues recently tested a number of temporally patterned acoustic stimulation strategies in a proof of concept study. Potential therapeutic sound regimes were derived according to a paradigm assumed to disrupt hypersynchronous neuronal activity, and promote plasticity mechanisms that stabilize a state of asynchronous spontaneous activity. This would correspond to a permanent reduction of tinnitus. The proof of concept study, conducted in Germany, confirmed the safety of the acoustic stimuli for use in tinnitus, and exploratory results indicated modulation of tinnitus-related pathological synchronous activity with potential therapeutic benefit. The most effective stimulation paradigm is now in clinical use as a sound therapy device, the acoustic coordinated reset (CR®) neuromodulation (Adaptive Neuromodulation GmbH (ANM), Köln, Germany). To measure the efficacy of CR® neuromodulation, we devised a powered, two-center, randomized controlled trial (RCT) compliant with the reporting standards defined in the Consolidated Standards of Reporting Trials (CONSORT) Statement. The RCT design also addresses the recent call for international standards within the tinnitus community for high-quality clinical trials. The design uses a between-subjects comparison with minimized allocation of participants to treatment and placebo groups. A minimization approach was selected to ensure that the two groups are balanced with respect to age, gender, hearing, and baseline tinnitus severity. The protocol ensures double blinding, with crossover of the placebo group to receive the proprietary intervention after 12 weeks. The primary endpoints are the pre- and post-treatment measures that provide the primary measures of efficacy, namely a validated and sensitive questionnaire measure of the functional impact of tinnitus. The trial is also designed to capture secondary changes in tinnitus handicap, quality (pitch, loudness, bandwidth), and changes in tinnitus-related pathological synchronous brain activity using electroencephalography (EEG). This RCT was designed to provide a confident high-level estimate of the efficacy of sound therapy using CR® neuromodulation compared to a well-matched placebo intervention, and uniquely in terms of sound therapy, examine the physiological effects of the intervention against its putative mechanism of action. ClinicalTrials.gov, NCT01541969.

  10. Long-Term Monitoring of Dolphin Biosonar Activity in Deep Pelagic Waters of the Mediterranean Sea.

    PubMed

    Caruso, Francesco; Alonge, Giuseppe; Bellia, Giorgio; De Domenico, Emilio; Grammauta, Rosario; Larosa, Giuseppina; Mazzola, Salvatore; Riccobene, Giorgio; Pavan, Gianni; Papale, Elena; Pellegrino, Carmelo; Pulvirenti, Sara; Sciacca, Virginia; Simeone, Francesco; Speziale, Fabrizio; Viola, Salvatore; Buscaino, Giuseppa

    2017-06-28

    Dolphins emit short ultrasonic pulses (clicks) to acquire information about the surrounding environment, prey and habitat features. We investigated Delphinidae activity over multiple temporal scales through the detection of their echolocation clicks, using long-term Passive Acoustic Monitoring (PAM). The Istituto Nazionale di Fisica Nucleare operates multidisciplinary seafloor observatories in a deep area of the Central Mediterranean Sea. The Ocean noise Detection Experiment collected data offshore of the Gulf of Catania from January 2005 to November 2006, allowing the study of temporal patterns of dolphin activity in this deep pelagic zone for the first time. Nearly 5,500 five-minute recordings acquired over two years were examined using spectrogram analysis and through development and testing of an automatic detection algorithm. Echolocation activity of dolphins was mostly confined to nighttime and crepuscular hours, in contrast with communicative signals (whistles). Seasonal variation, with a peak number of clicks in August, was also evident, but no effect of the lunar cycle was observed. Temporal trends in echolocation corresponded to environmental and trophic variability known in the deep pelagic waters of the Ionian Sea. Long-term PAM and the continued development of automatic analysis techniques are essential to advancing the study of pelagic marine mammal distribution and behaviour patterns.
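
    A generic energy-based click detector, sketched below on synthetic data, illustrates the kind of automatic detection algorithm the abstract mentions; the band limits, threshold, and minimum inter-click spacing are placeholder values, not those used in the study.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert, find_peaks

      def detect_clicks(x, fs, band=(20_000, 48_000), thresh_sd=5.0, min_gap_s=0.002):
          lo, hi = band
          b, a = butter(4, [lo / (fs / 2), min(hi / (fs / 2), 0.99)], btype="band")
          filtered = filtfilt(b, a, x)                          # band-limit to click frequencies
          env = np.abs(hilbert(filtered))                       # instantaneous amplitude
          thresh = env.mean() + thresh_sd * env.std()           # adaptive threshold
          peaks, _ = find_peaks(env, height=thresh, distance=int(min_gap_s * fs))
          return peaks / fs                                     # click times in seconds

      # Toy example: three synthetic clicks buried in noise.
      fs = 96_000
      t = np.arange(int(0.5 * fs)) / fs
      x = 0.01 * np.random.default_rng(4).standard_normal(t.size)
      for t0 in (0.1, 0.2, 0.35):
          idx = np.abs(t - t0) < 0.0005
          x[idx] += np.sin(2 * np.pi * 35_000 * t[idx]) * np.hanning(idx.sum())
      print(detect_clicks(x, fs))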

  11. Seasonal and geographical patterns of fin whale song in the western North Atlantic Ocean.

    PubMed

    Morano, Janelle L; Salisbury, Daniel P; Rice, Aaron N; Conklin, Karah L; Falk, Keri L; Clark, Christopher W

    2012-08-01

    Male fin whales, Balaenoptera physalus, produce a song consisting of 20 Hz notes at regularly spaced time intervals. Previous studies identified regional differences in fin whale internote intervals (INI), but seasonal changes within populations have not been closely examined. To understand the patterns of fin whale song in the western North Atlantic, the seasonal abundance and acoustic features of fin whale song are measured from two years of archival passive acoustic recordings at two representative locations: Massachusetts Bay and New York Bight. Fin whale 20 Hz notes are detected on 99% of recorded days. In both regions, INI varies significantly throughout the year across two distinct periods: a "short-INI" season in September-January (9.6 s) and a "long-INI" season in March-May (15.1 s). February and June-August are transitional-INI months, with higher variability. Note abundance decreases with increasing INI, where note abundance is significantly lower in April-August than in September-January. Short-INI and high note abundance correspond to the fin whale reproductive season. The temporal variability of INI may be a mechanism by which fin whale individuals encode and communicate a variety of behaviorally relevant information.
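
    The inter-note interval (INI) itself is a simple quantity: the time between consecutive 20 Hz note detections within a song. The toy example below fabricates note times for a short-INI and a long-INI song and compares their medians.

      import numpy as np

      rng = np.random.default_rng(5)
      note_times_sep = np.cumsum(np.full(50, 9.6) + rng.normal(0, 0.2, 50))    # September-like song
      note_times_apr = np.cumsum(np.full(50, 15.1) + rng.normal(0, 0.3, 50))   # April-like song

      ini_sep = np.diff(note_times_sep)     # inter-note intervals (s)
      ini_apr = np.diff(note_times_apr)
      print(f"median INI Sep: {np.median(ini_sep):.1f} s, Apr: {np.median(ini_apr):.1f} s")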

  12. Analysis of humpback whale sounds in shallow waters of the Southeastern Arabian Sea: An indication of breeding habitat.

    PubMed

    Mahanty, Madan M; Latha, G; Thirunavukkarasu, A

    2015-06-01

    The primary objective of this work was to present the acoustical identification of humpback whales, detected by using an autonomous ambient noise measurement system, deployed in the shallow waters of the Southeastern Arabian Sea (SEAS) during the period January to May 2011. Seven types of sounds were detected. These were characteristically upsweeps and downsweeps along with harmonics. Sounds produced repeatedly in a specific pattern were referred to as phrases (PQRS and ABC). Repeated phrases in a particular pattern were referred to as themes, and from the spectrographic analysis, two themes (I and II) were identified. The variation in the acoustic characteristics such as fundamental frequency, range, duration of the sound unit, and the structure of the phrases and themes is discussed. Sound units were recorded from mid-January to mid-March, with a peak in February, when the mean SST is approximately 28 °C, and no presence was recorded after mid-March. The temporal and thematic structures strongly determine the functions of the humpback whale song form. Given the use of song in the SEAS, this area is possibly used as an active breeding habitat by humpback whales during the winter season.

  13. Temporal coherence of the acoustic field forward propagated through a continental shelf with random internal waves.

    PubMed

    Gong, Zheng; Chen, Tianrun; Ratilal, Purnima; Makris, Nicholas C

    2013-11-01

    An analytical model derived from normal mode theory for the accumulated effects of range-dependent multiple forward scattering is applied to estimate the temporal coherence of the acoustic field forward propagated through a continental-shelf waveguide containing random three-dimensional internal waves. The modeled coherence time scale of narrow-band, low-frequency acoustic field fluctuations after propagation through a continental-shelf waveguide is shown to decay with range as a power law with exponent -1/2 beyond roughly 1 km, to decrease with increasing internal wave energy, and to be consistent with measured acoustic coherence time scales. The model should provide a useful prediction of the acoustic coherence time scale as a function of internal wave energy in continental-shelf environments. The acoustic coherence time scale is an important parameter in remote sensing applications because it determines (i) the time window within which standard coherent processing such as matched filtering may be conducted, and (ii) the number of statistically independent fluctuations in a given measurement period that determines the variance reduction possible by stationary averaging.

  14. Acoustic and Seismic Fields of Hydraulic Jumps at Varying Froude Numbers

    NASA Astrophysics Data System (ADS)

    Ronan, Timothy J.; Lees, Jonathan M.; Mikesell, T. Dylan; Anderson, Jacob F.; Johnson, Jeffrey B.

    2017-10-01

    Mechanisms that produce seismic and acoustic wavefields near rivers are poorly understood because of a lack of observations relating temporally dependent river conditions to the near-river seismoacoustic fields. This controlled study at the Harry W. Morrison Dam (HWMD) on the Boise River, Idaho, explores how temporal variation in fluvial systems affects surrounding acoustic and seismic fields. Adjusting the configuration of the HWMD changed the river bathymetry and therefore the form of the standing wave below the dam. The HWMD was adjusted to generate four distinct wave regimes that were parameterized through their dimensionless Froude numbers (Fr) and observations of the ambient seismic and acoustic wavefields at the study site. To generate detectable and coherent signals, a standing wave must exceed a threshold Fr value of 1.7, where a nonbreaking undular jump turns into a breaking weak hydraulic jump. Hydrodynamic processes may partially control the spectral content of the seismic and acoustic energies. Furthermore, spectra related to reproducible wave conditions can be used to calibrate and verify fluvial seismic and acoustic models.
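
    The regime parameterization above uses the standard open-channel Froude number, Fr = U / sqrt(g h). The snippet below evaluates it for invented velocity and depth pairs and applies the 1.7 threshold reported in the abstract.

      import numpy as np

      g = 9.81  # gravitational acceleration, m s^-2

      def froude(velocity_ms, depth_m):
          return velocity_ms / np.sqrt(g * depth_m)

      # Hypothetical flow velocities (m/s) and depths (m), for illustration only.
      for u, h in [(1.2, 0.40), (2.5, 0.30), (3.5, 0.25)]:
          fr = froude(u, h)
          regime = "breaking (weak hydraulic jump)" if fr > 1.7 else "non-breaking (undular jump)"
          print(f"U={u} m/s, h={h} m -> Fr={fr:.2f}, {regime}")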

  15. Comparison of cosmology and seabed acoustics measurements using statistical inference from maximum entropy

    NASA Astrophysics Data System (ADS)

    Knobles, David; Stotts, Steven; Sagers, Jason

    2012-03-01

    Why can one obtain from similar measurements a greater amount of information about cosmological parameters than seabed parameters in ocean waveguides? The cosmological measurements are in the form of a power spectrum constructed from spatial correlations of temperature fluctuations within the microwave background radiation. The seabed acoustic measurements are in the form of spatial correlations along the length of a spatial aperture. This study explores the above question from the perspective of posterior probability distributions obtained from maximizing a relative entropy functional. Part of the answer is that the seabed in shallow ocean environments generally has large temporal and spatial inhomogeneities, whereas the early universe was a nearly homogeneous cosmological soup with small but important fluctuations. Acoustic propagation models used in shallow water acoustics generally do not capture spatial and temporal variability sufficiently well, which leads to model error dominating the statistical inference problem. This is not the case in cosmology. Further, the physics of the acoustic modes in cosmology is that of a standing wave with simple initial conditions, whereas for underwater acoustics it is a traveling wave in a strongly inhomogeneous bounded medium.

  16. Acoustic tweezers via sub-time-of-flight regime surface acoustic waves.

    PubMed

    Collins, David J; Devendran, Citsabehsan; Ma, Zhichao; Ng, Jia Wei; Neild, Adrian; Ai, Ye

    2016-07-01

    Micrometer-scale acoustic waves are highly useful for refined optomechanical and acoustofluidic manipulation, where these fields are spatially localized along the transducer aperture but not along the acoustic propagation direction. In the case of acoustic tweezers, such a conventional acoustic standing wave results in particle and cell patterning across the entire width of a microfluidic channel, preventing selective trapping. We demonstrate the use of nanosecond-scale pulsed surface acoustic waves (SAWs) with a pulse period that is less than the time of flight between opposing transducers to generate localized time-averaged patterning regions while using conventional electrode structures. These nodal positions can be readily and arbitrarily positioned in two dimensions and within the patterning region itself through the imposition of pulse delays, frequency modulation, and phase shifts. This straightforward concept adds new spatial dimensions to which acoustic fields can be localized in SAW applications in a manner analogous to optical tweezers, including spatially selective acoustic tweezers and optical waveguides.

  17. Short-Term Fidelity, Habitat Use and Vertical Movement Behavior of the Black Rockfish Sebastes schlegelii as Determined by Acoustic Telemetry

    PubMed Central

    Zhang, Yingqiu; Xu, Qiang; Alós, Josep; Liu, Hui; Xu, Qinzeng; Yang, Hongsheng

    2015-01-01

    The recent miniaturization of acoustic tracking devices has allowed fishery managers and scientists to collect spatial and temporal data for sustainable fishery management. The spatial and temporal dimensions of fish behavior (movement and/or vertical migrations) are particularly relevant for rockfishes (Sebastes spp.) because most rockfish species are long-lived and have high site fidelity, increasing their vulnerability to overexploitation. In this study, we describe the short-term (with a tracking period of up to 46 d) spatial behavior, as determined by acoustic tracking, of the black rockfish Sebastes schlegelii, a species subject to overexploitation in the Yellow Sea of China. The average residence index (the ratio of detected days to the total period from release to the last detection) in the study area was 0.92 ± 0.13, and most of the tagged fish were detected by only one region of the acoustic receiver array, suggesting relatively high site fidelity to the study area. Acoustic tracking also suggested that this species is more frequently detected during the day than at night in our study area. However, the diel detection periodicity (24 h) was only evident for certain periods of the tracking time, as revealed by a continuous wavelet transform. The habitat selection index of tagged S. schlegelii suggested that S. schlegelii preferred natural reefs, mixed sand/artificial reef bottoms and mixed bottoms of boulder, cobble, gravel and artificial reefs. The preference of this species for the artificial reefs that were recently deployed in the study area suggests that artificial seascapes may be effective management tools to attract individuals. The vertical movement of tagged S. schlegelii was mostly characterized by bottom dwelling behavior, and there was high individual variability in the vertical migration pattern. Our results have important implications for S. schlegelii catchability, the implementation of marine protected areas, and the identification of key species habitats, and our study provides novel information for future studies on the sustainability of this important marine resource in eastern China. PMID:26322604
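
    The residence index defined above (detected days divided by the number of days from release to last detection) can be computed directly from detection dates; the dates below are fabricated for illustration.

      import pandas as pd

      release = pd.Timestamp("2014-05-01")
      detection_days = pd.to_datetime([
          "2014-05-01", "2014-05-02", "2014-05-03", "2014-05-05",
          "2014-05-06", "2014-05-08", "2014-05-09", "2014-05-10",
      ])
      total_days = (detection_days.max() - release).days + 1     # release to last detection, inclusive
      residence_index = len(detection_days.unique()) / total_days
      print(f"residence index: {residence_index:.2f}")           # 8 detected days / 10 days -> 0.80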

  18. Distribution of an Acoustic Scattering Layer, Petermann Fjord, Northwest Greenland

    NASA Astrophysics Data System (ADS)

    Heffron, E.; Mayer, L. A.; Jakobsson, M.; Hogan, K.; Jerram, K.

    2017-12-01

    The Petermann 2015 Expedition was a comprehensive paleoceanographic and paleoclimatological study of the marine-terminating Petermann Glacier and its outlet system in Northwest Greenland carried out July-August 2015. The purpose was the reconstruction of glacial history and current glacial processes in Petermann Fjord to better understand the fate of the Petermann Glacier and its floating ice tongue that acts as a critical buttressing force to the outlet glacier draining about 4% of the Greenland Ice Sheet. Seafloor mapping was a critical component of the study and an EM122 multibeam sonar was utilized for this purpose; additionally, water column data were acquired with this sonar and an EK80 split-beam echosounder. During the expedition, the mapping team noted an acoustic scattering layer in the EK80 and EM122 water column data that changed depth in a spatially consistent manner. Initial onboard processing revealed what appears to be a strong spatial coherence in the layer distribution that corresponds to our understanding of the complex circulation pattern in the study area, including inflow of warmer Atlantic waters and outflow of subglacial waters. This initial processing was limited to observations at 46 discrete locations that corresponded to CTD stations, a very small subset of the 4800 line kilometers of data collected by each sonar. Both sonars were run 24 hours per day over the 30-day expedition, providing continuous time-varying acoustic coverage of the study area. Post-cruise, additional data have been processed to extract the acoustic returns from the scattering layer using a combination of commercial sonar processing software and specialized MATLAB and Python routines. 3-D surfaces have been generated from the extracted points in order to visualize the continuous spatial and temporal distribution of the scattering layer across the entire study area. Multiple crossings of the same location at different times of day address the question of the temporal stability of the scattering layer, while the detailed map of the spatial distribution demonstrates the relationship of the scattering layer to the water masses and implies that continuous acoustic coverage may be a powerful proxy for oceanography.

  19. Identification and Characteristics of Signature Whistles in Wild Bottlenose Dolphins (Tursiops truncatus) from Namibia

    PubMed Central

    Elwen, Simon Harvey; Nastasi, Aurora

    2014-01-01

    A signature whistle type is a learned, individually distinctive whistle type in a dolphin's acoustic repertoire that broadcasts the identity of the whistle owner. The acquisition and use of signature whistles indicates complex cognitive functioning that requires wider investigation in wild dolphin populations. Here we identify signature whistle types from a population of approximately 100 wild common bottlenose dolphins (Tursiops truncatus) inhabiting Walvis Bay, and describe signature whistle occurrence, acoustic parameters and temporal production. A catalogue of 43 repeatedly emitted whistle types (REWTs) was generated by analysing 79 hrs of acoustic recordings. From this, 28 signature whistle types were identified using a method based on the temporal patterns in whistle sequences. A visual classification task conducted by 5 naïve judges showed high levels of agreement in classification of whistles (Fleiss-Kappa statistic, κ = 0.848, Z = 55.3, P<0.001) and supported our categorisation. Signature whistle structure remained stable over time and location, with most types (82%) recorded in 2 or more years, and 4 identified at Walvis Bay and a second field site approximately 450 km away. Whistle acoustic parameters were consistent with those of signature whistles documented in Sarasota Bay (Florida, USA). We provide evidence of possible two-voice signature whistle production by a common bottlenose dolphin. Although signature whistle types have potential use as a marker for studying individual habitat use, we only identified approximately 28% of those from the Walvis Bay population, despite considerable recording effort. We found that signature whistle type diversity was higher in larger dolphin groups and groups with calves present. This is the first study describing signature whistles in a wild free-ranging T. truncatus population inhabiting African waters and it provides a baseline on which more in depth behavioural studies can be based. PMID:25203814
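
    Fleiss' kappa, the agreement statistic reported for the visual classification task, can be computed from a raters-by-subjects table of category assignments. The sketch below uses fabricated judgements and assumes the fleiss_kappa and aggregate_raters helpers in statsmodels' inter-rater module.

      import numpy as np
      from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

      # Fabricated example: 5 judges assigning each of 12 whistles to one of 4 types.
      rng = np.random.default_rng(7)
      true_type = rng.integers(0, 4, size=12)                       # "true" type per whistle
      judgements = np.column_stack([
          np.where(rng.random(12) < 0.9, true_type, rng.integers(0, 4, 12))   # judges mostly agree
          for _ in range(5)
      ])                                                            # shape (whistles, judges)

      counts, _ = aggregate_raters(judgements)                      # (whistles, categories) count table
      print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")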

  20. Huygens-Fresnel Acoustic Interference and the Development of Robust Time-Averaged Patterns from Traveling Surface Acoustic Waves

    NASA Astrophysics Data System (ADS)

    Devendran, Citsabehsan; Collins, David J.; Ai, Ye; Neild, Adrian

    2017-04-01

    Periodic pattern generation using time-averaged acoustic forces conventionally requires the intersection of counterpropagating wave fields, where suspended micro-objects in a microfluidic system collect along force potential minimizing nodal or antinodal lines. Whereas this effect typically requires either multiple transducer elements or whole channel resonance, we report the generation of scalable periodic patterning positions without either of these conditions. A single propagating surface acoustic wave interacts with the proximal channel wall to produce a knife-edge effect according to the Huygens-Fresnel principle, where these cylindrically propagating waves interfere with classical wave fronts emanating from the substrate. We simulate these conditions and describe a model that accurately predicts the lateral spacing of these positions in a robust and novel approach to acoustic patterning.

  1. The perception of syllable affiliation of singleton stops in repetitive speech.

    PubMed

    de Jong, Kenneth J; Lim, Byung-Jin; Nagao, Kyoko

    2004-01-01

    Stetson (1951) noted that repeating singleton coda consonants at fast speech rates causes them to be perceived as onset consonants affiliated with a following vowel. The current study documents the perception of rate-induced resyllabification, as well as the temporal properties that give rise to the perception of syllable affiliation. Stimuli were extracted from a previous study of repeated stop + vowel and vowel + stop syllables (de Jong, 2001a, 2001b). Forced-choice identification tasks show that slow repetitions are clearly distinguished. As speakers increase rate, they reach a point after which listeners disagree as to the affiliation of the stop. This pattern is found for voiced and voiceless consonants using different stimulus extraction techniques. Acoustic models of the identifications indicate that the sudden shift in syllabification occurs with the loss of an acoustic hiatus between successive syllables. Acoustic models of the fast rate identifications indicate that various other qualities, such as consonant voicing, affect the probability that the consonants will be perceived as onsets. These results indicate a model of syllabic affiliation where specific juncture-marking aspects of the signal dominate parsing, and in their absence other differences provide additional, weaker cues to syllabic affiliation.

  2. Indo-Pacific humpback dolphin occurrence north of Lantau Island, Hong Kong, based on year-round passive acoustic monitoring.

    PubMed

    Munger, Lisa; Lammers, Marc O; Cifuentes, Mattie; Würsig, Bernd; Jefferson, Thomas A; Hung, Samuel K

    2016-10-01

    Long-term passive acoustic monitoring (PAM) was conducted to study Indo-Pacific humpback dolphins, Sousa chinensis, as part of environmental impact assessments for several major coastal development projects in Hong Kong waters north of Lantau Island. Ecological acoustic recorders obtained 2711 days of recording at 13 sites from December 2012 to December 2014. Humpback dolphin sounds were manually detected on more than half of days with recordings at 12 sites, 8 of which were within proposed reclamation areas. Dolphin detection rates were greatest at Lung Kwu Chau, with other high-occurrence locations northeast of the Hong Kong International Airport and within the Lung Kwu Tan and Siu Ho Wan regions. Dolphin detection rates were greatest in summer and autumn (June-November) and were significantly reduced in spring (March-May) compared to other times of year. Click detection rates were significantly higher at night than during daylight hours. These findings suggest high use of many of the proposed reclamation/development areas by humpback dolphins, particularly at night, and demonstrate the value of long-term PAM for documenting spatial and temporal patterns in dolphin occurrence to help inform management decisions.

  3. Hysteresis of bedload transport during glacier-melting floods in a small Andean stream

    NASA Astrophysics Data System (ADS)

    Escauriaza, C. R.; Mao, L.; Carrillo, R.

    2015-12-01

    Quantifying bedload transport in mountain streams is of the highest importance for predicting morphodynamics and risks during flood events, and for planning river management practices. At the scale of a single flood event, the relationship between water discharge and bedload transport rate often reveals hysteretic loops. When sediment transport peaks before water discharge, the hysteresis is clockwise, and this has been related to unlimited sediment supply conditions such as loose sediments left by previous floods on the channel. On the contrary, counterclockwise hysteresis has also been observed and has mainly been related to limited sediment supply conditions, such as consolidated grains on the bed surface due to long low-flow periods. Understanding the direction and magnitude of hysteresis at the scale of a single flood event can thus reveal the sediment availability. Also, interpreting the temporal trend of hysteresis could be used to infer the dynamics of sediment sources. This work focuses on the temporal trend of the hysteresis pattern of bedload transport in a small (27 km²) glacierized catchment in the Andes of central Chile (Estero Morales) during the ablation season from October 2014 to March 2015. Bedload was measured indirectly using a Japanese acoustic pipe sensor, which detects the acoustic vibrations induced by particles hitting the device. A preliminary analysis of the collected data reveals that the hysteresis of single floods driven by snowmelt and glacier melt follows patterns according to the season. Clockwise hysteresis is typical in events occurring in late spring and early summer, while counterclockwise hysteresis appears mostly in the summer season. The hysteresis index tends to decrease from spring to late summer, indicating a progressive shift from clockwise to counterclockwise loops. This pattern suggests that sediment availability decreases over time, probably due to the progressive exhaustion of sediments stored in the channel bed. This research is being developed within the framework of Project FONDECYT 1130378.
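
    A simple, generic way to quantify hysteresis direction (not necessarily the index used in this work) is the signed area of the discharge-bedload loop: negative for clockwise loops (bedload peaking before discharge) and positive for counterclockwise loops. A sketch with synthetic flood data follows.

      import numpy as np

      def loop_area(q, qs):
          """Signed area of the (discharge, bedload) loop via the shoelace formula."""
          q, qs = np.asarray(q, float), np.asarray(qs, float)
          return 0.5 * np.sum(q * np.roll(qs, -1) - np.roll(q, -1) * qs)

      t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      discharge = 5 + 2 * np.sin(t)                    # synthetic flood hydrograph
      bedload_early = 1 + 0.8 * np.sin(t + 0.6)        # sediment peaks before discharge
      bedload_late = 1 + 0.8 * np.sin(t - 0.6)         # sediment peaks after discharge
      print("early sediment peak (clockwise), area:", loop_area(discharge, bedload_early))
      print("late sediment peak (counterclockwise), area:", loop_area(discharge, bedload_late))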

  4. Spatio-Temporal Evolution of Sound Speed Channels on the Chukchi Shelf

    NASA Astrophysics Data System (ADS)

    Eickmeier, J.; Badiey, M.; Wan, L.

    2017-12-01

    The physics of an acoustic waveguide is influenced by various boundary conditions as well as spatial and temporal fluctuations in the temperature and salinity profiles of the water column. The shallow-water Canadian Basin Acoustic Propagation Experiment (CANAPE) was designed to study the effect of oceanographic variability on the acoustic field. A pilot study was conducted in the summer of 2015, full deployment of acoustic and environmental moorings took place in 2016, and recovery will occur in late 2017. An example of strong oceanographic variability in the SW region is depicted in Figure 1. Over the course of 7 days, warm Bering Sea water arrived on the Chukchi Shelf and sank in the water column to between 25 m and 125 m depth. This warm water spread to a range of 10 km, and a potential eddy of warm water formed, causing an increase in sound speed between 15 km and 20 km range in Fig. 1(b). Due to the increased sound speed, a strong sound channel evolved between 100 m and 200 m for acoustic waves arriving from off-shelf, deep-water sources. In Fig. 1(a), the initial formation of the acoustic channel is only evident in 50 m to 100 m of water out to a range of 5 km. Recorded environmental data will be used to study fluctuations in sound speed channel formation on the Chukchi Shelf. Data collected in 2015 and 2016 have shown sound duct evolution over 7 days and over a one-month period. Analysis is projected to show sound channel formation over a new range of spatio-temporal scales. This analysis will show a cycle of sound channels opening and closing on the shelf, where this cycle strongly influences the propagation path, range and attenuation of acoustic waves.
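
    For orientation, the link between the warm intrusion and sound speed can be illustrated with a simplified empirical sound-speed formula (Medwin's approximation, with coefficients quoted from memory; exact analyses use more complete equations such as Chen-Millero). Temperature, salinity, and depth values below are invented.

      def sound_speed_medwin(T, S, z):
          """Approximate sound speed in seawater (m/s) from temperature T (degC),
          salinity S (ppt) and depth z (m), using Medwin's simplified formula."""
          return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
                  + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

      # A warm mid-depth intrusion raises sound speed relative to the colder water
      # above and below, reshaping the vertical profile that controls duct formation.
      profiles = {"cold surface": (0.5, 30.0, 10.0),
                  "warm intrusion": (4.0, 32.0, 75.0),
                  "cold deep": (-1.0, 33.0, 180.0)}
      for label, (T, S, z) in profiles.items():
          print(f"{label:>14}: c = {sound_speed_medwin(T, S, z):.1f} m/s")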

  5. Time series association learning

    DOEpatents

    Papcun, George J.

    1995-01-01

    An acoustic input is recognized from inferred articulatory movements output by a learned relationship between training acoustic waveforms and articulatory movements. The inferred movements are compared with template patterns prepared from training movements when the relationship was learned to regenerate an acoustic recognition. In a preferred embodiment, the acoustic articulatory relationships are learned by a neural network. Subsequent input acoustic patterns then generate the inferred articulatory movements for use with the templates. Articulatory movement data may be supplemented with characteristic acoustic information, e.g. relative power and high frequency data, to improve template recognition.
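
    The pipeline the patent abstract describes, learning an acoustic-to-articulatory mapping and then matching inferred movements against stored templates, can be sketched with fabricated data as follows; feature choices, dimensions, and the use of scikit-learn's MLPRegressor are illustrative assumptions only.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(8)
      n_frames, acoustic_dim, artic_dim = 500, 12, 4

      # Fabricated training data: acoustic feature frames and paired articulatory vectors.
      true_map = rng.standard_normal((acoustic_dim, artic_dim))
      acoustic_train = rng.standard_normal((n_frames, acoustic_dim))
      artic_train = np.tanh(acoustic_train @ true_map)

      # Step 1: learn the acoustic -> articulatory relationship with a neural network.
      net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
      net.fit(acoustic_train, artic_train)

      # Step 2: recognize new input by comparing inferred movements to stored templates.
      templates = {w: rng.standard_normal(artic_dim) for w in ("yes", "no")}   # hypothetical words

      def recognize(acoustic_frames):
          inferred = net.predict(acoustic_frames).mean(axis=0)    # inferred articulatory pattern
          return min(templates, key=lambda w: np.linalg.norm(inferred - templates[w]))

      print(recognize(rng.standard_normal((20, acoustic_dim))))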

  6. Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing

    PubMed Central

    Doelling, Keith; Arnal, Luc; Ghitza, Oded; Poeppel, David

    2013-01-01

    A growing body of research suggests that intrinsic slow (<10 Hz) neuronal oscillations in auditory cortex track incoming speech and other spectro-temporally complex auditory signals. Within this framework, several recent studies have identified critical-band temporal envelopes as the specific acoustic feature reflected by the phase of these oscillations. However, how this alignment between speech acoustics and neural oscillations might underpin intelligibility is unclear. Here we test the hypothesis that the ‘sharpness’ of temporal fluctuations in the critical-band envelope acts as a temporal cue to speech syllabic rate, driving delta-theta rhythms to track the stimulus and facilitate intelligibility. Using magnetoencephalographic recordings, we show that removing temporal fluctuations that occur at the syllabic rate reduces envelope-tracking activity, and that artificially reinstating these fluctuations restores it. These changes in tracking correlate with the intelligibility of the stimulus. We interpret these findings as evidence that sharp events in the stimulus cause cortical rhythms to re-align and parse the stimulus into syllable-sized chunks for further decoding. Together, the results suggest that the sharpness of fluctuations in the stimulus, as reflected in the cochlear output, drives oscillatory activity to track and entrain to the stimulus at its syllabic rate. This process likely facilitates parsing of the stimulus into meaningful chunks appropriate for subsequent decoding, enhancing perception and intelligibility. PMID:23791839
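
    The envelope manipulation the abstract motivates can be sketched on a synthetic stimulus: extract a broadband amplitude envelope with the Hilbert transform, isolate the slow (syllabic-rate) fluctuations, and take a crude sharpness measure from the envelope slope. Filter settings and the sharpness definition below are illustrative assumptions.

      import numpy as np
      from scipy.signal import hilbert, butter, filtfilt

      fs = 16_000
      t = np.arange(2 * fs) / fs
      carrier = np.random.default_rng(9).standard_normal(t.size)          # noise carrier
      syllabic = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))                     # 4 Hz "syllable" rhythm
      speech_like = carrier * syllabic

      envelope = np.abs(hilbert(speech_like))                              # broadband amplitude envelope
      b, a = butter(2, 8 / (fs / 2), btype="low")
      slow_env = filtfilt(b, a, envelope)           # envelope band containing syllabic-rate energy
      sharpness = np.max(np.diff(slow_env)) * fs    # crude "sharpness": peak envelope slope (1/s)
      print(f"peak envelope slope: {sharpness:.2f} per second")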

  7. Spatio-temporal segregation of calling behavior at a multispecies fish spawning site in Little Cayman

    NASA Astrophysics Data System (ADS)

    Cameron, K. C.; Sirovic, A.; Jaffe, J. S.; Semmens, B.; Pattengill-Semmens, C.; Gibb, J.

    2016-02-01

    Fish spawning aggregation (FSA) sites are extremely vulnerable to over-exploitation. Accurate understanding of the spatial and temporal use of such sites is necessary for effective species management. The size of FSAs can be on the order of kilometers and peak spawning often occurs at night, posing challenges to visual observation. Passive acoustics are an alternative method for dealing with these challenges. An array of passive acoustic recorders and GoPro cameras were deployed during Nassau grouper (Epinephelus striatus) spawning from February 7th to 12th, 2015 at a multispecies spawning aggregation site in Little Cayman, Cayman Islands. In addition to Nassau grouper, at least 10 other species are known to spawn at this location including tiger grouper (Mycteroperca tigris), red hind (Epinephelus guttatus), black grouper (Mycteroperca bonaci), and yellowfin grouper (Mycteroperca venenosa). During 5 days of continuous recordings, over 21,000 fish calls were detected. These calls were classified into 15 common types. Species identification and behavioral context of unknown common call types were determined by coupling video recordings collected during this time with call localizations. There are distinct temporal patterns in call production of different species. For example, red hind and yellowfin grouper call predominately at night with yellowfin call rates increasing after midnight, and black grouper call primarily during dusk and dawn. In addition, localization methods were used to reveal how the FSA area was divided among species. These findings facilitate a better understanding of the behavior of these important reef fish species allowing policymakers to more effectively manage and protect them.

  8. Two-dimensional single-cell patterning with one cell per well driven by surface acoustic waves

    PubMed Central

    Collins, David J.; Morahan, Belinda; Garcia-Bustos, Jose; Doerig, Christian; Plebanski, Magdalena; Neild, Adrian

    2015-01-01

    In single-cell analysis, cellular activity and parameters are assayed on an individual, rather than population-average basis. Essential to observing the activity of these cells over time is the ability to trap, pattern and retain them, for which previous single-cell-patterning work has principally made use of mechanical methods. While successful as a long-term cell-patterning strategy, these devices remain essentially single use. Here we introduce a new method for the patterning of multiple spatially separated single particles and cells using high-frequency acoustic fields with one cell per acoustic well. We characterize and demonstrate patterning for both a range of particle sizes and the capture and patterning of cells, including human lymphocytes and red blood cells infected by the malarial parasite Plasmodium falciparum. This ability is made possible by a hitherto unexplored regime where the acoustic wavelength is on the same order as the cell dimensions. PMID:26522429

  9. Noise sensitivity and loudness derivative index for urban road traffic noise annoyance computation.

    PubMed

    Gille, Laure-Anne; Marquis-Favre, Catherine; Weber, Reinhard

    2016-12-01

    Urban road traffic composed of powered-two-wheelers (PTWs), buses, heavy, and light vehicles is a major source of noise annoyance. In order to assess annoyance models considering different acoustical and non-acoustical factors, a laboratory experiment on short-term annoyance due to urban road traffic noise was conducted. At the end of the experiment, participants were asked to rate their noise sensitivity and to describe the noise sequences they heard. This verbalization task highlights that annoyance ratings are highly influenced by the presence of PTWs and by different acoustical features: noise intensity, irregular temporal amplitude variation, regular amplitude modulation, and spectral content. These features, except irregular temporal amplitude variation, are satisfactorily characterized by the loudness, the total energy of tonal components and the sputtering and nasal indices. Introduction of the temporal derivative of loudness allows successful modeling of perceived amplitude variations. Its contribution to the tested annoyance models is high and seems to be higher than the contribution of mean loudness index. A multilevel regression is performed to assess annoyance models using selected acoustical indices and noise sensitivity. Three models are found to be promising for further studies that aim to enhance current annoyance models.
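
    The two ingredients described above, a loudness-derivative index and a multilevel (mixed-effects) annoyance regression, are sketched below on fabricated data; the index definition, variable names, and random-effects structure are assumptions rather than the paper's exact specification.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(10)

      def loudness_derivative_index(loudness_series, fs=2.0):
          """Mean absolute rate of change of a loudness time series (sone/s), sampled at fs Hz."""
          return np.mean(np.abs(np.diff(loudness_series))) * fs

      rows = []
      for participant in range(20):
          bias = rng.normal(0, 0.5)                                 # participant-level offset
          for seq in range(10):
              loudness = 20 + rng.normal(0, 3, 60).cumsum() * 0.05  # toy loudness trace
              n_mean = loudness.mean()
              n_deriv = loudness_derivative_index(loudness)
              annoy = 2 + 0.15 * n_mean + 1.5 * n_deriv + bias + rng.normal(0, 0.5)
              rows.append({"participant": participant, "annoyance": annoy,
                           "mean_loudness": n_mean, "dloudness": n_deriv})
      df = pd.DataFrame(rows)

      # Multilevel model: acoustic indices as fixed effects, random intercept per participant.
      model = smf.mixedlm("annoyance ~ mean_loudness + dloudness", df,
                          groups=df["participant"]).fit()
      print(model.params)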

  10. Spatial hearing benefits demonstrated with presentation of acoustic temporal fine structure cues in bilateral cochlear implant listeners.

    PubMed

    Churchill, Tyler H; Kan, Alan; Goupell, Matthew J; Litovsky, Ruth Y

    2014-09-01

    Most contemporary cochlear implant (CI) processing strategies discard acoustic temporal fine structure (TFS) information, and this may contribute to the observed deficits in bilateral CI listeners' ability to localize sounds when compared to normal hearing listeners. Additionally, for best speech envelope representation, most contemporary speech processing strategies use high-rate carriers (≥900 Hz) that exceed the limit for interaural pulse timing to provide useful binaural information. Many bilateral CI listeners are sensitive to interaural time differences (ITDs) in low-rate (<300 Hz) constant-amplitude pulse trains. This study explored the trade-off between superior speech temporal envelope representation with high-rate carriers and binaural pulse timing sensitivity with low-rate carriers. The effects of carrier pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition in quiet were examined in eight bilateral CI listeners. Stimuli consisted of speech tokens processed at different electrical stimulation rates, and pulse timings that either preserved or did not preserve acoustic TFS cues. Results showed that CI listeners were able to use low-rate pulse timing cues derived from acoustic TFS when presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli.

  11. Temporal and spatial patterns of habitat use by juveniles of a small coastal shark (Mustelus lenticulatus) in an estuarine nursery.

    PubMed

    Francis, Malcolm P

    2013-01-01

    Juvenile rig (Mustelus lenticulatus) were internally tagged with acoustic transmitters and tracked with acoustic receivers deployed throughout two arms of Porirua Harbour, a small (7 km²) estuary in New Zealand. Ten rig were tracked for up to four months during summer-autumn to determine their spatial and temporal use of the habitat. The overall goal was to estimate the size of Marine Protected Areas required to protect rig nursery areas from direct human impacts. Rig showed clear site preferences, but those preferences varied among rig and over time. They spent most of their time in large basins and on shallow sand and mud flats around the margins, and avoided deep channels. Habitat range increased during autumn for many of the rig. Only one shark spent time in both harbour arms, indicating that there was little movement between the two. Rig home ranges were 2-7 km², suggesting that an effective MPA would need to cover the entire Porirua Harbour. They moved to outer harbour sites following some high river flow rates, and most left the harbour permanently during or soon after a river spike, suggesting that they were avoiding low salinity water. Rig showed strong diel movements during summer, although the diel pattern weakened in autumn. Persistent use of the same day and night sites indicates that diel movements are directed rather than random. Further research is required to determine the sizes of rig home ranges in larger harbours where nursery habitat is more extensive. Marine Protected Areas do not control land-based impacts such as accelerated sedimentation and heavy metal pollution, so integration of marine and terrestrial management tools across a range of government agencies is essential to fully protect nursery areas.

  12. Temporal and Spatial Patterns of Habitat Use by Juveniles of a Small Coastal Shark (Mustelus lenticulatus) in an Estuarine Nursery

    PubMed Central

    Francis, Malcolm P.

    2013-01-01

    Juvenile rig (Mustelus lenticulatus) were internally tagged with acoustic transmitters and tracked with acoustic receivers deployed throughout two arms of Porirua Harbour, a small (7 km2) estuary in New Zealand. Ten rig were tracked for up to four months during summer–autumn to determine their spatial and temporal use of the habitat. The overall goal was to estimate the size of Marine Protected Areas required to protect rig nursery areas from direct human impacts. Rig showed clear site preferences, but those preferences varied among rig and over time. They spent most of their time in large basins and on shallow sand and mud flats around the margins, and avoided deep channels. Habitat range increased during autumn for many of the rig. Only one shark spent time in both harbour arms, indicating that there was little movement between the two. Rig home ranges were 2–7 km2, suggesting that an effective MPA would need to cover the entire Porirua Harbour. They moved to outer harbour sites following some high river flow rates, and most left the harbour permanently during or soon after a river spike, suggesting that they were avoiding low salinity water. Rig showed strong diel movements during summer, although the diel pattern weakened in autumn. Persistent use of the same day and night sites indicates that diel movements are directed rather than random. Further research is required to determine the sizes of rig home ranges in larger harbours where nursery habitat is more extensive. Marine Protected Areas do not control land-based impacts such as accelerated sedimentation and heavy metal pollution, so integration of marine and terrestrial management tools across a range of government agencies is essential to fully protect nursery areas. PMID:23437298

  13. Localized sources of propagating acoustic waves in the solar photosphere

    NASA Technical Reports Server (NTRS)

    Brown, Timothy M.; Bogdan, Thomas J.; Lites, Bruce W.; Thomas, John H.

    1992-01-01

    A time series of Doppler measurements of the solar photosphere with moderate spatial resolution is described which covers a portion of the solar disk surrounding a small sunspot group. At temporal frequencies above 5.5 mHz, the Doppler field probes the spatial and temporal distribution of regions that emit acoustic energy. In the frequency range between 5.5 and 7.5 mHz, inclusive, a small fraction of the surface area emits a disproportionate amount of acoustic energy. The regions with excess emission are characterized by a patchy structure at spatial scales of a few arcseconds and by association (but not exact co-location) with regions having substantial magnetic field strength. These observations bear on the conjecture that most of the acoustic energy driving solar p-modes is created in localized regions occupying a small fraction of the solar surface area.

  14. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by hundreds of times. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials in compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows multiple speakers to be localized in both stationary and dynamic auditory scenes, and mixed conversations from independent sources to be distinguished with a high audio recognition rate.
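
    The dissertation abstract above centers on compressive measurement and reconstruction. A generic, minimal sketch of that idea (unrelated to the specific optical or acoustic hardware described) is a random linear measurement of a sparse signal followed by sparse recovery; scikit-learn's orthogonal matching pursuit is used here purely for illustration.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 5                    # signal length, measurements, sparsity

        x = np.zeros(n)                         # k-sparse signal
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

        Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
        y = Phi @ x                                      # compressive measurements (m << n)

        # Sparse recovery from the undersampled measurements
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        omp.fit(Phi, y)
        x_hat = omp.coef_

        print("relative reconstruction error:",
              np.linalg.norm(x_hat - x) / np.linalg.norm(x))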

  15. Material fabrication using acoustic radiation forces

    DOEpatents

    Sinha, Naveen N.; Sinha, Dipen N.; Goddard, Gregory Russ

    2015-12-01

    Apparatus and methods for using acoustic radiation forces to order particles suspended in a host liquid are described. The particles may range in size from nanometers to millimeters, and may have any shape. The suspension is placed in an acoustic resonator cavity, and acoustical energy is supplied thereto using acoustic transducers. The resulting pattern may be fixed by using a solidifiable host liquid, thereby forming a solid material. Patterns may be generated quickly, with typical times ranging from a few seconds to a few minutes. In a one-dimensional arrangement, parallel layers of particles are formed. With two- and three-dimensional transducer arrangements, more complex particle configurations are possible since different standing-wave patterns may be generated in the resonator. Fabrication of periodic structures, such as metamaterials, having periods tunable by varying the frequency of the acoustic waves, on surfaces or in bulk volume using acoustic radiation forces, provides great flexibility in the creation of new materials. Periodicities may range from millimeters to sub-micron distances, covering a large portion of the range for optical and acoustical metamaterials.

  16. The Curious Acoustic Behavior of Estuarine Snapping Shrimp: Temporal Patterns of Snapping Shrimp Sound in Sub-Tidal Oyster Reef Habitat.

    PubMed

    Bohnenstiehl, DelWayne R; Lillis, Ashlee; Eggleston, David B

    2016-01-01

    Ocean soundscapes convey important sensory information to marine life. Like many mid-to-low latitude coastal areas worldwide, the high-frequency (>1.5 kHz) soundscape of oyster reef habitat within the West Bay Marine Reserve (36°N, 76°W) is dominated by the impulsive, short-duration signals generated by snapping shrimp. Between June 2011 and July 2012, a single hydrophone deployed within West Bay was programmed to record 60 or 30 seconds of acoustic data every 15 or 30 minutes. Envelope correlation and amplitude information were then used to count shrimp snaps within these recordings. The observed snap rates vary from 1500-2000 snaps per minute during summer to <100 snaps per minute during winter. Sound pressure levels are positively correlated with snap rate (r = 0.71-0.92) and vary seasonally by ~15 decibels in the 1.5-20 kHz range. Snap rates are positively correlated with water temperatures (r = 0.81-0.93), as well as potentially influenced by climate-driven changes in water quality. Light availability modulates snap rate on diurnal time scales, with most days exhibiting a significant preference for either nighttime or daytime snapping, and many showing additional crepuscular increases. During mid-summer, the number of snaps occurring at night is 5-10% more than predicted by a random model; however, this pattern is reversed between August and April, with an excess of up to 25% more snaps recorded during the day in the mid-winter. Diurnal variability in sound pressure levels is largest in the mid-winter, when the overall rate of snapping is at its lowest, and the percentage difference between daytime and nighttime activity is at its highest. This work highlights our lack of knowledge regarding the ecology and acoustic behavior of one of the most dominant soniferous invertebrate species in coastal systems. It also underscores the necessity of long-duration, high-temporal-resolution sampling in efforts to understand the bioacoustics of animal behaviors and associated changes within the marine soundscape.
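
    The snap counts above were derived from envelope correlation and amplitude information. The fragment below is a simplified, amplitude-only sketch of such a detector (band-pass filter, analytic envelope, threshold above the median level), not the published algorithm; the band, threshold, and minimum-separation parameters are illustrative assumptions.

        import numpy as np
        from scipy.signal import butter, sosfilt, hilbert

        def count_snaps(x, fs, band=(1500, 20000), thresh_db=12, min_sep_s=0.002):
            # Count impulsive snaps in a hydrophone recording (simplified sketch).
            # fs must exceed twice the upper band edge for the band-pass design below.
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            env = np.abs(hilbert(sosfilt(sos, x)))          # analytic envelope of the filtered signal
            thresh = np.median(env) * 10 ** (thresh_db / 20)
            above = env > thresh
            onsets = np.flatnonzero(above[1:] & ~above[:-1])  # rising edges of threshold crossings
            min_sep = int(min_sep_s * fs)
            snaps, last = 0, -min_sep
            for i in onsets:                                  # enforce a minimum separation between snaps
                if i - last >= min_sep:
                    snaps += 1
                    last = i
            return snaps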

  17. The Curious Acoustic Behavior of Estuarine Snapping Shrimp: Temporal Patterns of Snapping Shrimp Sound in Sub-Tidal Oyster Reef Habitat

    PubMed Central

    Bohnenstiehl, DelWayne R.; Lillis, Ashlee; Eggleston, David B.

    2016-01-01

    Ocean soundscapes convey important sensory information to marine life. Like many mid-to-low latitude coastal areas worldwide, the high-frequency (>1.5 kHz) soundscape of oyster reef habitat within the West Bay Marine Reserve (36°N, 76°W) is dominated by the impulsive, short-duration signals generated by snapping shrimp. Between June 2011 and July 2012, a single hydrophone deployed within West Bay was programmed to record 60 or 30 seconds of acoustic data every 15 or 30 minutes. Envelope correlation and amplitude information were then used to count shrimp snaps within these recordings. The observed snap rates vary from 1500–2000 snaps per minute during summer to <100 snaps per minute during winter. Sound pressure levels are positively correlated with snap rate (r = 0.71–0.92) and vary seasonally by ~15 decibels in the 1.5–20 kHz range. Snap rates are positively correlated with water temperatures (r = 0.81–0.93), as well as potentially influenced by climate-driven changes in water quality. Light availability modulates snap rate on diurnal time scales, with most days exhibiting a significant preference for either nighttime or daytime snapping, and many showing additional crepuscular increases. During mid-summer, the number of snaps occurring at night is 5–10% more than predicted by a random model; however, this pattern is reversed between August and April, with an excess of up to 25% more snaps recorded during the day in the mid-winter. Diurnal variability in sound pressure levels is largest in the mid-winter, when the overall rate of snapping is at its lowest, and the percentage difference between daytime and nighttime activity is at its highest. This work highlights our lack of knowledge regarding the ecology and acoustic behavior of one of the most dominant soniferous invertebrate species in coastal systems. It also underscores the necessity of long-duration, high-temporal-resolution sampling in efforts to understand the bioacoustics of animal behaviors and associated changes within the marine soundscape. PMID:26761645

  18. Fatigue level estimation of monetary bills based on frequency band acoustic signals with feature selection by supervised SOM

    NASA Astrophysics Data System (ADS)

    Teranishi, Masaru; Omatu, Sigeru; Kosaka, Toshihisa

    Fatigued monetary bills adversely affect the daily operation of automated teller machines (ATMs). In order to make the classification of fatigued bills more efficient, the development of an automatic fatigued monetary bill classification method is desirable. We propose a new method by which to estimate the fatigue level of monetary bills from the feature-selected frequency band acoustic energy pattern of banking machines. By using a supervised self-organizing map (SOM), we effectively estimate the fatigue level using only the feature-selected frequency band acoustic energy pattern. Furthermore, the feature-selected frequency band acoustic energy pattern improves the estimation accuracy of the fatigue level of monetary bills by adding frequency domain information to the acoustic energy pattern. The experimental results with real monetary bill samples reveal the effectiveness of the proposed method.
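
    The method above classifies bills from a feature-selected frequency-band acoustic energy pattern using a supervised SOM. The sketch below illustrates only the general idea: band-energy features followed by a simple nearest-prototype classifier standing in for the supervised SOM. All names, the feature definition, and the classifier are hypothetical stand-ins, not the authors' implementation.

        import numpy as np

        def band_energy_features(spectrum, band_edges, selected):
            # Feature-selected frequency-band acoustic energy pattern (illustrative).
            # spectrum   : magnitude spectrum of the acoustic signal
            # band_edges : list of (lo, hi) bin indices defining frequency bands
            # selected   : indices of the bands kept after feature selection
            energies = np.array([np.sum(spectrum[lo:hi] ** 2) for lo, hi in band_edges])
            return energies[selected] / energies.sum()       # normalized energy pattern

        class NearestPrototype:
            # Simple stand-in for the supervised SOM: one prototype per fatigue level.
            def fit(self, X, y):
                self.levels = np.unique(y)
                self.protos = np.array([X[y == lv].mean(axis=0) for lv in self.levels])
                return self
            def predict(self, X):
                d = np.linalg.norm(X[:, None, :] - self.protos[None, :, :], axis=2)
                return self.levels[d.argmin(axis=1)]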

  19. Acoustic holograms of active regions

    NASA Astrophysics Data System (ADS)

    Chou, Dean-Yi

    2008-10-01

    We propose a method to study solar magnetic regions in the solar interior with the principle of optical holography. A magnetic region in the solar interior scatters the solar background acoustic waves. The scattered waves and background waves could form an interference pattern on the solar surface. We investigate the feasibility of detecting this interference pattern on the solar surface, and using it to construct the three-dimensional scattered wave from the magnetic region with the principle of optical holography. In solar acoustic holography, the background acoustic waves play the role of the reference wave; the magnetic region plays the role of the target object; and the interference pattern (the acoustic power map) on the solar surface plays the role of the hologram.

  20. Neural correlates of hemispheric dominance and ipsilaterality within the vestibular system.

    PubMed

    Janzen, J; Schlindwein, P; Bense, S; Bauermann, T; Vucurevic, G; Stoeter, P; Dieterich, M

    2008-10-01

    Earlier functional imaging studies on the processing of vestibular information mainly focused on cortical activations due to stimulation of the horizontal semicircular canals in right-handers. Two factors were found to determine its processing in the temporo-parietal cortex: a dominance of the non-dominant hemisphere and an ipsilaterality of the neural pathways. In an investigation of the role of these factors in the vestibular otoliths, we used vestibular evoked myogenic potentials (VEMPs) in an fMRI study of monaural saccular-otolith stimulation. Our aim was to (1) analyze the hemispheric dominance for saccular-otolith information in healthy left-handers, (2) determine if there is a predominance of the ipsilateral saccular-otolith projection, and (3) evaluate the impact of both factors on the temporo-parieto-insular activation pattern. A block design with three stimulation and rest conditions was applied: (1) 102 dB VEMP stimulation, (2) 65 dB control acoustic stimulation, and (3) 102 dB white-noise control stimulation. After subtraction of acoustic side effects, bilateral activations were found in the posterior insula, the superior/middle/transverse temporal gyri, and the inferior parietal lobule. The distribution of the saccular-otolith activations was influenced by the two factors but with topographic disparity: whereas the inferior parts of the temporo-parietal cortex were mainly influenced by the ipsilaterality of the pathways, the upper parts reflected the dominance of the non-dominant hemisphere. This is in contrast to the processing of acoustic stimulation, which showed a predominance of the contralateral pathways. Our study demonstrates the importance of hemispheric preponderance in left-handers as well, which is of relevance in the superior parts of the insula gyrus V, the inferior parietal lobule, and the superior temporal gyri.

  1. Use of large-scale acoustic monitoring to assess anthropogenic pressures on Orthoptera communities.

    PubMed

    Penone, Caterina; Le Viol, Isabelle; Pellissier, Vincent; Julien, Jean-François; Bas, Yves; Kerbiriou, Christian

    2013-10-01

    Biodiversity monitoring at large spatial and temporal scales is greatly needed in the context of global changes. Although insects are a species-rich group and are important for ecosystem functioning, they have been largely neglected in conservation studies and policies, mainly due to technical and methodological constraints. Sound detection, a nondestructive method, is easily applied within a citizen-science framework and could be an interesting solution for insect monitoring. However, it has not yet been tested at a large scale. We assessed the value of a citizen-science program in which Orthoptera species (Tettigoniidae) were monitored acoustically along roads. We used Bayesian model-averaging analyses to test whether we could detect widely known patterns of anthropogenic effects on insects, such as the negative effects of urbanization or intensive agriculture on Orthoptera populations and communities. We also examined site-abundance correlations between years and estimated the biases in species detection to evaluate and improve the protocol. Urbanization and intensive agricultural landscapes negatively affected Orthoptera species richness, diversity, and abundance. This finding is consistent with results of previous studies of Orthoptera, vertebrates, carabids, and butterflies. The average mass of communities decreased as urbanization increased. The dispersal ability of communities increased as the percentage of agricultural land and, to a lesser extent, urban area increased. Despite changes in abundances over time, we found significant correlations between yearly abundances. We identified biases linked to the protocol (e.g., car speed or temperature) that can easily be accounted for in analyses. We argue that acoustic monitoring of Orthoptera along roads offers several advantages for assessing Orthoptera biodiversity at large spatial and temporal extents, particularly in a citizen science framework. © 2013 Society for Conservation Biology.

  2. Physical modeling of the formation and evolution of seismically active fault zones

    USGS Publications Warehouse

    Ponomarev, A.V.; Zavyalov, A.D.; Smirnov, V.B.; Lockner, D.A.

    1997-01-01

    Acoustic emission (AE) in rocks is studied as a model of natural seismicity. A special technique for rock loading has been used to help study the processes that control the development of AE during brittle deformation. This technique slows fault growth, which would normally occur very rapidly, so that it extends over hours. In this way, the period of most intense interaction of acoustic events can be studied in detail. Characteristics of the acoustic regime (AR) include the Gutenberg-Richter b-value, spatial distribution of hypocenters with characteristic fractal (correlation) dimension d, Hurst exponent H, and crack concentration parameter Pc. The fractal structure of AR changes with the onset of the drop in differential stress during sample deformation. The change results from the active interaction of microcracks. This transition of the spatial distribution of AE hypocenters is accompanied by a corresponding change in the temporal correlation of events and in the distribution of event amplitudes as signified by a decrease of b-value. The characteristic structure that develops in the low-energy background AE is similar to the sequence of the strongest microfracture events. When the AR fractal structure develops, the variations of d and b are synchronous and d = 3b. This relation, which occurs once the fractal structure is formed, only holds for average values of d and b. Time variations of d and b are anticorrelated. The degree of temporal correlation of AR has time variations that are similar to d and b variations. The observed variations in laboratory AE experiments are compared with natural seismicity parameters. The close correspondence between laboratory-scale observations and naturally occurring seismicity suggests a possible new approach for understanding the evolution of complex seismicity patterns in nature. © 1997 Elsevier Science B.V. All rights reserved.
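
    The record above relates the Gutenberg-Richter b-value to the fractal (correlation) dimension d of hypocenters, with d = 3b once the fractal structure has formed. The sketch below shows the two standard estimators commonly used for such quantities (Aki's maximum-likelihood b-value and the Grassberger-Procaccia correlation sum); it is not the authors' code, and the radii must be chosen within the scaling range so that C(r) > 0.

        import numpy as np

        def b_value(magnitudes, m_min):
            # Maximum-likelihood Gutenberg-Richter b-value (Aki's estimator).
            m = np.asarray(magnitudes)
            m = m[m >= m_min]
            return np.log10(np.e) / (m.mean() - m_min)

        def correlation_dimension(points, radii):
            # Grassberger-Procaccia correlation sum C(r); d is the slope of
            # log C(r) versus log r over the scaling range.
            pts = np.asarray(points)                        # (n_events, 3) hypocenter coordinates
            diff = pts[:, None, :] - pts[None, :, :]
            dist = np.sqrt((diff ** 2).sum(-1))
            iu = np.triu_indices(len(pts), k=1)             # unique event pairs
            d_pairs = dist[iu]
            c = np.array([(d_pairs < r).mean() for r in radii])
            slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
            return slope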

  3. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system.

    PubMed

    Dicke, Ulrike; Ewert, Stephan D; Dau, Torsten; Kollmeier, Birger

    2007-01-01

    Periodic amplitude modulations (AMs) of an acoustic stimulus are presumed to be encoded in temporal activity patterns of neurons in the cochlear nucleus. Physiological recordings indicate that this temporal AM code is transformed into a rate-based periodicity code along the ascending auditory pathway. The present study suggests a neural circuit for the transformation from the temporal to the rate-based code. Due to the neural connectivity of the circuit, bandpass-shaped rate modulation transfer functions are obtained that correspond to recorded functions of inferior colliculus (IC) neurons. In contrast to previous modeling studies, the present circuit does not employ a continuously changing temporal parameter to obtain different best modulation frequencies (BMFs) of the IC bandpass units. Instead, different BMFs are obtained by varying the number of input units projecting onto different bandpass units. In order to investigate the compatibility of the neural circuit with a linear modulation filterbank analysis as proposed in psychophysical studies, complex stimuli such as tones modulated by the sum of two sinusoids, narrowband noise, and iterated rippled noise were processed by the model. The model accounts for the encoding of AM depth over a large dynamic range and for modulation-frequency-selective processing of complex sounds.

  4. Temporal variability in sung productions of adolescents who stutter.

    PubMed

    Falk, Simone; Maslow, Elena; Thum, Georg; Hoole, Philip

    2016-01-01

    Singing has long been used as a technique to enhance and reeducate temporal aspects of articulation in speech disorders. In the present study, differences in temporal structure of sung versus spoken speech were investigated in stuttering. In particular, it was examined whether singing helps to reduce VOT variability of voiceless plosives, which would indicate enhanced temporal coordination of oral and laryngeal processes. Eight German adolescents who stutter and eight typically fluent peers repeatedly spoke and sang a simple German congratulation formula in which a disyllabic target word (e.g., /'ki:ta/) was repeated five times. In every trial, the first syllable of the word was varied, starting equally often with one of the three voiceless German stops /p/, /t/, /k/. Acoustic analyses showed that mean VOT and stop gap duration decreased during singing compared to speaking, while mean vowel and utterance duration was prolonged in singing in both groups. Importantly, adolescents who stutter significantly reduced VOT variability (measured as the Coefficient of Variation) during sung productions compared to speaking in word-initial stressed positions, while the control group showed a slight increase in VOT variability. However, in unstressed syllables, VOT variability increased in both adolescents who do and do not stutter from speech to song. In addition, vowel and utterance durational variability decreased in both groups, yet adolescents who stutter were still more variable in utterance duration independent of the form of vocalization. These findings shed new light on how singing alters temporal structure and, in particular, the coordination of laryngeal-oral timing in stuttering. Future perspectives for investigating how rhythmic aspects could aid the management of fluent speech in stuttering are discussed. Readers will be able to describe (1) current perspectives on singing and its effects on articulation and fluency in stuttering and (2) acoustic parameters such as VOT variability which indicate the efficiency of control and coordination of laryngeal-oral movements. They will understand and be able to discuss (3) how singing reduces temporal variability in the productions of adolescents who do and do not stutter and (4) how this is linked to altered articulatory patterns in singing as well as to its rhythmic structure. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Patterning and manipulating microparticles into a three-dimensional matrix using standing surface acoustic waves

    NASA Astrophysics Data System (ADS)

    Nguyen, T. D.; Tran, V. T.; Fu, Y. Q.; Du, H.

    2018-05-01

    A method based on standing surface acoustic waves (SSAWs) is proposed to pattern and manipulate microparticles into a three-dimensional (3D) matrix inside a microchamber. An optical prism is used to observe the 3D alignment and patterning of the microparticles in the vertical and horizontal planes simultaneously. The acoustic radiation force effectively patterns the microparticles into lines of 3D space or crystal-lattice-like matrix patterns. A microparticle can be positioned precisely at a specified vertical location by balancing the forces of acoustic radiation, drag, buoyancy, and gravity acting on the microparticle. Experiments and finite-element numerical simulations both show that the acoustic radiation force increases gradually from the bottom of the chamber to the top, and microparticles can be moved up or down simply by adjusting the applied SSAW power. Our method has great potential for acoustofluidic applications, building the large-scale structures associated with biological objects and artificial neuron networks.

  6. Short-term hydrophysical and biological variability over the northeastern Black Sea continental slope as inferred from multiparametric tethered profiler surveys

    NASA Astrophysics Data System (ADS)

    Ostrovskii, Alexander; Zatsepin, Andrey

    2011-06-01

    This presentation introduces a new ocean autonomous profiler for multiparametric surveys at fixed geographical locations. The profiler moves down and up along a mooring line, which is held taut vertically between a subsurface float and an anchor. This observational platform carries such modern oceanographic equipment as the Nortek Aquadopp-3D current meter and the Teledyne RDI Citadel CTD-ES probe. The profiler was successfully tested in the northeastern Black Sea during 2007-2009. By using the profiler, new data on the layered organization of the marine environment in the waters over the upper part of the continental slope were obtained. The temporal variability of the fine-scale structure of the acoustic backscatter at 2 MHz was interpreted along with bio-optical and chemical data. The patchy patterns of the acoustic backscatter were associated with physical and biological processes such as advection, the propagation of submesoscale eddies, thermocline displacement, and the diel migration of zooplankton. Further applications of the multidisciplinary moored profiler technology are discussed.

  7. Emergent selectivity for task-relevant stimuli in higher-order auditory cortex

    PubMed Central

    Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.

    2014-01-01

    A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467

  8. Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data.

    PubMed

    Gow, David W; Olson, Bruna B

    2015-07-01

    Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.

  9. Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data

    PubMed Central

    Gow, David W.; Olson, Bruna B.

    2015-01-01

    Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical “gang effects” in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account. PMID:25883413

  10. Technical Note: Detection of gas bubble leakage via correlation of water column multibeam images

    NASA Astrophysics Data System (ADS)

    Schneider von Deimling, J.; Papenberg, C.

    2012-03-01

    Hydroacoustic detection of natural gas release from the seafloor has been conducted in the past by using singlebeam echosounders. In contrast, modern multibeam swath mapping systems allow much wider coverage and higher resolution, and offer 3-D spatial correlation. Up to the present, the extremely high data rate has hampered water column backscatter investigations, and more sophisticated visualization and processing techniques are needed. Here, we present water column backscatter data acquired with a 50 kHz prototype multibeam system over a period of 75 seconds. Data are displayed both as swath images and as a "re-sorted" singlebeam presentation. Thus, individual and/or groups of gas bubbles rising from the 24 m deep seafloor clearly emerge in the acoustic images, making it possible to estimate rise velocities. A sophisticated processing scheme is introduced to identify those rising gas bubbles in the hydroacoustic data. We apply a cross-correlation technique adapted from particle imaging velocimetry (PIV) to the acoustic backscatter images. Temporal and spatial drift patterns of the bubbles are assessed and are shown to match measured and theoretical rise patterns very well. The application of this processing to our field data gives clear results with respect to unambiguous bubble detection and remote bubble rise velocimetry. The method can identify and exclude the main source of misinterpretations, i.e. fish-mediated echoes. Although image-based cross-correlation techniques are well known in the field of fluid mechanics for high resolution and non-invasive current flow field analysis, we present the first application of this technique as an acoustic bubble detector.
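
    The bubble-detection scheme above adapts PIV-style cross-correlation to successive backscatter images. A minimal sketch of the core step, estimating the inter-frame shift from the peak of an FFT-based circular cross-correlation, is given below; dividing the shift by the frame interval and pixel size would yield a rise velocity. This is a generic illustration, not the authors' processing chain, and the sign convention depends on which frame is taken as reference.

        import numpy as np

        def displacement(frame_a, frame_b):
            # Estimate the (row, col) shift between two backscatter images by
            # locating the peak of their circular cross-correlation (via FFT).
            a = frame_a - frame_a.mean()
            b = frame_b - frame_b.mean()
            corr = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), s=a.shape)
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # wrap shifts larger than half the image into negative displacements
            shift = [p - s if p > s // 2 else p for p, s in zip(peak, a.shape)]
            return np.array(shift)

        # Example use: rise velocity in pixels per second for frames dt seconds apart
        # velocity = displacement(frame_a, frame_b) / dt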

  11. Nanoscale diffractive probing of strain dynamics in ultrafast transmission electron microscopy

    PubMed Central

    Feist, Armin; Rubiano da Silva, Nara; Liang, Wenxi; Ropers, Claus; Schäfer, Sascha

    2018-01-01

    The control of optically driven high-frequency strain waves in nanostructured systems is an essential ingredient for the further development of nanophononics. However, broadly applicable experimental means to quantitatively map such structural distortion on their intrinsic ultrafast time and nanometer length scales are still lacking. Here, we introduce ultrafast convergent beam electron diffraction with a nanoscale probe beam for the quantitative retrieval of the time-dependent local deformation gradient tensor. We demonstrate its capabilities by investigating the ultrafast acoustic deformations close to the edge of a single-crystalline graphite membrane. Tracking the structural distortion with a 28-nm/700-fs spatio-temporal resolution, we observe an acoustic membrane breathing mode with spatially modulated amplitude, governed by the optical near field structure at the membrane edge. Furthermore, an in-plane polarized acoustic shock wave is launched at the membrane edge, which triggers secondary acoustic shear waves with a pronounced spatio-temporal dependency. The experimental findings are compared to numerical acoustic wave simulations in the continuous medium limit, highlighting the importance of microscopic dissipation mechanisms and ballistic transport channels. PMID:29464187

  12. Nanoscale diffractive probing of strain dynamics in ultrafast transmission electron microscopy.

    PubMed

    Feist, Armin; Rubiano da Silva, Nara; Liang, Wenxi; Ropers, Claus; Schäfer, Sascha

    2018-01-01

    The control of optically driven high-frequency strain waves in nanostructured systems is an essential ingredient for the further development of nanophononics. However, broadly applicable experimental means to quantitatively map such structural distortion on their intrinsic ultrafast time and nanometer length scales are still lacking. Here, we introduce ultrafast convergent beam electron diffraction with a nanoscale probe beam for the quantitative retrieval of the time-dependent local deformation gradient tensor. We demonstrate its capabilities by investigating the ultrafast acoustic deformations close to the edge of a single-crystalline graphite membrane. Tracking the structural distortion with a 28-nm/700-fs spatio-temporal resolution, we observe an acoustic membrane breathing mode with spatially modulated amplitude, governed by the optical near field structure at the membrane edge. Furthermore, an in-plane polarized acoustic shock wave is launched at the membrane edge, which triggers secondary acoustic shear waves with a pronounced spatio-temporal dependency. The experimental findings are compared to numerical acoustic wave simulations in the continuous medium limit, highlighting the importance of microscopic dissipation mechanisms and ballistic transport channels.

  13. Movement patterns of silvertip sharks (Carcharhinus albimarginatus) on coral reefs

    NASA Astrophysics Data System (ADS)

    Espinoza, Mario; Heupel, Michelle. R.; Tobin, Andrew J.; Simpfendorfer, Colin A.

    2015-09-01

    Understanding how sharks use coral reefs is essential for assessing risk of exposure to fisheries, habitat loss, and climate change. Despite a wide Indo-Pacific distribution, little is known about the spatial ecology of silvertip sharks (Carcharhinus albimarginatus), compromising the ability to effectively manage their populations. We examined the residency and movements of silvertip sharks in the central Great Barrier Reef (GBR). An array of 56 VR2W acoustic receivers was used to monitor shark movements on 17 semi-isolated reefs. Twenty-seven individuals tagged with acoustic transmitters were monitored from 70 to 731 d. Residency index to the study site ranged from 0.05 to 0.97, with a mean residency (±SD) of 0.57 ± 0.26, but most individuals were detected at or near their tagging reef. Clear seasonal patterns were apparent, with fewer individuals detected between September and February. A large proportion of the tagged population (>71%) moved regularly between reefs. Silvertip sharks were detected less during daytime and exhibited a strong diel pattern in depth use, which may be a strategy for optimizing energetic budgets and foraging opportunities. This study provides the first detailed examination of the spatial ecology and behavior of silvertip sharks on coral reefs. Silvertip sharks remained resident at coral reef habitats over long periods, but our results also suggest this species may have more complex movement patterns and use larger areas of the GBR than common reef shark species. Our findings highlight the need to further understand the movement ecology of silvertip sharks at different spatial and temporal scales, which is critical for developing effective management approaches.

  14. Seasonal and ontogenetic changes in movement patterns of sixgill sharks.

    PubMed

    Andrews, Kelly S; Williams, Greg D; Levin, Phillip S

    2010-09-08

    Understanding movement patterns is fundamental to population and conservation biology. The way an animal moves through its environment influences the dynamics of local populations and will determine how susceptible it is to natural or anthropogenic perturbations. It is of particular interest to understand the patterns of movement for species which are susceptible to human activities (e.g. fishing), or that exert a large influence on community structure, such as sharks. We monitored the patterns of movement of 34 sixgill sharks Hexanchus griseus using two large-scale acoustic arrays inside and outside Puget Sound, Washington, USA. Sixgill sharks were residents in Puget Sound for up to at least four years before making large movements out of the estuary. Within Puget Sound, sixgills inhabited sites for several weeks at a time and returned to the same sites annually. Across four years, sixgills had consistent seasonal movements in which they moved to the north from winter to spring and moved to the south from summer to fall. Just prior to leaving Puget Sound, sixgills altered their behavior and moved twice as fast among sites. Nineteen of the thirty-four sixgills were detected leaving Puget Sound for the outer coast. Three of these sharks returned to Puget Sound. For most large marine predators, we have a limited understanding of how they move through their environment, and this clouds our ability to successfully manage their populations and their communities. With detailed movement information, such as that being uncovered with acoustic monitoring, we can begin to quantify the spatial and temporal impacts of large predators within the framework of their ecosystems.

  15. Dependence of the Startle Response on Temporal and Spectral Characteristics of Acoustic Modulatory Influences in Rats and Gerbils

    PubMed Central

    Steube, Natalie; Nowotny, Manuela; Pilz, Peter K. D.; Gaese, Bernhard H.

    2016-01-01

    The acoustic startle response (ASR) and its modulation by non-startling prepulses, presented shortly before the startle-eliciting stimulus, is a broadly applied test paradigm to determine changes in neural processing related to auditory or psychiatric disorders. Modulation by a gap in background noise as a prepulse is especially used for tinnitus assessment. However, the timing and frequency-related aspects of prepulses are not fully understood. The present study aims to investigate temporal and spectral characteristics of acoustic stimuli that modulate the ASR in rats and gerbils. For noise-burst prepulses, inhibition was frequency-independent in gerbils in the test range between 4 and 18 kHz. Prepulse inhibition (PPI) by noise-bursts in rats was constant in a comparable range (8–22 kHz), but lower outside this range. Purely temporal aspects of prepulse–startle interactions were investigated for gap-prepulses, focusing mainly on gap duration. While very short gaps had no (rats) or slightly facilitatory (gerbils) influence on the ASR, longer gaps always had a strong inhibitory effect. Inhibition increased with durations up to 75 ms and remained at a high level for durations up to 1000 ms in both rats and gerbils. Determining spectral influences on gap-prepulse inhibition (gap-PPI) revealed that gerbils were unaffected in the limited frequency range tested (4–18 kHz). The more detailed analysis in rats revealed a variety of frequency-dependent effects. Gaps in pure-tone background elicited constant and high inhibition (around 75%) over a broad frequency range (4–32 kHz). For gaps in noise-bands, on the other hand, a clear frequency-dependency was found: inhibition was around 50% at lower frequencies (6–14 kHz) and around 70% at high frequencies (16–20 kHz). This pattern of frequency dependency in rats resulted specifically from the inhibitory effect of the gaps, as revealed by detailed analysis of the underlying startle amplitudes. An interaction of temporal and spectral influences, finally, resulted in higher inhibition for 500 ms gaps than for 75 ms gaps at all frequencies tested. Improved prepulse paradigms based on these results are well suited to quantify the consequences of central processing disorders. PMID:27445728

  16. Passive metamaterial-based acoustic holograms in ultrasound energy transfer systems

    NASA Astrophysics Data System (ADS)

    Bakhtiari-Nejad, Marjan; Elnahhas, Ahmed; Hajj, Muhammad R.; Shahab, Shima

    2018-03-01

    Contactless energy transfer (CET) is a technology that is particularly relevant in applications where wired electrical contact is dangerous or impractical. Furthermore, it would enhance the development, use, and reliability of low-power sensors in applications where changing batteries is not practical or may not be a viable option. One CET method that has recently attracted interest is the ultrasonic acoustic energy transfer, which is based on the reception of acoustic waves at ultrasonic frequencies by a piezoelectric receiver. Patterning and focusing the transmitted acoustic energy in space is one of the challenges for enhancing the power transmission and locally charging sensors or devices. We use a mathematically designed passive metamaterial-based acoustic hologram to selectively power an array of piezoelectric receivers using an unfocused transmitter. The acoustic hologram is employed to create a multifocal pressure pattern in the target plane where the receivers are located inside focal regions. We conduct multiphysics simulations in which a single transmitter is used to power multiple receivers with an arbitrary two-dimensional spatial pattern via wave controlling and manipulation, using the hologram. We show that the multi-focal pressure pattern created by the passive acoustic hologram will enhance the power transmission for most receivers.

  17. Nonverbal auditory agnosia with lesion to Wernicke's area.

    PubMed

    Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic

    2010-01-01

    We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.

  18. Brain bases for auditory stimulus-driven figure-ground segregation.

    PubMed

    Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D

    2011-01-05

    Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.

  19. Firing-rate resonances in the peripheral auditory system of the cricket, Gryllus bimaculatus.

    PubMed

    Rau, Florian; Clemens, Jan; Naumov, Victor; Hennig, R Matthias; Schreiber, Susanne

    2015-11-01

    In many communication systems, information is encoded in the temporal pattern of signals. For rhythmic signals that carry information in specific frequency bands, a neuronal system may profit from tuning its inherent filtering properties towards a peak sensitivity in the respective frequency range. The cricket Gryllus bimaculatus evaluates acoustic communication signals of both conspecifics and predators. The song signals of conspecifics exhibit a characteristic pulse pattern that contains only a narrow range of modulation frequencies. We examined individual neurons (AN1, AN2, ON1) in the peripheral auditory system of the cricket for tuning towards specific modulation frequencies by assessing their firing-rate resonance. Acoustic stimuli with a swept-frequency envelope allowed an efficient characterization of the cells' modulation transfer functions. Some of the examined cells exhibited tuned band-pass properties. Using simple computational models, we demonstrate how different, cell-intrinsic or network-based mechanisms such as subthreshold resonances, spike-triggered adaptation, as well as an interplay of excitation and inhibition can account for the experimentally observed firing-rate resonances. Therefore, basic neuronal mechanisms that share negative feedback as a common theme may contribute to selectivity in the peripheral auditory pathway of crickets that is designed towards mate recognition and predator avoidance.

  20. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  1. A practical method of predicting the loudness of complex electrical stimuli

    NASA Astrophysics Data System (ADS)

    McKay, Colette M.; Henshall, Katherine R.; Farrell, Rebecca J.; McDermott, Hugh J.

    2003-04-01

    The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514-1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds' duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.
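
    The key simplification above is that current pulses falling within a temporal integration window of a few milliseconds contribute independently to the overall loudness. A toy sketch of that bookkeeping is shown below; the window duration, the placeholder loudness-growth function, and the choice of reporting the loudest window are illustrative assumptions, not the fitted model of McKay et al.

        import numpy as np

        def growth(level):
            # Placeholder loudness-growth function (power law in current level), not a fitted curve.
            return (np.asarray(level) / 100.0) ** 2.0

        def overall_loudness(pulse_times, pulse_levels, loudness_growth=growth, window_s=0.005):
            # Toy loudness estimate for a pulsatile electrical stimulus: each pulse contributes
            # loudness_growth(level), and pulses within one integration window sum independently.
            t = np.asarray(pulse_times)
            L = loudness_growth(np.asarray(pulse_levels))        # per-pulse contributions
            centres = np.arange(t.min(), t.max() + window_s, window_s)
            window_sums = [L[(t >= c) & (t < c + window_s)].sum() for c in centres]
            return max(window_sums) if window_sums else 0.0      # loudest integration window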

  2. Can you hear me now? Range-testing a submerged passive acoustic receiver array in a Caribbean coral reef habitat

    USGS Publications Warehouse

    Selby, Thomas H.; Hart, Kristen M.; Fujisaki, Ikuko; Smith, Brian J.; Pollock, Clayton J; Hillis-Star, Zandy M; Lundgren, Ian; Oli, Madan K.

    2016-01-01

    Submerged passive acoustic technology allows researchers to investigate spatial and temporal movement patterns of many marine and freshwater species. The technology uses receivers to detect and record acoustic transmissions emitted from tags attached to an individual. Acoustic signal strength naturally attenuates over distance, but numerous environmental variables also affect the probability a tag is detected. Knowledge of receiver range is crucial for designing acoustic arrays and analyzing telemetry data. Here, we present a method for testing a relatively large-scale receiver array in a dynamic Caribbean coastal environment intended for long-term monitoring of multiple species. The U.S. Geological Survey and several academic institutions in collaboration with resource management at Buck Island Reef National Monument (BIRNM), off the coast of St. Croix, recently deployed an array of 52 passive acoustic receivers. We targeted 19 array-representative receivers for range-testing by submerging fixed-delay-interval range-testing tags at various distance intervals in each cardinal direction from a receiver for a minimum of an hour. Using a generalized linear mixed model (GLMM), we estimated the probability of detection across the array and assessed the effect of water depth, habitat, wind, temperature, and time of day on the probability of detection. The predicted probability of detection across the entire array at 100 m distance from a receiver was 58.2% (95% CI: 44.0–73.0%) and dropped to 26.0% (95% CI: 11.4–39.3%) 200 m from a receiver, indicating a somewhat constrained effective detection range. Detection probability varied across habitat classes, with the greatest effective detection range occurring in homogeneous sand substrate and the smallest in high rugosity reef. Predicted probability of detection across BIRNM highlights potential gaps in coverage using the current array as well as limitations of passive acoustic technology within a complex coral reef environment.
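
    The detection probabilities above come from a generalized linear mixed model. The fragment below fits only a plain logistic GLM to simulated range-test data with statsmodels, as a minimal sketch of how detection probability can be related to distance; the receiver-level random effects and environmental covariates used in the study are omitted, and all data here are simulated.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)

        # Simulated range-test trials: detection probability decays with distance.
        distance = rng.choice([50, 100, 200, 300, 400], size=400)
        p_true = 1.0 / (1.0 + np.exp(-(2.5 - 0.015 * distance)))
        df = pd.DataFrame({"distance": distance,
                           "detected": rng.binomial(1, p_true)})

        # Plain logistic GLM; the published analysis used a GLMM with receiver-level
        # random effects and covariates such as habitat, depth, wind, and temperature.
        fit = smf.glm("detected ~ distance", data=df, family=sm.families.Binomial()).fit()

        # Predicted detection probability at 100 m and 200 m from a receiver
        print(fit.predict(pd.DataFrame({"distance": [100, 200]})))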

  3. Combined electric and acoustic hearing performance with Zebra® speech processor: speech reception, place, and temporal coding evaluation.

    PubMed

    Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J

    2013-06-01

    To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS) with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.

  4. Acoustic richness modulates the neural networks supporting intelligible speech processing.

    PubMed

    Lee, Yune-Sang; Min, Nam Eun; Wingfield, Arthur; Grossman, Murray; Peelle, Jonathan E

    2016-03-01

    The information contained in a sensory signal plays a critical role in determining what neural processes are engaged. Here we used interleaved silent steady-state (ISSS) functional magnetic resonance imaging (fMRI) to explore how human listeners cope with different degrees of acoustic richness during auditory sentence comprehension. Twenty-six healthy young adults underwent scanning while hearing sentences that varied in acoustic richness (high vs. low spectral detail) and syntactic complexity (subject-relative vs. object-relative center-embedded clause structures). We manipulated acoustic richness by presenting the stimuli as unprocessed full-spectrum speech, or noise-vocoded with 24 channels. Importantly, although the vocoded sentences were spectrally impoverished, all sentences were highly intelligible. These manipulations allowed us to test how intelligible speech processing was affected by orthogonal linguistic and acoustic demands. Acoustically rich speech showed stronger activation than acoustically less-detailed speech in a bilateral temporoparietal network with more pronounced activity in the right hemisphere. By contrast, listening to sentences with greater syntactic complexity resulted in increased activation of a left-lateralized network including left posterior lateral temporal cortex, left inferior frontal gyrus, and left dorsolateral prefrontal cortex. Significant interactions between acoustic richness and syntactic complexity occurred in left supramarginal gyrus, right superior temporal gyrus, and right inferior frontal gyrus, indicating that the regions recruited for syntactic challenge differed as a function of acoustic properties of the speech. Our findings suggest that the neural systems involved in speech perception are finely tuned to the type of information available, and that reducing the richness of the acoustic signal dramatically alters the brain's response to spoken language, even when intelligibility is high. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Spontaneous generalization of abstract multimodal patterns in young domestic chicks.

    PubMed

    Versace, Elisabetta; Spierings, Michelle J; Caffini, Matteo; Ten Cate, Carel; Vallortigara, Giorgio

    2017-05-01

    From the early stages of life, learning the regularities associated with specific objects is crucial for making sense of experiences. Through filial imprinting, young precocial birds quickly learn the features of their social partners by mere exposure. It is not clear though to what extent chicks can extract abstract patterns of the visual and acoustic stimuli present in the imprinting object, and how they combine them. To investigate this issue, we exposed chicks (Gallus gallus) to three days of visual and acoustic imprinting, using either patterns with two identical items or patterns with two different items, presented visually, acoustically or in both modalities. Next, chicks were given a choice between the familiar and the unfamiliar pattern, present in either the multimodal, visual or acoustic modality. The responses to the novel stimuli were affected by their imprinting experience, and the effect was stronger for chicks imprinted with multimodal patterns than for the other groups. Interestingly, males and females adopted a different strategy, with males more attracted by unfamiliar patterns and females more attracted by familiar patterns. Our data show that chicks can generalize abstract patterns by mere exposure through filial imprinting and that multimodal stimulation is more effective than unimodal stimulation for pattern learning.

  6. Passive acoustic monitoring of bed load for fluvial applications

    USDA-ARS?s Scientific Manuscript database

    The sediment transported as bed load in streams and rivers is notoriously difficult to monitor cheaply and accurately. Passive acoustic methods are relatively simple, inexpensive, and provide spatial integration along with high temporal resolution. In 1963 work began on monitoring emissions from par...

  7. Occurrence Frequencies of Acoustic Patterns of Vocal Fry in American English Speakers.

    PubMed

    Abdelli-Beruh, Nassima B; Drugman, Thomas; Red Owl, R H

    2016-11-01

    The goal of this study was to analyze the occurrence frequencies of three individual acoustic patterns (A, B, C) and of vocal fry overall (A + B + C) as a function of gender, word position in the sentence (Not Last Word vs. Last Word), and sentence length (number of words in a sentence). This is an experimental design. Twenty-five male and 29 female American English (AE) speakers read the Grandfather Passage. The recordings were processed by a Matlab toolbox designed for the analysis and detection of creaky segments, automatically identified using the Kane-Drugman algorithm. The experiment produced subsamples of outcomes, three that reflect a single, discrete acoustic pattern (A, B, or C) and the fourth that reflects the occurrence frequency counts of Vocal Fry Overall without regard to any specific pattern. Zero-truncated Poisson regression analyses were conducted with Gender and Word Position as predictors and Sentence Length as a covariate. The results of the present study showed that the occurrence frequencies of the three acoustic patterns and vocal fry overall (A + B + C) are greatest at the end of sentences but are unaffected by sentence length. The findings also reveal that AE female speakers exhibit Pattern C significantly more frequently than Pattern B, and the converse holds for AE male speakers. Future studies are needed to confirm such outcomes, assess the perceptual salience of these acoustic patterns, and determine the physiological correlates of these acoustic patterns. The findings have implications for the design of new excitation models of vocal fry. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
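
    Because vocal-fry counts per sentence are strictly positive in this design, the analysis named above is a zero-truncated Poisson regression. The sketch below is only an illustration of that model class, fitted by maximum likelihood on synthetic data; the predictors, coefficients, and data are hypothetical, not the study's.

    ```python
    # Illustrative zero-truncated Poisson regression (not the authors' code).
    # Counts y are strictly positive; lambda depends on predictors via a log link.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    rng = np.random.default_rng(1)
    n = 300
    X = np.column_stack([np.ones(n),
                         rng.integers(0, 2, n),      # hypothetical "last word" flag
                         rng.integers(5, 25, n)])    # hypothetical sentence length
    beta_true = np.array([0.3, 0.8, 0.0])
    y = rng.poisson(np.exp(X @ beta_true))
    keep = y > 0                        # zero-truncation: only positive counts observed
    X, y = X[keep], y[keep]

    def nll(beta):
        lam = np.exp(X @ beta)
        # log P(Y = y | Y > 0) = y*log(lam) - lam - log(y!) - log(1 - exp(-lam))
        ll = y * np.log(lam) - lam - gammaln(y + 1) - np.log1p(-np.exp(-lam))
        return -ll.sum()

    fit = minimize(nll, x0=np.zeros(3), method="BFGS")
    print("estimated coefficients:", fit.x)
    ```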

  8. Internal Wave Impact on the Performance of a Hypothetical Mine Hunting Sonar

    DTIC Science & Technology

    2014-10-01

    time steps) to simulate the propagation of the internal wave field through the mine field. Again the transmission loss and acoustic signal strength...dependent internal wave perturbed sound speed profile was evaluated by calculating the temporal variability of the signal excess (SE) of acoustic...internal wave perturbation of the sound speed profile, was calculated for a limited sound speed field time section. Acoustic signals were projected

  9. A Numerical Investigation of Turbine Noise Source Hierarchy and Its Acoustic Transmission Characteristics: Proof-of-Concept Progress

    NASA Technical Reports Server (NTRS)

    VanZante, Dale; Envia, Edmane

    2008-01-01

    A CFD-based simulation of a single-stage turbine was performed using the TURBO code to assess its viability for determining acoustic transmission through blade rows. Temporal and spectral analysis of the unsteady pressure data from the numerical simulations showed the allowable Tyler-Sofrin modes, consistent with expectations. This indicated that high-fidelity acoustic transmission calculations are feasible with TURBO.

  10. An fMRI examination of the effects of acoustic-phonetic and lexical competition on access to the lexical-semantic network.

    PubMed

    Minicucci, Domenic; Guediche, Sara; Blumstein, Sheila E

    2013-08-01

    The current study explored how factors of acoustic-phonetic and lexical competition affect access to the lexical-semantic network during spoken word recognition. An auditory semantic priming lexical decision task was presented to subjects while in the MR scanner. Prime-target pairs consisted of prime words with the initial voiceless stop consonants /p/, /t/, and /k/ followed by word and nonword targets. To examine the neural consequences of lexical and sound structure competition, primes either had voiced minimal pair competitors or they did not, and they were either acoustically modified to be poorer exemplars of the voiceless phonetic category or not. Neural activation associated with semantic priming (Unrelated-Related conditions) revealed a bilateral fronto-temporo-parietal network. Within this network, clusters in the left insula/inferior frontal gyrus (IFG), left superior temporal gyrus (STG), and left posterior middle temporal gyrus (pMTG) showed sensitivity to lexical competition. The pMTG also demonstrated sensitivity to acoustic modification, and the insula/IFG showed an interaction between lexical competition and acoustic modification. These findings suggest the posterior lexical-semantic network is modulated by both acoustic-phonetic and lexical structure, and that the resolution of these two sources of competition recruits frontal structures. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Low Frequency Vibrations Disrupt Left-Right Patterning in the Xenopus Embryo

    PubMed Central

    Vandenberg, Laura N.; Pennarola, Brian W.; Levin, Michael

    2011-01-01

    The development of consistent left-right (LR) asymmetry across phyla is a fascinating question in biology. While many pharmacological and molecular approaches have been used to explore molecular mechanisms, it has proven difficult to exert precise temporal control over functional perturbations. Here, we took advantage of acoustical vibration to disrupt LR patterning in Xenopus embryos during tightly-circumscribed periods of development. Exposure to several low frequencies induced specific randomization of three internal organs (heterotaxia). Investigating one frequency (7 Hz), we found two discrete periods of sensitivity to vibration; during the first period, vibration affected the same LR pathway as nocodazole, while during the second period, vibration affected the integrity of the epithelial barrier; both are required for normal LR patterning. Our results indicate that low frequency vibrations disrupt two steps in the early LR pathway: the orientation of the LR axis with the other two axes, and the amplification/restriction of downstream LR signals to asymmetric organs. PMID:21826245

  12. Oscillating load-induced acoustic emission in laboratory experiment

    USGS Publications Warehouse

    Ponomarev, Alexander; Lockner, David A.; Stroganova, S.; Stanchits, S.; Smirnov, Vladmir

    2010-01-01

    Spatial and temporal patterns of acoustic emission (AE) were studied. A pre-fractured cylinder of granite was loaded in a triaxial machine at 160 MPa confining pressure until stick-slip events occurred. The experiments were conducted at a constant strain rate of 10−7 s−1 that was modulated by small-amplitude sinusoidal oscillations with periods of 175 and 570 seconds. Amplitude of the oscillations was a few percent of the total load and was intended to simulate periodic loading observed in nature (e.g., earth tides or other sources). An ultrasonic acquisition system with 13 piezosensors recorded acoustic emissions that were generated during deformation of the sample. We observed a correlation between AE response and sinusoidal loading. The effect was more pronounced for higher frequency of the modulating force. A time-space spectral analysis for a “point” process was used to investigate details of the periodic AE components. The main result of the study was the correlation of oscillations of acoustic activity synchronized with the applied oscillating load. The intensity of the correlated AE activity was most pronounced in the “aftershock” sequences that followed large-amplitude AE events. We suggest that this is due to the higher strain-sensitivity of the failure area when the sample is in a transient, unstable mode. We also found that the synchronization of AE activity with the oscillating external load nearly disappeared in the period immediately after the stick-slip events and gradually recovered with further loading.
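
    One simple way to quantify whether AE event times are synchronized with a sinusoidal load of known period is a Rayleigh test on the event phases. The sketch below is a simplified stand-in for the time-space spectral point-process analysis described above; the event catalogue and parameters are synthetic.

    ```python
    # Sketch: test phase-locking of acoustic-emission event times to a 175 s load cycle.
    import numpy as np

    def rayleigh_test(event_times, period):
        """Return mean resultant length R and approximate Rayleigh p-value."""
        phases = 2 * np.pi * (np.asarray(event_times) % period) / period
        n = phases.size
        R = np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / n
        z = n * R**2
        p = np.exp(-z) * (1 + (2 * z - z**2) / (4 * n))   # standard approximation
        return R, p

    # toy AE catalogue: events weakly clustered around one phase of the load cycle
    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 3600, 500))
    t = t[(np.cos(2 * np.pi * t / 175.0) + rng.normal(0, 1.2, t.size)) > 0.8]
    print(rayleigh_test(t, period=175.0))
    ```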

  13. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low-frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  14. Sensor apparatus

    DOEpatents

    Deason, Vance A [Idaho Falls, ID; Telschow, Kenneth L [Idaho Falls, ID

    2009-12-22

    A sensor apparatus and method for detecting an environmental factor are described. The apparatus includes an acoustic device that has a characteristic resonant vibrational frequency and mode pattern when exposed to a source of acoustic energy and that, further, when exposed to an environmental factor, produces a different resonant vibrational frequency and/or mode pattern under the same source of acoustic energy.

  15. Critical Song Features for Auditory Pattern Recognition in Crickets

    PubMed Central

    Meckenhäuser, Gundula; Hennig, R. Matthias; Nawrot, Martin P.

    2013-01-01

    Many different invertebrate and vertebrate species use acoustic communication for pair formation. In the cricket Gryllus bimaculatus, females recognize their species-specific calling song and localize singing males by positive phonotaxis. The song pattern of males has a clear structure consisting of brief and regular pulses that are grouped into repetitive chirps. Information is thus present on a short and a long time scale. Here, we ask which structural features of the song critically determine the phonotactic performance. To this end we employed artificial neural networks to analyze a large body of behavioral data that measured females’ phonotactic behavior under systematic variation of artificially generated song patterns. In a first step we used four non-redundant descriptive temporal features to predict the female response. The model prediction showed a high correlation with the experimental results. We used this behavioral model to explore the integration of the two different time scales. Our result suggested that only an attractive pulse structure in combination with an attractive chirp structure reliably induced phonotactic behavior to signals. In a further step we investigated all feature sets, each one consisting of a different combination of eight proposed temporal features. We identified feature sets of size two, three, and four that achieve highest prediction power by using the pulse period from the short time scale plus additional information from the long time scale. PMID:23437054
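
    The behavioral model described above maps a handful of temporal song features to a phonotactic response. The minimal sketch below shows the same kind of feed-forward regression on synthetic data; the feature names, the "attractiveness" rule, and the network size are all assumptions for illustration, not the study's dataset or architecture.

    ```python
    # Hypothetical sketch: small feed-forward network predicting a phonotaxis score
    # from temporal song features (pulse period, pulse duration, chirp duration, pause).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 500
    X = np.column_stack([rng.uniform(20, 80, n),    # pulse period (ms)
                         rng.uniform(5, 40, n),     # pulse duration (ms)
                         rng.uniform(100, 600, n),  # chirp duration (ms)
                         rng.uniform(50, 500, n)])  # chirp pause (ms)
    # invented rule: songs are attractive only when both timescales are "right"
    y = np.exp(-((X[:, 0] - 40) / 10) ** 2) * np.exp(-((X[:, 2] - 300) / 100) ** 2)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
    print("held-out R^2:", net.score(X_te, y_te))
    ```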

  16. Directional radiation pattern in structural-acoustic coupled system

    NASA Astrophysics Data System (ADS)

    Seo, Hee-Seon; Kim, Yang-Hann

    2005-07-01

    In this paper we demonstrate the possibility of designing a radiator using structural-acoustic interaction by predicting the pressure distribution and radiation pattern of a structural-acoustic coupling system that is composed by a wall and two spaces. If a wall separates spaces, then the wall's role in transporting the acoustic characteristics of the spaces is important. The spaces can be categorized as bounded finite space and unbounded infinite space. The wall considered in this study composes two plates and an opening, and the wall separates one space that is highly reverberant and the other that is unbounded without any reflection. This rather hypothetical circumstance is selected to study the general coupling problem between the finite and infinite acoustic domains. We developed an equation that predicts the energy distribution and energy flow in the two spaces separated by a wall, and its computational examples are presented. Three typical radiation patterns that include steered, focused, and omnidirected are presented. A designed radiation pattern is also presented by using the optimal design algorithm.

  17. Hierarchical organization in the temporal structure of infant-direct speech and song.

    PubMed

    Falk, Simone; Kello, Christopher T

    2017-06-01

    Caregivers alter the temporal structure of their utterances when talking and singing to infants compared with adult communication. The present study tested whether temporal variability in infant-directed registers serves to emphasize the hierarchical temporal structure of speech. Fifteen German-speaking mothers sang a play song and told a story to their 6-months-old infants, or to an adult. Recordings were analyzed using a recently developed method that determines the degree of nested clustering of temporal events in speech. Events were defined as peaks in the amplitude envelope, and clusters of various sizes related to periods of acoustic speech energy at varying timescales. Infant-directed speech and song clearly showed greater event clustering compared with adult-directed registers, at multiple timescales of hundreds of milliseconds to tens of seconds. We discuss the relation of this newly discovered acoustic property to temporal variability in linguistic units and its potential implications for parent-infant communication and infants learning the hierarchical structures of speech and language. Copyright © 2017 Elsevier B.V. All rights reserved.
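
    The first step assumed by such an analysis is turning a recording into a series of events: amplitude-envelope peaks. The sketch below shows that step only, on a toy signal; the subsequent nested-clustering analysis across timescales (e.g., Allan-factor style statistics) would operate on the resulting event times and is not reproduced here.

    ```python
    # Sketch of event extraction: Hilbert envelope, light smoothing, then peak picking.
    import numpy as np
    from scipy.signal import hilbert, find_peaks

    fs = 16000
    t = np.arange(0, 5.0, 1 / fs)
    # toy "speech-like" signal: a carrier with slow, bursty amplitude modulation
    signal = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.abs(np.sin(2 * np.pi * 3 * t)) ** 4)

    envelope = np.abs(hilbert(signal))
    win = int(0.02 * fs)                                   # ~20 ms moving average
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")

    peaks, _ = find_peaks(envelope, height=0.3, distance=int(0.05 * fs))
    event_times = peaks / fs
    print(f"{event_times.size} envelope peaks (events) found")
    ```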

  18. Complex auditory behaviour emerges from simple reactive steering

    NASA Astrophysics Data System (ADS)

    Hedwig, Berthold; Poulet, James F. A.

    2004-08-01

    The recognition and localization of sound signals is fundamental to acoustic communication. Complex neural mechanisms are thought to underlie the processing of species-specific sound patterns even in animals with simple auditory pathways. In female crickets, which orient towards the male's calling song, current models propose pattern recognition mechanisms based on the temporal structure of the song. Furthermore, it is thought that localization is achieved by comparing the output of the left and right recognition networks, which then directs the female to the pattern that most closely resembles the species-specific song. Here we show, using a highly sensitive method for measuring the movements of female crickets, that when walking and flying each sound pulse of the communication signal releases a rapid steering response. Thus auditory orientation emerges from reactive motor responses to individual sound pulses. Although the reactive motor responses are not based on the song structure, a pattern recognition process may modulate the gain of the responses on a longer timescale. These findings are relevant to concepts of insect auditory behaviour and to the development of biologically inspired robots performing cricket-like auditory orientation.

  19. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    PubMed

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors 0270-6474/15/357256-08$15.00/0.
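
    The linear regression approach mentioned above is commonly implemented as a temporal response function: regressing the EEG on time-lagged copies of a stimulus feature. The sketch below illustrates that idea with ridge regression on synthetic data; the sampling rate, lag range, regularization, and data are assumptions, not the study's pipeline.

    ```python
    # Illustrative temporal-response-function estimate via ridge regression.
    import numpy as np
    from sklearn.linear_model import Ridge

    fs = 64                                   # Hz, downsampled EEG (assumed)
    rng = np.random.default_rng(4)
    coherence = rng.normal(size=fs * 60)      # stimulus feature time course, 60 s
    true_trf = np.exp(-np.arange(0, 0.3, 1 / fs) / 0.1)   # decaying "response"
    eeg = np.convolve(coherence, true_trf)[:coherence.size] + rng.normal(0, 2, coherence.size)

    lags = np.arange(0, int(0.3 * fs))        # 0-300 ms of causal lags
    X = np.column_stack([np.roll(coherence, lag) for lag in lags])
    X[:lags.max(), :] = 0                     # discard wrap-around samples

    model = Ridge(alpha=10.0).fit(X, eeg)
    estimated_trf = model.coef_               # one weight per lag, i.e. the TRF
    print(estimated_trf[:5])
    ```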

  20. Acoustic (loudspeaker) facial EMG monitoring: II. Use of evoked EMG activity during acoustic neuroma resection.

    PubMed

    Prass, R L; Kinney, S E; Hardy, R W; Hahn, J F; Lüders, H

    1987-12-01

    Facial electromyographic (EMG) activity was continuously monitored via loudspeaker during eleven translabyrinthine and nine suboccipital consecutive unselected acoustic neuroma resections. Ipsilateral facial EMG activity was synchronously recorded on the audio channels of operative videotapes, which were retrospectively reviewed in order to allow detailed evaluation of the potential benefit of various acoustic EMG patterns in the performance of specific aspects of acoustic neuroma resection. The use of evoked facial EMG activity was classified and described. Direct local mechanical (surgical) stimulation and direct electrical stimulation were of benefit in the localization and/or delineation of the facial nerve contour. Burst and train acoustic patterns of EMG activity appeared to indicate surgical trauma to the facial nerve that would not have been appreciated otherwise. Early results of postoperative facial function of monitored patients are presented, and the possible value of burst and train acoustic EMG activity patterns in the intraoperative assessment of facial nerve function is discussed. Acoustic facial EMG monitoring appears to provide a potentially powerful surgical tool for delineation of the facial nerve contour, the ongoing use of which may lead to continued improvement in facial nerve function preservation through modification of dissection strategy.

  1. Acoustic beam steering by light refraction: illustration with directivity patterns of a tilted volume photoacoustic source.

    PubMed

    Raetz, Samuel; Dehoux, Thomas; Perton, Mathieu; Audoin, Bertrand

    2013-12-01

    The symmetry of a thermoelastic source resulting from laser absorption can be broken when the direction of light propagation in an elastic half-space is inclined relatively to the surface. This leads to an asymmetry of the directivity patterns of both compressional and shear acoustic waves. In contrast to classical surface acoustic sources, the tunable volume source allows one to take advantage of the mode conversion at the surface to control the directivity of specific modes. Physical interpretations of the evolution of the directivity patterns with the increasing light angle of incidence and of the relations between the preferential directions of compressional- and shear-wave emission are proposed. In order to compare calculated directivity patterns with measurements of normal displacement amplitudes performed on plates, a procedure is proposed to transform the directivity patterns into pseudo-directivity patterns representative of the experimental conditions. The comparison of the theoretical with measured pseudo-directivity patterns demonstrates the ability to enhance bulk-wave amplitudes and to steer specific bulk acoustic modes by adequately tuning light refraction.

  2. Independence of Early Speech Processing from Word Meaning

    PubMed Central

    Travis, Katherine E.; Leonard, Matthew K.; Chan, Alexander M.; Torres, Christina; Sizemore, Marisa L.; Qu, Zhe; Eskandar, Emad; Dale, Anders M.; Elman, Jeffrey L.; Cash, Sydney S.; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) with magnetic resonance imaging and electrocorticography to separate in anatomy and latency 2 fundamental stages underlying speech comprehension. The first acoustic-phonetic stage is selective for words relative to control stimuli individually matched on acoustic properties. It begins ∼60 ms after stimulus onset and is localized to middle superior temporal cortex. It was replicated in another experiment, but is strongly dissociated from the response to tones in the same subjects. Within the same task, semantic priming of the same words by a related picture modulates cortical processing in a broader network, but this does not begin until ∼217 ms. The earlier onset of acoustic-phonetic processing compared with lexico-semantic modulation was significant in each individual subject. The MEG source estimates were confirmed with intracranial local field potential and high gamma power responses acquired in 2 additional subjects performing the same task. These recordings further identified sites within superior temporal cortex that responded only to the acoustic-phonetic contrast at short latencies, or the lexico-semantic at long. The independence of the early acoustic-phonetic response from semantic context suggests a limited role for lexical feedback in early speech perception. PMID:22875868

  3. Observation and Simulation of Microseisms Offshore Ireland

    NASA Astrophysics Data System (ADS)

    Le Pape, Florian; Bean, Chris; Craig, David; Jousset, Philippe; Donne, Sarah; Möllhoff, Martin

    2017-04-01

    Although increasingly used in seismic imaging, ocean-induced ambient seismic noise is still not well understood, particularly how the signal propagates from ocean to land. Between January and September 2016, 10 broadband ocean-bottom seismometer (OBS) stations, including acoustic sensors (hydrophones), were deployed across the shelf offshore Donegal and out into the Rockall Trough. The preliminary results show spatial and temporal variability in the ocean-generated seismic noise, which holds information about changes in the generation source process, including meteorological conditions, as well as about the geological structure. In addition to the collected OBS data, numerical simulations of acoustic/seismic wave propagation are also considered in order to study the spatio-temporal variation of the broadband acoustic wavefield and its connection with the measured seismic wavefield in the region. Combining observations and simulations appears essential to better understand what controls the acoustic/seismic coupling at the sea floor, as well as the effect of the water column and sediment thickness on signal propagation. Ocean-generated seismic ambient noise recorded at the seafloor appears to behave differently in deep and shallow water, and 3D simulations of acoustic/seismic wave propagation look particularly promising for reconciling deep-ocean, shelf, and land seismic observations.

  4. Acoustic Signal Processing in Photorefractive Optical Systems.

    NASA Astrophysics Data System (ADS)

    Zhou, Gan

    This thesis discusses applications of the photorefractive effect in the context of acoustic signal processing. The devices and systems presented here illustrate the ideas and optical principles involved in holographic processing of acoustic information. The interest in optical processing stems from the similarities between holographic optical systems and contemporary models for massively parallel computation, in particular, neural networks. An initial step in acoustic processing is the transformation of acoustic signals into relevant optical forms. A fiber-optic transducer with photorefractive readout transforms acoustic signals into optical images corresponding to their short-time spectrum. The device analyzes complex sound signals and interfaces them with conventional optical correlators. The transducer consists of 130 multimode optical fibers sampling the spectral range of 100 Hz to 5 kHz logarithmically. A physical model of the human cochlea can help us understand some characteristics of human acoustic transduction and signal representation. We construct a life-sized cochlear model using elastic membranes coupled with two fluid-filled chambers, and use a photorefractive novelty filter to investigate its response. The detection sensitivity is determined to be 0.3 angstroms per root Hz at 2 kHz. Qualitative agreement is found between the model response and physiological data. Delay lines map time-domain signals into the space domain and permit holographic processing of temporal information. A parallel optical delay line using dynamic beam coupling in a rotating photorefractive crystal is presented. We experimentally demonstrate a 64-channel device with 0.5 seconds of time-delay and 167 Hz bandwidth. Acoustic signal recognition is described in a photorefractive system implementing the time-delay neural network model. The system consists of a photorefractive optical delay-line and a holographic correlator programmed in a LiNbO_3 crystal. We demonstrate the recognition of synthesized chirps as well as spoken words. A photorefractive ring resonator containing an optical delay line can learn temporal information through self-organization. We experimentally investigate a system that learns by itself and picks out the most-frequently-presented signals from the input. We also give results demonstrating the separation of two orthogonal temporal signals into two competing ring resonators.
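
    The representation produced optically by the transducer (a short-time spectrum pooled into 130 logarithmically spaced channels between 100 Hz and 5 kHz) can be sketched digitally for intuition. The channel count and frequency range come from the text above; everything else below (window length, test signal) is an arbitrary illustration.

    ```python
    # Digital sketch of a 130-channel log-spaced short-time spectrum, 100 Hz to 5 kHz.
    import numpy as np
    from scipy.signal import stft

    fs = 16000
    t = np.arange(0, 2.0, 1 / fs)
    x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)   # toy signal

    f, times, Z = stft(x, fs=fs, nperseg=512)
    power = np.abs(Z) ** 2

    edges = np.logspace(np.log10(100), np.log10(5000), 131)   # 130 channel edges
    channels = np.zeros((130, times.size))
    for i in range(130):
        band = (f >= edges[i]) & (f < edges[i + 1])
        if band.any():                       # narrow low-frequency bands may be empty
            channels[i] = power[band].mean(axis=0)
    print(channels.shape)                    # (130 channels, number of time frames)
    ```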

  5. Faraday Wave Turbulence on a Spherical Liquid Shell

    NASA Technical Reports Server (NTRS)

    Holt, R. Glynn; Trinh, Eugene H.

    1996-01-01

    Millimeter-radius liquid shells are acoustically levitated in an ultrasonic field. Capillary waves are observed on the shells. At low energies (minimal acoustic amplitude, thick shell) a resonance is observed between the symmetric and antisymmetric thin film oscillation modes. At high energies (high acoustic pressure, thin shell) the shell becomes fully covered with high-amplitude waves. Temporal spectra of scattered light from the shell in this regime exhibit a power-law decay indicative of turbulence.

  6. Theta band oscillations reflect more than entrainment: behavioral and neural evidence demonstrates an active chunking process.

    PubMed

    Teng, Xiangbin; Tian, Xing; Doelling, Keith; Poeppel, David

    2017-10-17

    Parsing continuous acoustic streams into perceptual units is fundamental to auditory perception. Previous studies have uncovered a cortical entrainment mechanism in the delta and theta bands (~1-8 Hz) that correlates with formation of perceptual units in speech, music, and other quasi-rhythmic stimuli. Whether cortical oscillations in the delta-theta bands are passively entrained by regular acoustic patterns or play an active role in parsing the acoustic stream is debated. Here, we investigate cortical oscillations using novel stimuli with 1/f modulation spectra. These 1/f signals have no rhythmic structure but contain information over many timescales because of their broadband modulation characteristics. We chose 1/f modulation spectra with varying exponents of f, which simulate the dynamics of environmental noise, speech, vocalizations, and music. While undergoing magnetoencephalography (MEG) recording, participants listened to 1/f stimuli and detected embedded target tones. Tone detection performance varied across stimuli of different exponents and can be explained by local signal-to-noise ratio computed using a temporal window around 200 ms. Furthermore, theta band oscillations, surprisingly, were observed for all stimuli, but robust phase coherence was preferentially displayed by stimuli with exponents 1 and 1.5. We constructed an auditory processing model to quantify acoustic information on various timescales and correlated the model outputs with the neural results. We show that cortical oscillations reflect a chunking of segments, > 200 ms. These results suggest an active auditory segmentation mechanism, complementary to entrainment, operating on a timescale of ~200 ms to organize acoustic information. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
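
    One simple way to build a stimulus whose amplitude-modulation spectrum falls as 1/f^a is to shape a random envelope in the Fourier domain and impose it on a noise carrier. The sketch below shows that construction; the actual stimuli in the study may have been generated differently, and the exponent, duration, and carrier choices here are assumptions.

    ```python
    # Simplified sketch: noise carrier with a 1/f**a amplitude-modulation spectrum.
    import numpy as np

    def one_over_f_envelope(n, fs, exponent, rng):
        freqs = np.fft.rfftfreq(n, 1 / fs)
        spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
        spectrum[1:] /= freqs[1:] ** (exponent / 2)        # power ~ 1/f**exponent
        spectrum[0] = 0
        env = np.fft.irfft(spectrum, n)
        return (env - env.min()) / (env.max() - env.min())  # map to [0, 1]

    fs, dur = 16000, 4.0
    rng = np.random.default_rng(5)
    n = int(fs * dur)
    carrier = rng.normal(size=n)
    stimulus = carrier * one_over_f_envelope(n, fs, exponent=1.5, rng=rng)
    ```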

  7. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    NASA Astrophysics Data System (ADS)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids auditory parsing and functional representation of acoustic objects and was found to be a principal feature of pleasing auditory stimuli.

  8. Development of on-off spiking in superior paraolivary nucleus neurons of the mouse

    PubMed Central

    Felix, Richard A.; Vonderschen, Katrin; Berrebi, Albert S.

    2013-01-01

    The superior paraolivary nucleus (SPON) is a prominent cell group in the auditory brain stem that has been increasingly implicated in representing temporal sound structure. Although SPON neurons selectively respond to acoustic signals important for sound periodicity, the underlying physiological specializations enabling these responses are poorly understood. We used in vitro and in vivo recordings to investigate how SPON neurons develop intrinsic cellular properties that make them well suited for encoding temporal sound features. In addition to their hallmark rebound spiking at the stimulus offset, SPON neurons were characterized by spiking patterns termed onset, adapting, and burst in response to depolarizing stimuli in vitro. Cells with burst spiking had some morphological differences compared with other SPON neurons and were localized to the dorsolateral region of the nucleus. Both membrane and spiking properties underwent strong developmental regulation, becoming more temporally precise with age for both onset and offset spiking. Single-unit recordings obtained in young mice demonstrated that SPON neurons respond with temporally precise onset spiking upon tone stimulation in vivo, in addition to the typical offset spiking. Taken together, the results of the present study demonstrate that SPON neurons develop sharp on-off spiking, which may confer sensitivity to sound amplitude modulations or abrupt sound transients. These findings are consistent with the proposed involvement of the SPON in the processing of temporal sound structure, relevant for encoding communication cues. PMID:23515791

  9. Vocalization frequency and duration are coded in separate hindbrain nuclei.

    PubMed

    Chagnaud, Boris P; Baker, Robert; Bass, Andrew H

    2011-06-14

    Temporal patterning is an essential feature of neural networks producing precisely timed behaviours such as vocalizations that are widely used in vertebrate social communication. Here we show that intrinsic and network properties of separate hindbrain neuronal populations encode the natural call attributes of frequency and duration in vocal fish. Intracellular structure/function analyses indicate that call duration is encoded by a sustained membrane depolarization in vocal prepacemaker neurons that innervate downstream pacemaker neurons. Pacemaker neurons, in turn, encode call frequency by rhythmic, ultrafast oscillations in their membrane potential. Pharmacological manipulations show prepacemaker activity to be independent of pacemaker function, thus accounting for natural variation in duration which is the predominant feature distinguishing call types. Prepacemaker neurons also innervate key hindbrain auditory nuclei thereby effectively serving as a call-duration corollary discharge. We propose that premotor compartmentalization of neurons coding distinct acoustic attributes is a fundamental trait of hindbrain vocal pattern generators among vertebrates.

  10. Vocalization frequency and duration are coded in separate hindbrain nuclei

    PubMed Central

    Chagnaud, Boris P.; Baker, Robert; Bass, Andrew H.

    2011-01-01

    Temporal patterning is an essential feature of neural networks producing precisely timed behaviours such as vocalizations that are widely used in vertebrate social communication. Here we show that intrinsic and network properties of separate hindbrain neuronal populations encode the natural call attributes of frequency and duration in vocal fish. Intracellular structure/function analyses indicate that call duration is encoded by a sustained membrane depolarization in vocal prepacemaker neurons that innervate downstream pacemaker neurons. Pacemaker neurons, in turn, encode call frequency by rhythmic, ultrafast oscillations in their membrane potential. Pharmacological manipulations show prepacemaker activity to be independent of pacemaker function, thus accounting for natural variation in duration which is the predominant feature distinguishing call types. Prepacemaker neurons also innervate key hindbrain auditory nuclei thereby effectively serving as a call-duration corollary discharge. We propose that premotor compartmentalization of neurons coding distinct acoustic attributes is a fundamental trait of hindbrain vocal pattern generators among vertebrates. PMID:21673667

  11. Deep learning on temporal-spectral data for anomaly detection

    NASA Astrophysics Data System (ADS)

    Ma, King; Leung, Henry; Jalilian, Ehsan; Huang, Daniel

    2017-05-01

    Detecting anomalies is important for continuous monitoring of sensor systems. One significant challenge is to use sensor data and autonomously detect changes that cause different conditions to occur. Using deep learning methods, we are able to monitor and detect changes as a result of some disturbance in the system. We utilize deep neural networks for sequence analysis of time series. We use a multi-step method for anomaly detection. We train the network to learn spectral and temporal features from the acoustic time series. We test our method using fiber-optic acoustic data from a pipeline.
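
    The general recipe described above can be illustrated with a much smaller stand-in than a deep network: compute spectrogram frames from an acoustic time series, train an autoencoder on "normal" frames, and flag frames with high reconstruction error. The sketch below uses a shallow autoencoder and synthetic data purely for illustration; it is not the authors' architecture or dataset.

    ```python
    # Hedged sketch: spectral-temporal anomaly detection via autoencoder reconstruction error.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.neural_network import MLPRegressor

    fs = 8000
    rng = np.random.default_rng(6)
    normal = rng.normal(0, 1, fs * 30)                   # 30 s of baseline noise
    test = rng.normal(0, 1, fs * 10)
    test[4 * fs:5 * fs] += 3 * np.sin(2 * np.pi * 900 * np.arange(fs) / fs)  # injected event

    def frames(x):
        f, t, S = spectrogram(x, fs=fs, nperseg=256)
        return np.log1p(S).T                             # (time frames, freq bins)

    X_train, X_test = frames(normal), frames(test)
    ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    ae.fit(X_train, X_train)                             # autoencoder: reconstruct input

    train_err = np.mean((ae.predict(X_train) - X_train) ** 2, axis=1)
    threshold = train_err.mean() + 3 * train_err.std()
    err = np.mean((ae.predict(X_test) - X_test) ** 2, axis=1)
    print("anomalous frames:", np.where(err > threshold)[0])
    ```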

  12. Seasonal and Ontogenetic Changes in Movement Patterns of Sixgill Sharks

    PubMed Central

    Andrews, Kelly S.; Williams, Greg D.; Levin, Phillip S.

    2010-01-01

    Background Understanding movement patterns is fundamental to population and conservation biology. The way an animal moves through its environment influences the dynamics of local populations and will determine how susceptible it is to natural or anthropogenic perturbations. It is of particular interest to understand the patterns of movement for species which are susceptible to human activities (e.g. fishing), or that exert a large influence on community structure, such as sharks. Methodology/Principal Findings We monitored the patterns of movement of 34 sixgill sharks Hexanchus griseus using two large-scale acoustic arrays inside and outside Puget Sound, Washington, USA. Sixgill sharks were residents in Puget Sound for up to at least four years before making large movements out of the estuary. Within Puget Sound, sixgills inhabited sites for several weeks at a time and returned to the same sites annually. Across four years, sixgills had consistent seasonal movements in which they moved to the north from winter to spring and moved to the south from summer to fall. Just prior to leaving Puget Sound, sixgills altered their behavior and moved twice as fast among sites. Nineteen of the thirty-four sixgills were detected leaving Puget Sound for the outer coast. Three of these sharks returned to Puget Sound. Conclusions/Significance For most large marine predators, we have a limited understanding of how they move through their environment, and this clouds our ability to successfully manage their populations and their communities. With detailed movement information, such as that being uncovered with acoustic monitoring, we can begin to quantify the spatial and temporal impacts of large predators within the framework of their ecosystems. PMID:20838617
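
    Rates of movement like those reported above are derived from successive receiver detections. The sketch below shows one minimal way to compute them from a detection log; the column names, coordinates, and data are hypothetical.

    ```python
    # Minimal sketch: movement rate (km per day) between successive detections, per shark.
    import numpy as np
    import pandas as pd

    det = pd.DataFrame({
        "shark_id": [1, 1, 1, 2, 2],
        "datetime": pd.to_datetime(["2008-05-01 04:00", "2008-05-03 10:00",
                                    "2008-05-09 22:00", "2008-06-01 00:00",
                                    "2008-06-02 12:00"]),
        "receiver_x_km": [0.0, 4.0, 12.0, 3.0, 3.5],     # projected receiver coordinates
        "receiver_y_km": [0.0, 1.0, 6.0, 2.0, 8.0],
    }).sort_values(["shark_id", "datetime"])

    def movement_rates(g):
        dx = g["receiver_x_km"].diff()
        dy = g["receiver_y_km"].diff()
        dt_days = g["datetime"].diff().dt.total_seconds() / 86400
        return np.hypot(dx, dy) / dt_days                # km per day between detections

    print(det.groupby("shark_id", group_keys=False).apply(movement_rates))
    ```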

  13. Multimodal far-field acoustic radiation pattern: An approximate equation

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1977-01-01

    The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.

  14. Experimental demonstration of topologically protected efficient sound propagation in an acoustic waveguide network

    NASA Astrophysics Data System (ADS)

    Wei, Qi; Tian, Ye; Zuo, Shu-Yu; Cheng, Ying; Liu, Xiao-Jun

    2017-03-01

    Acoustic topological states support one-way sound propagation along a boundary with inherent robustness against defects and disorder, promising a revolution in the manipulation of acoustic waves. A variety of acoustic topological states relying on circulating fluid, chiral coupling, or temporal modulation have been proposed theoretically. However, experimental demonstration has so far remained a significant challenge, owing to critical limitations such as structural complexity and high losses. Here, we experimentally demonstrate an acoustic anomalous Floquet topological insulator in a waveguide network. Acoustic gapless edge states appear in the band gap when the waveguides are strongly coupled. The scheme features a simple structure and high energy throughput, enabling the experimental demonstration of efficient and robust topologically protected sound propagation along the boundary. The approach may find promising applications in the design of acoustic devices for guiding, switching, isolating, and filtering sound.

  15. Recurring patterns in the songs of humpback whales (Megaptera novaeangliae).

    PubMed

    Green, Sean R; Mercado, Eduardo; Pack, Adam A; Herman, Louis M

    2011-02-01

    Humpback whales, unlike most mammalian species, learn new songs as adults. Populations of singers progressively and collectively change the sounds and patterns within their songs throughout their lives and across generations. In this study, humpback whale songs recorded in Hawaii from 1985 to 1995 were analyzed using self-organizing maps (SOMs) to classify the sounds within songs, and to identify sound patterns that were present across multiple years. These analyses supported the hypothesis that recurring, persistent patterns exist within whale songs, and that these patterns are defined at least in part by acoustic relationships between adjacent sounds within songs. Sound classification based on acoustic differences between adjacent sounds yielded patterns within songs that were more consistent from year to year than classifications based on the properties of single sounds. Maintenance of fixed ratios of acoustic modulation across sounds, despite large variations in individual sounds, suggests intrinsic constraints on how sounds change within songs. Such acoustically invariant cues may enable whales to recognize and assess variations in songs despite propagation-related distortion of individual sounds and yearly changes in songs. Copyright © 2011 Elsevier B.V. All rights reserved.
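
    The self-organizing map step named above can be sketched compactly: each sound unit becomes a feature vector, and training pulls the nodes of a 2-D grid toward the data so that nearby nodes come to represent acoustically similar sounds. The tiny NumPy implementation below is only an illustration of the SOM idea; the feature dimensions, grid size, schedules, and data are invented, not the study's.

    ```python
    # Tiny self-organizing map (SOM) for classifying sound-unit feature vectors.
    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.normal(size=(400, 3))                  # hypothetical per-sound feature vectors

    grid = 6                                       # 6 x 6 map
    W = rng.normal(size=(grid, grid, X.shape[1]))  # node weight vectors
    gi, gj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

    for step, x in enumerate(X[rng.permutation(len(X))]):
        lr = 0.5 * np.exp(-step / 200)                       # decaying learning rate
        sigma = 2.0 * np.exp(-step / 200)                    # decaying neighbourhood width
        d = np.linalg.norm(W - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)     # best-matching unit
        h = np.exp(-((gi - bi) ** 2 + (gj - bj) ** 2) / (2 * sigma ** 2))
        W += lr * h[..., None] * (x - W)

    # classify a new sound by its best-matching node on the map
    x_new = rng.normal(size=3)
    print(np.unravel_index(np.argmin(np.linalg.norm(W - x_new, axis=2)), (grid, grid)))
    ```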

  16. Hydrodynamic Model of Spatio-Temporal Evolution of Two-Plasmon Decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimitrijevic, D. R.; Maluckov, A. A.

    A hydrodynamic model of two-plasmon decay in a homogeneous plasma slab near the quarter-critical density is constructed in order to gain better insight into the spatio-temporal evolution of the daughter electron plasma waves in plasma in the course of the instability. The influence of laser and plasma parameters on the evolution of the amplitudes of the participating waves is discussed. The secondary coupling of two daughter electron plasma waves with an ion-acoustic wave is assumed to be the principal mechanism of saturation of the instability. The impact of the inherently nonresonant nature of this secondary coupling on the development of TPD is investigated and it is shown to significantly influence the electron plasma wave dynamics. Its inclusion leads to nonuniformity of the spatial profile of the instability and causes the burst-like pattern of the instability development, which should result in burst-like hot-electron production in homogeneous plasma.

  17. Changes in the Response Properties of Inferior Colliculus Neurons Relating to Tinnitus

    PubMed Central

    Berger, Joel I.; Coomber, Ben; Wells, Tobias T.; Wallace, Mark N.; Palmer, Alan R.

    2014-01-01

    Tinnitus is often identified in animal models by using the gap prepulse inhibition of acoustic startle. Impaired gap detection following acoustic over-exposure (AOE) is thought to be caused by tinnitus “filling in” the gap, thus, reducing its salience. This presumably involves altered perception, and could conceivably be caused by changes at the level of the neocortex, i.e., cortical reorganization. Alternatively, reduced gap detection ability might reflect poorer temporal processing in the brainstem, caused by AOE; in which case, impaired gap detection would not be a reliable indicator of tinnitus. We tested the latter hypothesis by examining gap detection in inferior colliculus (IC) neurons following AOE. Seven of nine unilaterally noise-exposed guinea pigs exhibited behavioral evidence of tinnitus. In these tinnitus animals, neural gap detection thresholds (GDTs) in the IC significantly increased in response to broadband noise stimuli, but not to pure tones or narrow-band noise. In addition, when IC neurons were sub-divided according to temporal response profile (onset vs. sustained firing patterns), we found a significant increase in the proportion of onset-type responses after AOE. Importantly, however, GDTs were still considerably shorter than gap durations commonly used in objective behavioral tests for tinnitus. These data indicate that the neural changes observed in the IC are insufficient to explain deficits in behavioral gap detection that are commonly attributed to tinnitus. The subtle changes in IC neuron response profiles following AOE warrant further investigation. PMID:25346722

  18. Tunable Nanowire Patterning Using Standing Surface Acoustic Waves

    PubMed Central

    Chen, Yuchao; Ding, Xiaoyun; Lin, Sz-Chin Steven; Yang, Shikuan; Huang, Po-Hsun; Nama, Nitesh; Zhao, Yanhui; Nawaz, Ahmad Ahsan; Guo, Feng; Wang, Wei; Gu, Yeyi; Mallouk, Thomas E.; Huang, Tony Jun

    2014-01-01

    Patterning of nanowires in a controllable, tunable manner is important for the fabrication of functional nanodevices. Here we present a simple approach for tunable nanowire patterning using standing surface acoustic waves (SSAW). This technique allows for the construction of large-scale nanowire arrays with well-controlled patterning geometry and spacing within 5 seconds. In this approach, SSAWs were generated by interdigital transducers (IDTs), which induced a periodic alternating current (AC) electric field on the piezoelectric substrate and consequently patterned metallic nanowires in suspension. The patterns could be deposited onto the substrate after the liquid evaporated. By controlling the distribution of the SSAW field, metallic nanowires were assembled into different patterns including parallel and perpendicular arrays. The spacing of the nanowire arrays could be tuned by controlling the frequency of the surface acoustic waves. Additionally, we observed 3D spark-shape nanowire patterns in the SSAW field. The SSAW-based nanowire-patterning technique presented here possesses several advantages over alternative patterning approaches, including high versatility, tunability, and efficiency, making it promising for device applications. PMID:23540330

  19. Experimental Verification of Modeled Thermal Distribution Produced by a Piston Source in Physiotherapy Ultrasound

    PubMed Central

    Lopez-Haro, S. A.; Leija, L.

    2016-01-01

    Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device and to show the dependency among thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the produced thermal pattern to be compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in muscle phantom. The insertion place of thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field having different temperature profiles (errors 10% to 20%). Experimental field was concentrated near the transducer producing a region with higher temperatures, while the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when introducing the measured acoustic field as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to the acoustic field distributions. PMID:27999801
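
    The dependence of the thermal pattern on the acoustic intensity distribution can be illustrated with a much cruder model than the finite-element one used above: a 1-D heat-diffusion sketch in which an attenuated plane-wave intensity profile heats tissue through absorption (Q = 2*alpha*I). All material values, the intensity, and the boundary handling below are assumptions for illustration only.

    ```python
    # Rough 1-D finite-difference sketch: acoustic absorption heating plus heat diffusion.
    import numpy as np

    nz, dz = 200, 5e-4                  # 10 cm depth, 0.5 mm grid
    z = np.arange(nz) * dz
    alpha = 8.0                         # acoustic amplitude absorption (Np/m), assumed
    I0 = 1e4                            # surface intensity (W/m^2), assumed
    I = I0 * np.exp(-2 * alpha * z)     # attenuated plane-wave intensity
    Q = 2 * alpha * I                   # volumetric heating (W/m^3)

    rho, c, k = 1050.0, 3600.0, 0.5     # tissue density, heat capacity, conductivity
    dt = 0.05                           # s, satisfies the explicit stability limit here
    T = np.full(nz, 37.0)
    for _ in range(int(60 / dt)):       # 60 s of insonation
        lap = np.zeros(nz)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2   # ends left untouched (crude BC)
        T += dt * (k * lap + Q) / (rho * c)

    print("peak temperature rise:", T.max() - 37.0, "deg C at depth", z[T.argmax()], "m")
    ```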

  20. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  1. Acoustic measurement of suspensions of clay and silt particles using single frequency attenuation and backscatter

    USDA-ARS?s Scientific Manuscript database

    The use of ultrasonic acoustic technology to measure the concentration of fine suspended sediments has the potential to greatly increase the temporal and spatial resolution of sediment measurements while reducing the need for personnel to be present at gauging stations during storm events. The conv...

  2. Pre-earthquake signatures in atmosphere/ionosphere and their potential for short-term earthquake forecasting. Case studies for 2015

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Davidenko, Dmitry; Hernández-Pajares, Manuel; García-Rigo, Alberto; Petrrov, Leonid; Hatzopoulos, Nikolaos; Kafatos, Menas

    2016-04-01

    We are conducting validation studies on the temporal-spatial pattern of pre-earthquake signatures in the atmosphere and ionosphere associated with M>7 earthquakes in 2015. Our approach is based on the Lithosphere Atmosphere Ionosphere Coupling (LAIC) physical concept integrated with multi-sensor-networking analysis (MSNA) of several non-correlated observations that can potentially yield predictive information. In this study we present two types of results: (1) prospective testing of MSNA-LAIC for M7+ events in 2015 and (2) retrospective analysis of temporal-spatial variations in the atmosphere and ionosphere several days before the M7.8 and M7.3 Nepal and M8.3 Chile earthquakes. During the prospective test, 18 earthquakes of M>7 occurred worldwide, of which 15 were alerted in advance with lead times of 2 to 30 days and with different levels of accuracy. The retrospective analysis included different physical parameters from space: outgoing long-wavelength radiation (OLR, obtained from NPOES, NASA/AQUA) at the top of the atmosphere, atmospheric potential (ACP, obtained from NASA assimilation models), and electron density variations in the ionosphere via GPS Total Electron Content (GPS/TEC). Concerning the M7.8 Nepal earthquake of April 24, a rapid increase of OLR reached its maximum on April 21-22. GPS/TEC data indicate maximum values during the April 22-24 period. A strong negative TEC anomaly was detected in the crest of the Equatorial Ionospheric Anomaly (EIA) on April 21st and a strong positive one on April 24th, 2015. For the May 12 M7.3 aftershock, similar pre-earthquake patterns in OLR and GPS/TEC were observed. Concerning the M8.3 Chile earthquake of Sept 16, the strongest OLR transient feature was observed on Sept 12. GPS/TEC analysis data confirm abnormal values on Sept 14. Also on the same day, degradation and disappearance of the crests of the EIA, as is characteristic of pre-dawn and early morning hours (11 LT), were observed. On Sept 16, co-seismic ionospheric signatures consistent with a circular acoustic-gravity wave and different shock-acoustic waves were also observed. The spatial characteristics of pre-earthquake transient behavior in the atmosphere and ionosphere were associated with a large area, but one inside the preparation region estimated by the Dobrovolsky ratio. Our analysis of simultaneous space measurements associated with the 2015 M>7 earthquakes suggests that they follow a general temporal-spatial evolution pattern that has been seen in other large earthquakes worldwide.

  3. Temporal selectivity by single neurons in the torus semicircularis of Batrachyla antartandica (Amphibia: Leptodactylidae).

    PubMed

    Penna, M; Lin, W Y; Feng, A S

    2001-12-01

    We investigated the response selectivities of single auditory neurons in the torus semicircularis of Batrachyla antartandica (a leptodactylid from southern Chile) to synthetic stimuli having diverse temporal structures. The advertisement call for this species is characterized by a long sequence of brief sound pulses having a dominant frequency of about 2000 Hz. We constructed five different series of synthetic stimuli in which the following acoustic parameters were systematically modified, one at a time: pulse rate, pulse duration, pulse rise time, pulse fall time, and train duration. The carrier frequency of these stimuli was fixed at the characteristic frequency of the units under study (n=44). Response patterns of TS units to these synthetic call variants revealed different degrees of selectivity for each of the temporal variables. A substantial number of neurons showed preference for pulse rates below 2 pulses s(-1), approximating the values found in natural advertisement calls. Tonic neurons generally showed preferences for long pulse durations, long rise and fall times, and long train durations. In contrast, phasic and phasic-burst neurons preferred stimuli with short duration, short rise and fall times and short train durations.

  4. Residency Patterns and Migration Dynamics of Adult Bull Sharks (Carcharhinus leucas) on the East Coast of Southern Africa

    PubMed Central

    Daly, Ryan; Smale, Malcolm J.; Cowley, Paul D.; Froneman, Pierre W.

    2014-01-01

    Bull sharks (Carcharhinus leucas) are globally distributed top predators that play an important ecological role within coastal marine communities. However, little is known about the spatial and temporal scales of their habitat use and associated ecological role. In this study, we employed passive acoustic telemetry to investigate the residency patterns and migration dynamics of 18 adult bull sharks (195–283 cm total length) tagged in southern Mozambique for a period of between 10 and 22 months. The majority of sharks (n = 16) exhibited temporally and spatially variable residency patterns interspersed with migration events. Ten individuals undertook coastal migrations that ranged between 433 and 709 km (mean  = 533 km) with eight of these sharks returning to the study site. During migration, individuals exhibited rates of movement between 2 and 59 km.d−1 (mean  = 17.58 km.d−1) and were recorded travelling annual distances of between 450 and 3760 km (mean  = 1163 km). Migration towards lower latitudes primarily took place in austral spring and winter and there was a significant negative correlation between residency and mean monthly sea temperature at the study site. This suggested that seasonal change is the primary driver behind migration events but further investigation is required to assess how foraging and reproductive activity may influence residency patterns and migration. Results from this study highlight the need for further understanding of bull shark migration dynamics and suggest that effective conservation strategies for this vulnerable species necessitate the incorporation of congruent trans-boundary policies over large spatial scales. PMID:25295972

  5. Active chiral control of GHz acoustic whispering-gallery modes

    NASA Astrophysics Data System (ADS)

    Mezil, Sylvain; Fujita, Kentaro; Otsuka, Paul H.; Tomoda, Motonobu; Clark, Matt; Wright, Oliver B.; Matsuda, Osamu

    2017-10-01

    We selectively generate chiral surface-acoustic whispering-gallery modes in the gigahertz range on a microscopic disk by means of an ultrafast time-domain technique incorporating a spatial light modulator. Active chiral control is achieved by making use of an optical pump spatial profile in the form of a semicircular arc, positioned on the sample to break the symmetry of clockwise- and counterclockwise-propagating modes. Spatiotemporal Fourier transforms of the interferometrically monitored two-dimensional acoustic fields measured to micron resolution allow individual chiral modes and their azimuthal mode order, both positive and negative, to be distinguished. In particular, for modes with 15-fold rotational symmetry, we demonstrate ultrafast chiral control of surface acoustic waves in a micro-acoustic system with picosecond temporal resolution. Applications include nondestructive testing and surface acoustic wave devices.
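
    For readers unfamiliar with the spatiotemporal Fourier step described above, the following sketch separates the two rotation senses of an azimuthal mode by the sign of its mode order. The field, sampling, and mode parameters are synthetic assumptions for illustration, not the authors' measured data or code.

```python
# Hypothetical sketch: separating the two rotation senses of an azimuthal mode
# with a spatio-temporal FFT.  u(theta, t) below is synthetic, not measured data.
import numpy as np

n_theta, n_t = 180, 2000
theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
t = np.arange(n_t) * 1e-12            # 1 ps sampling (assumed), GHz-range modes

m, f0 = 15, 1.0e9                     # azimuthal order 15, ~1 GHz mode (assumed)
# Strong +theta-travelling mode plus a weaker counter-rotating one.
u = (np.cos(m * theta[:, None] - 2 * np.pi * f0 * t[None, :]) +
     0.3 * np.cos(m * theta[:, None] + 2 * np.pi * f0 * t[None, :]))

U = np.fft.fft2(u)                    # axis 0 -> azimuthal order, axis 1 -> frequency
orders = np.fft.fftfreq(n_theta, d=1.0 / n_theta)    # signed integer mode orders
freqs = np.fft.fftfreq(n_t, d=1e-12)

# With numpy's forward-FFT sign convention, the +theta-travelling component
# cos(m*theta - w*t) appears at (order = -m, f = +f0) once only positive temporal
# frequencies are kept; the opposite rotation sense appears at (order = +m, f = +f0).
power = np.abs(U[:, freqs > 0]) ** 2
ratio = power[orders < 0].sum() / power[orders > 0].sum()
print(f"energy ratio of the two rotation senses ≈ {ratio:.1f}")   # ≈ (1/0.3)^2
```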

  6. Spontaneous tempo and rhythmic entrainment in a bonobo (Pan paniscus).

    PubMed

    Large, Edward W; Gray, Patricia M

    2015-11-01

    The emergence of speech and music in the human species represent major evolutionary transitions that enabled the use of complex, temporally structured acoustic signals to coordinate social interaction. While the fundamental capacity for temporal coordination with complex acoustic signals has been shown in a few distantly related species, the extent to which nonhuman primates exhibit sensitivity to auditory rhythms remains controversial. In Experiment 1, we assessed spontaneous motor tempo and tempo matching in a bonobo (Pan paniscus), in the context of a social drumming interaction. In Experiment 2, the bonobo spontaneously entrained and synchronized her drum strikes within a range around her spontaneous motor tempo. Our results are consistent with the hypothesis that the evolution of acoustic communication builds upon fundamental neurodynamic mechanisms that can be found in a wide range of species, and are recruited for social interactions. (c) 2015 APA, all rights reserved.

  7. Heard Island and McDonald Islands Acoustic Plumes: Split-beam Echo sounder and Deep Tow Camera Observations of Gas Seeps on the Central Kerguelen Plateau

    NASA Astrophysics Data System (ADS)

    Watson, S. J.; Spain, E. A.; Coffin, M. F.; Whittaker, J. M.; Fox, J. M.; Bowie, A. R.

    2016-12-01

    Heard and McDonald islands (HIMI) are two active volcanic edifices on the Central Kerguelen Plateau. Scientists aboard the Heard Earth-Ocean-Biosphere Interactions voyage in early 2016 explored how this volcanic activity manifests itself near HIMI. Using Simrad EK60 split-beam echo sounder and deep tow camera data from RV Investigator, we recorded the distribution of seafloor emissions, providing the first direct evidence of seabed discharge around HIMI, mapping >244 acoustic plume signals. Northeast of Heard, three distinct plume clusters are associated with bubbles (towed camera) and the largest directly overlies a sub-seafloor opaque zone (sub-bottom profiler) with >140 zones observed within 6.5 km. Large temperature anomalies did not characterize any of the acoustic plumes where temperature data were recorded. We therefore suggest that these plumes are cold methane seeps. Acoustic properties - mean volume backscattering and target strength - and morphology - height, width, depth to surface - of plumes around McDonald resembled those northeast of Heard, also suggesting gas bubbles. We observed no bubbles on extremely limited towed camera data around McDonald; however, visibility was poor. The acoustic response of the plumes at different frequencies (120 kHz vs. 18 kHz), a technique used to classify water column scatterers, differed between HIMI, suggesting dissimilar target size (bubble radii) distributions. Environmental context and temporal characteristics of the plumes differed between HIMI. Heard plumes were concentrated on flat, sediment-rich plains, whereas around McDonald plumes emanated from sea knolls and mounds with hard volcanic seafloor. The Heard plumes were consistent temporally, while the McDonald plumes varied temporally, possibly related to tides or subsurface processes. Our data and analyses suggest that HIMI acoustic plumes were likely caused by gas bubbles; however, the bubbles may originate from two or more distinct processes.
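
    The frequency comparison mentioned above (120 kHz vs. 18 kHz) is often summarized as a dB difference in mean volume backscattering strength. The sketch below illustrates that bookkeeping; the Sv samples and the classification threshold are fabricated assumptions, not values from this survey.

```python
# Illustrative dB-difference calculation for one detected plume region.
# Sv samples (dB re 1 m^-1) and the -6 dB threshold are assumptions.
import numpy as np

def mean_sv_db(sv_db_samples):
    """Average volume backscattering strength in the linear domain, return dB."""
    linear = 10.0 ** (np.asarray(sv_db_samples) / 10.0)
    return 10.0 * np.log10(linear.mean())

sv18 = [-52.1, -50.4, -55.3, -49.8]    # example Sv samples at 18 kHz
sv120 = [-61.0, -63.5, -60.2, -64.1]   # example Sv samples at 120 kHz

delta_sv = mean_sv_db(sv120) - mean_sv_db(sv18)
# Gas bubbles near resonance scatter much more strongly at the lower frequency,
# so a strongly negative Sv(120 kHz) - Sv(18 kHz) is consistent with bubbles.
label = "bubble-like" if delta_sv < -6.0 else "other scatterer"
print(f"dB difference = {delta_sv:.1f} dB -> {label}")
```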

  8. Dimensional analysis of acoustically propagated signals

    NASA Technical Reports Server (NTRS)

    Hansen, Scott D.; Thomson, Dennis W.

    1993-01-01

    Traditionally, long term measurements of atmospherically propagated sound signals have consisted of time series of multiminute averages. Only recently have continuous measurements with temporal resolution corresponding to turbulent time scales been available. With modern digital data acquisition systems we now have the capability to simultaneously record both acoustical and meteorological parameters with sufficient temporal resolution to allow us to examine in detail relationships between fluctuating sound and the meteorological variables, particularly wind and temperature, which locally determine the acoustic refractive index. The atmospheric acoustic propagation medium can be treated as a nonlinear dynamical system, a kind of signal processor whose innards depend on thermodynamic and turbulent processes in the atmosphere. The atmosphere is an inherently nonlinear dynamical system. In fact one simple model of atmospheric convection, the Lorenz system, may well be the most widely studied of all dynamical systems. In this paper we report some results of our having applied methods used to characterize nonlinear dynamical systems to study the characteristics of acoustical signals propagated through the atmosphere. For example, we investigate whether or not it is possible to parameterize signal fluctuations in terms of fractal dimensions. For time series one such parameter is the limit capacity dimension. Nicolis and Nicolis were among the first to use the kind of methods we have to study the properties of low dimension global attractors.
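
    The abstract above asks whether signal fluctuations can be parameterized by fractal dimensions such as the limit capacity dimension, and names the Lorenz system as a familiar dynamical-systems example. The sketch below box-counts a simulated Lorenz attractor to illustrate the kind of estimate involved; the integration settings and box sizes are assumptions, not the authors' processing.

```python
# Minimal sketch: box-counting ("capacity") dimension estimate for the Lorenz
# attractor, the convection model named above.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 400, 80000)
sol = solve_ivp(lorenz, (0, 400), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-6)
points = sol.y.T[5000:]                      # drop the initial transient

def box_counting_dimension(points, eps_list):
    """Slope of log N(eps) versus log(1/eps) over the supplied box sizes."""
    pts = (points - points.min(0)) / (points.max(0) - points.min(0))
    counts = [len(np.unique(np.floor(pts / eps).astype(int), axis=0))
              for eps in eps_list]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope

d_est = box_counting_dimension(points, eps_list=[0.2, 0.1, 0.05, 0.025])
print(f"estimated capacity dimension ≈ {d_est:.2f}")   # near 2 for the Lorenz attractor
```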

  9. Long-term noise statistics from the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Eller, Anthony I.; Ioup, George E.; Ioup, Juliette W.; Larue, James P.

    2003-04-01

    Long-term, omnidirectional acoustic noise measurements were conducted in the northeastern Gulf of Mexico during the summer of 2001. These efforts were a part of the Littoral Acoustic Demonstration Center project, Phase I. Initial looks at the noise time series, processed in standard one-third-octave bands from 10 to 5000 Hz, show noise levels that differ substantially from customary deep-water noise spectra. Contributing factors to this highly dynamic noise environment are an abundance of marine mammal emissions and various industrial noises. Results presented here address long-term temporal variability, temporal coherence times, the fluctuation spectrum, and coherence of fluctuations across the frequency spectrum. [Research supported by ONR.]
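
    The one-third-octave band processing mentioned above can be sketched as band-integrated power from a spectral estimate. The example below uses synthetic noise, an assumed sampling rate, and simplified band edges; it is not the project's actual processing chain.

```python
# Sketch of one-third-octave band levels (roughly 10 Hz - 5 kHz) from a noise record.
import numpy as np
from scipy.signal import welch

fs = 12000                                           # sampling rate (Hz), assumed
x = np.random.default_rng(0).normal(size=60 * fs)    # 60 s of stand-in noise

f, pxx = welch(x, fs=fs, nperseg=8192)               # power spectral density
df = f[1] - f[0]

# Nominal band centers f_c = 1000 * 2**(n/3) spanning roughly 10 Hz to 5 kHz.
centers = 1000.0 * 2.0 ** (np.arange(-20, 8) / 3.0)
for fc in centers:
    f1, f2 = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)    # lower/upper band edges
    band = (f >= f1) & (f < f2)
    level_db = 10.0 * np.log10(np.sum(pxx[band]) * df)   # band level, dB re full scale
    print(f"{fc:7.1f} Hz : {level_db:6.1f} dB")
```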

  10. Enhancement of temporal periodicity cues in cochlear implants: Effects on prosodic perception and vowel identification

    NASA Astrophysics Data System (ADS)

    Green, Tim; Faulkner, Andrew; Rosen, Stuart; Macherey, Olivier

    2005-07-01

    Standard continuous interleaved sampling processing, and a modified processing strategy designed to enhance temporal cues to voice pitch, were compared on tests of intonation perception, and vowel perception, both in implant users and in acoustic simulations. In standard processing, 400 Hz low-pass envelopes modulated either pulse trains (implant users) or noise carriers (simulations). In the modified strategy, slow-rate envelope modulations, which convey dynamic spectral variation crucial for speech understanding, were extracted by low-pass filtering (32 Hz). In addition, during voiced speech, higher-rate temporal modulation in each channel was provided by 100% amplitude-modulation by a sawtooth-like wave form whose periodicity followed the fundamental frequency (F0) of the input. Channel levels were determined by the product of the lower- and higher-rate modulation components. Both in acoustic simulations and in implant users, the ability to use intonation information to identify sentences as question or statement was significantly better with modified processing. However, while there was no difference in vowel recognition in the acoustic simulation, implant users performed worse with modified processing both in vowel recognition and in formant frequency discrimination. It appears that, while enhancing pitch perception, modified processing harmed the transmission of spectral information.
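
    A single-channel sketch of the modified envelope construction described above is given below: a slow (32 Hz low-pass) envelope is multiplied by a 100% amplitude-modulated, sawtooth-like waveform at the input F0. The signal, filter order, and F0 value are illustrative assumptions, not the authors' implementation.

```python
# One channel of the modified strategy: slow envelope x F0-rate sawtooth modulator.
import numpy as np
from scipy.signal import butter, sosfiltfilt, sawtooth, hilbert

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
f0 = 120.0                                        # assumed fundamental frequency (Hz)
channel = np.sin(2 * np.pi * 1000 * t) * (0.6 + 0.4 * np.sin(2 * np.pi * 3 * t))

# Slow-rate envelope: rectify and low-pass at 32 Hz (dynamic spectral cue).
env = np.abs(hilbert(channel))
sos = butter(2, 32.0, btype="low", fs=fs, output="sos")
slow_env = np.clip(sosfiltfilt(sos, env), 0.0, None)

# Higher-rate periodicity cue: 100% modulation by a sawtooth-like waveform at F0
# (in the real scheme this is applied only during voiced speech).
modulator = 0.5 * (1.0 + sawtooth(2 * np.pi * f0 * t))

channel_level = slow_env * modulator              # product of the two components
print(channel_level[:5])
```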

  11. Temporal and spatial variation of beaked and sperm whales foraging activity in Hawai'i, as determined with passive acoustics.

    PubMed

    Giorli, Giacomo; Neuheimer, Anna; Copeland, Adrienne; Au, Whitlow W L

    2016-10-01

    Beaked and sperm whales are top predators living in the waters off the Kona coast of Hawai'i. Temporal and spatial patterns of the foraging activity of these two species were studied with passive acoustic techniques. Three passive acoustic recorders moored to the ocean floor were used to monitor the foraging activity of these whales in three locations along the Kona coast of the island of Hawaii. Data were analyzed using automatic detector/classification systems: M3R (Marine Mammal Monitoring on Navy Ranges) and custom-designed Matlab programs. The temporal variation in foraging activity was species-specific: beaked whales foraged more at night in the north, and more during the day-time off Kailua-Kona. No day-time/night-time preference was found in the southern end of the sampling range. Sperm whales foraged mainly at night in the north, but no day-time/night-time preference was observed off Kailua-Kona and in the south. A Generalized Linear Model was then applied to assess whether location and chlorophyll concentration affected the foraging activity of each species. Chlorophyll concentration and location influenced the foraging activity of both these species of deep-diving odontocetes.
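
    The Generalized Linear Model step described above might resemble the sketch below, which relates daily detection counts to site and chlorophyll concentration. The data are fabricated and the Poisson family is an assumption; the study does not specify its exact model structure here.

```python
# Hedged sketch of a GLM of detection counts on site and chlorophyll concentration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 90
df = pd.DataFrame({
    "site": rng.choice(["north", "kailua_kona", "south"], size=n),
    "chl": rng.gamma(shape=2.0, scale=0.05, size=n),   # chlorophyll (mg m^-3), synthetic
})
rate = np.exp(1.0 + 3.0 * df["chl"] + 0.4 * (df["site"] == "north"))
df["detections"] = rng.poisson(rate)                   # daily click-train counts, synthetic

model = smf.glm("detections ~ chl + C(site)", data=df, family=sm.families.Poisson())
print(model.fit().summary())
```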

  12. Temporal and acoustic characteristics of Greek vowels produced by adults with cerebral palsy

    NASA Astrophysics Data System (ADS)

    Botinis, Antonis; Orfanidou, Ioanna; Fourakis, Marios

    2005-09-01

    The present investigation examined the temporal and spectral characteristics of Greek vowels as produced by speakers with intact (NO) versus cerebral palsy affected (CP) neuromuscular systems. Six NO and six CP native speakers of Greek produced the Greek vowels [i, e, a, o, u] in the first syllable of CVCV nonsense words in a short carrier phrase. Stress could be on either the first or second syllable. There were three female and three male speakers in each group. In terms of temporal characteristics, the results showed that: vowels produced by CP speakers were longer than vowels produced by NO speakers; stressed vowels were longer than unstressed vowels; vowels produced by female speakers were longer than vowels produced by male speakers. In terms of spectral characteristics the results showed that the vowel space of the CP speakers was smaller than that of the NO speakers. This is similar to the results recently reported by Liu et al. [J. Acoust. Soc. Am. 117, 3879-3889 (2005)] for CP speakers of Mandarin. There was also a reduction of the acoustic vowel space defined by unstressed vowels, but this reduction was much more pronounced in the vowel productions of CP speakers than NO speakers.
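
    The acoustic vowel space comparison described above is commonly quantified as the area of the polygon spanned by mean formant values. The sketch below uses the shoelace formula with fabricated F1/F2 means; the values are not from this study.

```python
# Vowel space area from mean (F2, F1) coordinates of [i, e, a, o, u].
import numpy as np

def polygon_area(points):
    """Shoelace formula; points must be ordered around the polygon boundary."""
    x, y = np.asarray(points, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Fabricated mean formants in Hz, ordered around the vowel polygon.
vowel_means = {
    "i": (2300, 300), "e": (2000, 450), "a": (1300, 750),
    "o": (900, 500), "u": (800, 320),
}
area = polygon_area(list(vowel_means.values()))
print(f"vowel space area ≈ {area:.0f} Hz^2")   # smaller areas indicate a reduced vowel space
```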

  13. Signal interactions and interference in insect choruses: singing and listening in the social environment.

    PubMed

    Greenfield, Michael D

    2015-01-01

    Acoustic insects usually sing amidst conspecifics, thereby creating a social environment-the chorus-in which individuals communicate, find mates, and avoid predation. A temporal structure may arise in a chorus because of competitive and cooperative factors that favor certain signal interactions between neighbors. This temporal structure can generate significant acoustic interference among singers that pose problems for communication, mate finding, and predator detection. Acoustic insects can reduce interference by means of selective attention to only their nearest neighbors and by alternating calls with neighbors. Alternatively, they may synchronize, allowing them to preserve call rhythm and also to listen for predators during the silent intervals between calls. Moreover, males singing in choruses may benefit from reduced per capita predation risk as well as enhanced vigilance. They may also enjoy greater per capita attractiveness to females, particularly in the case of synchronous choruses. In many cases, however, the overall temporal structure of the chorus is only an emergent property of simple, pairwise interactions between neighbors. Nonetheless, the chorus that emerges can impose significant selection pressure on the singing of those individual males. Thus, feedback loops may occur and potentially influence traits at both individual and group levels in a chorus.

  14. Streaming and particle motion in acoustically-actuated leaky systems

    NASA Astrophysics Data System (ADS)

    Nama, Nitesh; Barnkob, Rune; Jun Huang, Tony; Kahler, Christian; Costanzo, Francesco

    2017-11-01

    The integration of acoustics with microfluidics has shown great promise for applications within biology, chemistry, and medicine. A commonly employed system to achieve this integration consists of a fluid-filled, polymer-walled microchannel that is acoustically actuated via standing surface acoustic waves. However, despite significant experimental advancements, the precise physical understanding of such systems remains a work in progress. In this work, we investigate the nature of acoustic fields that are setup inside the microchannel as well as the fundamental driving mechanism governing the fluid and particle motion in these systems. We provide an experimental benchmark using state-of-art 3D measurements of fluid and particle motion and present a Lagrangian velocity based temporal multiscale numerical framework to explain the experimental observations. Following verification and validation, we employ our numerical model to reveal the presence of a pseudo-standing acoustic wave that drives the acoustic streaming and particle motion in these systems.

  15. Temporal and spatial mapping of red grouper Epinephelus morio sound production.

    PubMed

    Wall, C C; Simard, P; Lindemuth, M; Lembke, C; Naar, D F; Hu, C; Barnes, B B; Muller-Karger, F E; Mann, D A

    2014-11-01

    The goals of this project were to determine the daily, seasonal and spatial patterns of red grouper Epinephelus morio sound production on the West Florida Shelf (WFS) using passive acoustics. An 11-month time series of acoustic data from fixed recorders deployed at a known E. morio aggregation site showed that E. morio produce sounds throughout the day and during all months of the year. Increased calling (number of files containing E. morio sound) was correlated with sunrise and sunset, and peaked in late summer (July and August) and early winter (November and December). Due to the ubiquitous production of sound, large-scale spatial mapping across the WFS of E. morio sound production was feasible using recordings from shorter-duration, fixed-location recorders and autonomous underwater vehicles (AUVs). Epinephelus morio were primarily recorded in waters 15-93 m deep, with increased sound production detected in hard bottom areas and within the Steamboat Lumps Marine Protected Area (Steamboat Lumps). AUV tracks through Steamboat Lumps, an offshore marine reserve where E. morio hole excavations have been previously mapped, showed that hydrophone-integrated AUVs could accurately map the location of soniferous fish over spatial scales of <1 km. The results show that passive acoustics is an effective, non-invasive tool to map the distribution of this species over large spatial scales. © 2014 The Fisheries Society of the British Isles.

  16. Responses of neurons in cat primary auditory cortex to bird chirps: effects of temporal and spectral context.

    PubMed

    Bar-Yosef, Omer; Rotman, Yaron; Nelken, Israel

    2002-10-01

    The responses of neurons to natural sounds and simplified natural sounds were recorded in the primary auditory cortex (AI) of halothane-anesthetized cats. Bird chirps were used as the base natural stimuli. They were first presented within the original acoustic context (at least 250 msec of sounds before and after each chirp). The first simplification step consisted of extracting a short segment containing just the chirp from the longer segment. For the second step, the chirp was cleaned of its accompanying background noise. Finally, each chirp was replaced by an artificial version that had approximately the same frequency trajectory but with constant amplitude. Neurons had a wide range of different response patterns to these stimuli, and many neurons had late response components in addition, or instead of, their onset responses. In general, every simplification step had a substantial influence on the responses. Neither the extracted chirp nor the clean chirp evoked a similar response to the chirp presented within its acoustic context. The extracted chirp evoked different responses than its clean version. The artificial chirps evoked stronger responses with a shorter latency than the corresponding clean chirp because of envelope differences. These results illustrate the sensitivity of neurons in AI to small perturbations of their acoustic input. In particular, they pose a challenge to models based on linear summation of energy within a spectrotemporal receptive field.

  17. Eavesdropping on insects hidden in soil and interior structures of plants.

    PubMed

    Mankin, R W; Brandhorst-Hubbard, J; Flanders, K L; Zhang, M; Crocker, R L; Lapointe, S L; McCoy, C W; Fisher, J R; Weaver, D K

    2000-08-01

    Accelerometer, electret microphone, and piezoelectric disk acoustic systems were evaluated for their potential to detect hidden insect infestations in soil and interior structures of plants. Coleopteran grubs (the scarabaeids Phyllophaga spp. and Cyclocephala spp.) and the curculionids Diaprepes abbreviatus (L.) and Otiorhynchus sulcatus (F.) weighing 50-300 mg were detected easily in the laboratory and in the field except under extremely windy or noisy conditions. Cephus cinctus Norton (Hymenoptera: Cephidae) larvae weighing 1-12 mg could be detected in small pots of wheat in the laboratory by taking moderate precautions to eliminate background noise. Insect sounds could be distinguished from background noises by differences in frequency and temporal patterns, but insects of similarly sized species could not be distinguished easily from each other. Insect activity was highly variable among individuals and species, although D. abbreviatus grubs tended to be more active than those of O. sulcatus. Tests were done to compare acoustically predicted infestations with the contents of soil samples taken at recording sites. Under laboratory or ideal field conditions, active insects within approximately 30 cm were identified with nearly 100% reliability. In field tests under adverse conditions, the reliability decreased to approximately 75%. These results indicate that acoustic systems with vibration sensors have considerable potential as activity monitors in the laboratory and as field tools for rapid, nondestructive scouting and mapping of soil insect populations.

  18. Mating Signals Indicating Sexual Receptiveness Induce Unique Spatio-Temporal EEG Theta Patterns in an Anuran Species

    PubMed Central

    Fang, Guangzhan; Yang, Ping; Cui, Jianguo; Yao, Dezhong; Brauth, Steven E.; Tang, Yezhong

    2012-01-01

    Female mate choice is of importance for individual fitness as well as a determining factor in genetic diversity and speciation. Nevertheless relatively little is known about how females process information acquired from males during mate selection. In the Emei music frog, Babina daunchina, males normally call from hidden burrows and females in the reproductive stage prefer male calls produced from inside burrows compared with ones from outside burrows. The present study evaluated changes in electroencephalogram (EEG) power output in four frequency bands induced by male courtship vocalizations on both sides of the telencephalon and mesencephalon in females. The results show that (1) both the values of left hemispheric theta relative power and global lateralization in the theta band are modulated by the sexual attractiveness of the acoustic stimulus in the reproductive stage, suggesting the theta oscillation is closely correlated with processing information associated with mate choice; (2) mean relative power in the beta band is significantly greater in the mesencephalon than the left telencephalon, regardless of reproductive status or the biological significance of signals, indicating it is associated with processing acoustic features and (3) relative power in the delta and alpha bands are not affected by reproductive status or acoustic stimuli. The results imply that EEG power in the theta and beta bands reflect different information processing mechanisms related to vocal recognition and auditory perception in anurans. PMID:23285010
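
    The band-limited relative power and lateralization measures discussed above can be illustrated with the sketch below, computed for two synthetic channels. The theta band edges (4-8 Hz), the total-power range, and the lateralization index definition are assumptions, not necessarily the study's exact definitions.

```python
# Relative theta power per channel and a simple left-right lateralization index.
import numpy as np
from scipy.signal import welch

fs = 250                                            # EEG sampling rate (Hz), assumed
rng = np.random.default_rng(2)
t = np.arange(0, 30, 1 / fs)
left = np.sin(2 * np.pi * 6 * t) + rng.normal(size=t.size)       # strong 6 Hz (theta)
right = 0.4 * np.sin(2 * np.pi * 6 * t) + rng.normal(size=t.size)

def relative_power(x, band, fs, total=(1, 45)):
    """Band power divided by total power, both from a Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    band_power = pxx[(f >= band[0]) & (f <= band[1])].sum()
    total_power = pxx[(f >= total[0]) & (f <= total[1])].sum()
    return band_power / total_power

theta_l = relative_power(left, (4, 8), fs)
theta_r = relative_power(right, (4, 8), fs)
li = (theta_l - theta_r) / (theta_l + theta_r)      # > 0 means left-dominant theta
print(f"theta relative power L={theta_l:.2f}, R={theta_r:.2f}, LI={li:.2f}")
```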

  19. Bats coordinate sonar and flight behavior as they forage in open and cluttered environments.

    PubMed

    Falk, Benjamin; Jakobsen, Lasse; Surlykke, Annemarie; Moss, Cynthia F

    2014-12-15

    Echolocating bats use active sensing as they emit sounds and listen to the returning echoes to probe their environment for navigation, obstacle avoidance and pursuit of prey. The sensing behavior of bats includes the planning of 3D spatial trajectory paths, which are guided by echo information. In this study, we examined the relationship between active sonar sampling and flight motor output as bats changed environments from open space to an artificial forest in a laboratory flight room. Using high-speed video and audio recordings, we reconstructed and analyzed 3D flight trajectories, sonar beam aim and acoustic sonar emission patterns as the bats captured prey. We found that big brown bats adjusted their sonar call structure, temporal patterning and flight speed in response to environmental change. The sonar beam aim of the bats predicted the flight turn rate in both the open room and the forest. However, the relationship between sonar beam aim and turn rate changed in the forest during the final stage of prey pursuit, during which the bat made shallower turns. We found flight stereotypy developed over multiple days in the forest, but did not find evidence for a reduction in active sonar sampling with experience. The temporal patterning of sonar sound groups was related to path planning around obstacles in the forest. Together, these results contribute to our understanding of how bats coordinate echolocation and flight behavior to represent and navigate their environment. © 2014. Published by The Company of Biologists Ltd.

  20. Bats coordinate sonar and flight behavior as they forage in open and cluttered environments

    PubMed Central

    Falk, Benjamin; Jakobsen, Lasse; Surlykke, Annemarie; Moss, Cynthia F.

    2014-01-01

    Echolocating bats use active sensing as they emit sounds and listen to the returning echoes to probe their environment for navigation, obstacle avoidance and pursuit of prey. The sensing behavior of bats includes the planning of 3D spatial trajectory paths, which are guided by echo information. In this study, we examined the relationship between active sonar sampling and flight motor output as bats changed environments from open space to an artificial forest in a laboratory flight room. Using high-speed video and audio recordings, we reconstructed and analyzed 3D flight trajectories, sonar beam aim and acoustic sonar emission patterns as the bats captured prey. We found that big brown bats adjusted their sonar call structure, temporal patterning and flight speed in response to environmental change. The sonar beam aim of the bats predicted the flight turn rate in both the open room and the forest. However, the relationship between sonar beam aim and turn rate changed in the forest during the final stage of prey pursuit, during which the bat made shallower turns. We found flight stereotypy developed over multiple days in the forest, but did not find evidence for a reduction in active sonar sampling with experience. The temporal patterning of sonar sound groups was related to path planning around obstacles in the forest. Together, these results contribute to our understanding of how bats coordinate echolocation and flight behavior to represent and navigate their environment. PMID:25394632

  1. Impacts of short-time scale water column variability on broadband high-frequency acoustic wave propagation

    NASA Astrophysics Data System (ADS)

    Eickmeier, Justin

    Acoustical oceanography uses underwater acoustics to study the ocean, its internal layers and boundaries, and the processes occurring within it. Acoustic sensing techniques allow measurement, from within the ocean, of processes for which traditional in-situ measurements would be logistically or financially prohibitive. Acoustic signals propagate as pressure wavefronts from a source to a receiver through an ocean medium with variable physical parameters. The water column physical parameters that change acoustic wave propagation in the ocean include temperature, salinity, current, surface roughness, seafloor bathymetry, and vertical stratification over variable time scales. The impacts of short-time scale water column variability on acoustic wave propagation include coherent and incoherent surface reflections, wavefront arrival time delay, focusing or defocusing of the intensity of acoustic beams and refraction of acoustic rays. This study focuses on high-frequency broadband acoustic waves and examines the influence of short-time scale water column variability on broadband high-frequency acoustic wavefronts, from 7 to 28 kHz, in shallow water. Short-time scale variability is on the order of seconds to hours and the short-spatial scale variability is on the order of a few centimeters. Experimental results were collected during an acoustic experiment along 100 m isobaths and data analysis was conducted using available acoustic wave propagation models. Three main topics are studied to show that acoustic waves are viable as a remote sensing tool to measure oceanographic parameters in shallow water. First, coherent surface reflections forming striation patterns, from multipath receptions, through rough surface interaction of broadband acoustic signals with the dynamic sea surface are analyzed. Matched filtered results of received acoustic waves are compared with a ray tracing numerical model using a sea surface boundary generated from measured water wave spectra at the time of signal propagation. It is determined that on a time scale of seconds, corresponding to typical periods of surface water waves, the arrival times of acoustic signals reflected from surface waves appear as striation patterns in measured data and can be accurately modelled by ray tracing. Second, changes in acoustic beam arrival angle and acoustic ray path influenced by isotherm depth oscillations are analyzed using an 8-element delay-sum beamformer. The results are compared with outputs from a two-dimensional (2-D) parabolic equation (PE) model using measured sound speed profiles (SSPs) in the water column. Using the method of beamforming on the received signal, the arrival time and angle of an acoustic beam were obtained for measured acoustic signals. It is determined that the acoustic ray path, acoustic beam intensity and angular spread are a function of vertical isotherm oscillations on a time scale of minutes and can be modeled accurately by a 2-D PE model. Third, a forward problem is introduced which uses acoustic wavefronts received on a vertical line array, 1.48 km from the source, in the lower part of the water column to infer range dependence or independence in the SSP. The matched filtering results of received acoustic wavefronts at all hydrophone depths are compared with a ray tracing routine augmented to calculate only direct path and bottom reflected signals. It is determined that the SSP range dependence can be inferred on a time scale of hours using an array of hydrophones spanning the water column.
Sound speed profiles in the acoustic field were found to be range independent for 11 of the 23 hours in the measurements. A SSP cumulative reconstruction process, conducted from the seafloor to the sea surface, layer-by-layer, identifies critical segments in the SSP that define the ray path, arrival time and boundary interactions. Data-model comparison between matched filtered arrival time spread and arrival time output from the ray tracing was robust when the SSP measured at the receiver was input to the model. When the SSP measured nearest the source (at the same instant in time) was input to the ray tracing model, the data-model comparison was poor. It was determined that the cumulative sound speed change in the SSP near the source was 1.041 m/s greater than that of the SSP at the receiver and resulted in the poor data-model comparison. In this study, the influences on broadband acoustic wave propagation in the frequency range of 7 to 28 kHz of spatial and temporal changes in the oceanography of shallow water regions are addressed. Acoustic waves can be used as remote sensing tools to measure oceanographic parameters in shallow water and data-model comparison results show a direct relationship between the oceanographic variations and acoustic wave propagations.
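
    The 8-element delay-sum beamforming mentioned above can be sketched as a scan over candidate steering angles that maximizes the coherent output power. Array geometry, sound speed, and the test pulse below are illustrative assumptions, not the experiment's parameters.

```python
# Delay-and-sum beamforming on a synthetic plane wave at a vertical line array.
import numpy as np

fs = 500_000                      # sampling rate (Hz), assumed
c = 1500.0                        # nominal sound speed (m/s)
d = 0.05                          # element spacing (m), assumed (< lambda/2 at 10 kHz)
n_elem = 8
true_angle = np.deg2rad(12.0)     # arrival angle relative to broadside

t = np.arange(0, 0.005, 1 / fs)
pulse = np.sin(2 * np.pi * 10_000 * t) * np.hanning(t.size)

# Element time series: plane-wave delays of i*d*sin(theta)/c per element.
delays = np.arange(n_elem) * d * np.sin(true_angle) / c
data = np.stack([np.interp(t - tau, t, pulse, left=0.0, right=0.0) for tau in delays])

def beam_power(data, angle):
    """Output power when the array is steered toward `angle`."""
    steer = np.arange(n_elem) * d * np.sin(angle) / c
    aligned = [np.interp(t + tau, t, ch, left=0.0, right=0.0)
               for ch, tau in zip(data, steer)]
    return np.sum(np.sum(aligned, axis=0) ** 2)

scan = np.deg2rad(np.linspace(-30, 30, 241))
best = scan[np.argmax([beam_power(data, a) for a in scan])]
print(f"estimated arrival angle ≈ {np.rad2deg(best):.1f} deg")   # close to 12.0
```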

  2. Biosonar navigation above water II: exploiting mirror images.

    PubMed

    Genzel, Daria; Hoffmann, Susanne; Prosch, Selina; Firzlaff, Uwe; Wiegrebe, Lutz

    2015-02-15

    As in vision, acoustic signals can be reflected by a smooth surface creating an acoustic mirror image. Water bodies represent the only naturally occurring horizontal and acoustically smooth surfaces. Echolocating bats flying over smooth water bodies encounter echo-acoustic mirror images of objects above the surface. Here, we combined an electrophysiological approach with a behavioral experimental paradigm to investigate whether bats can exploit echo-acoustic mirror images for navigation and how these mirrorlike echo-acoustic cues are encoded in their auditory cortex. In an obstacle-avoidance task where the obstacles could only be detected via their echo-acoustic mirror images, most bats spontaneously exploited these cues for navigation. Sonar ensonifications along the bats' flight path revealed conspicuous changes of the reflection patterns with slightly increased target strengths at relatively long echo delays corresponding to the longer acoustic paths from the mirrored obstacles. Recordings of cortical spatiotemporal response maps (STRMs) describe the tuning of a unit across the dimensions of elevation and time. The majority of cortical single and multiunits showed a special spatiotemporal pattern of excitatory areas in their STRM indicating a preference for echoes with (relative to the setup dimensions) long delays and, interestingly, from low elevations. This neural preference could effectively encode a reflection pattern as it would be perceived by an echolocating bat detecting an object mirrored from below. The current study provides both behavioral and neurophysiological evidence that echo-acoustic mirror images can be exploited by bats for obstacle avoidance. This capability effectively supports echo-acoustic navigation in highly cluttered natural habitats. Copyright © 2015 the American Physiological Society.

  3. Time-saving and fail-safe dissection method for vestibulocochlear organs in gross anatomy classes.

    PubMed

    Suzuki, Ryoji; Konno, Naoaki; Ishizawa, Akimitsu; Kanatsu, Yoshinori; Funakoshi, Kodai; Akashi, Hideo; Zhou, Ming; Abe, Hiroshi

    2017-09-01

    Because the vestibulocochlear organs are tiny and complex, and are covered by the petrous part of the temporal bone, they are very difficult for medical students to dissect and visualize during gross anatomy classes. Here, we report a time-saving and fail-safe procedure we have devised, using a hand-held hobby router. Nine en bloc temporal bone samples from donated human cadavers were used as trial materials for devising an appropriate procedure for dissecting the vestibulocochlear organs. A hand-held hobby router was used to cut through the temporal bone. After trials, the most time-saving and fail-safe method was selected. The performance of the selected method was assessed by a survey of 242 sides of 121 cadavers during gross anatomy classes for vestibulocochlear dissection. The assessment was based on the observation ratio. The best procedure appeared to be removal of the external acoustic meatus roof and tympanic cavity roof together with removal of the internal acoustic meatus roof. The whole procedure was completed within two dissection classes, each lasting 4.5 hr. The students' observation ratio for the chorda tympani and the three semicircular canals improved significantly from 2013 through 2016. In our dissection class, "removal of the external acoustic meatus roof and tympanic cavity roof together with removal of the internal acoustic meatus roof" was the best procedure for students in the limited time available. Clin. Anat. 30:703-710, 2017. © 2017 Wiley Periodicals, Inc.

  4. Demonstration of a directional sonic prism in two dimensions using an air-acoustic leaky wave antenna

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naify, Christina J., E-mail: christina.naify@nrl.navy.mil; Rohde, Charles A.; Calvo, David C.

    Analysis and experimental demonstration of a two-dimensional acoustic leaky wave antenna is presented for use in air. The antenna comprises a two-dimensional waveguide patterned with radiating acoustic shunts. When excited using a single acoustic source within the waveguide, the antenna acts as a sonic prism that exhibits frequency steering. This design allows for control of acoustic steering angle using only a single source transducer and a patterned aperture. Aperture design was determined using transmission line analysis and finite element methods. The designed antenna was fabricated and the steering angle measured. The performance of the measured aperture was within 9% of predicted angle magnitudes over all examined frequencies.

  5. Propagation of noise over and through a forest stand

    Treesearch

    Lee P. Herrington; C. Brock

    1977-01-01

    Measurements of the two-dimensional acoustic field in a forest resulting from a source located outside the forest indicated that the attenuation pattern near the ground is significantly different from the pattern higher up in the forest. The patterns of attenuation support the recent theory that the forest floor is the main absorber of acoustic energy in the forest....

  6. Associations between tongue movement pattern consistency and formant movement pattern consistency in response to speech behavioral modificationsa)

    PubMed Central

    Mefferd, Antje S.

    2016-01-01

    The degree of speech movement pattern consistency can provide information about speech motor control. Although tongue motor control is particularly important because of the tongue's primary contribution to the speech acoustic signal, capturing tongue movements during speech remains difficult and costly. This study sought to determine if formant movements could be used to estimate tongue movement pattern consistency indirectly. Two age groups (seven young adults and seven older adults) and six speech conditions (typical, slow, loud, clear, fast, bite block speech) were selected to elicit an age- and task-dependent performance range in tongue movement pattern consistency. Kinematic and acoustic spatiotemporal indexes (STI) were calculated based on sentence-length tongue movement and formant movement signals, respectively. Kinematic and acoustic STI values showed strong associations across talkers and moderate to strong associations for each talker across speech tasks; although, in cases where task-related tongue motor performance changes were relatively small, the acoustic STI values were poorly associated with kinematic STI values. These findings suggest that, depending on the sensitivity needs, formant movement pattern consistency could be used in lieu of direct kinematic analysis to indirectly examine speech motor control. PMID:27908069
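
    The spatiotemporal index (STI) used above is typically computed by amplitude- and time-normalizing repeated trajectories and summing the across-repetition standard deviations at fixed relative time points. The sketch below follows that recipe with synthetic data and assumed details rather than the paper's exact procedure.

```python
# Spatiotemporal index (STI) of repeated movement or formant trajectories.
import numpy as np

def sti(trajectories, n_points=50):
    """Sum of across-repetition SDs after amplitude and linear time normalization."""
    normed = []
    for y in trajectories:
        y = np.asarray(y, dtype=float)
        y = (y - y.mean()) / y.std()                    # amplitude normalization
        normed.append(np.interp(np.linspace(0, 1, n_points),
                                np.linspace(0, 1, y.size), y))   # time normalization
    return np.sum(np.std(np.vstack(normed), axis=0))

# Synthetic "repetitions" of one sentence-length trajectory with small jitter.
rng = np.random.default_rng(3)
base = np.sin(np.linspace(0, 3 * np.pi, 200))
reps = [base + rng.normal(scale=0.1, size=base.size) for _ in range(10)]
print(f"STI ≈ {sti(reps):.2f}")    # larger values indicate less consistent patterns
```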

  7. Contrast-enhanced optical coherence microangiography with acoustic-actuated microbubbles

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Hsuan; Zhang, Jia-Wei; Yeh, Chih-Kuang; Wei, Kuo-Chen; Liu, Hao-Li; Tsai, Meng-Tsan

    2017-04-01

    In this study, we propose to use gas-filled microbubbles (MBs), simultaneously actuated by an acoustic wave, to enhance the imaging contrast of optical coherence tomography (OCT)-based angiography. In the phantom experiments, MBs produce stronger backscattered intensity, enhancing the contrast of the OCT intensity image. Moreover, simultaneous application of a low-intensity acoustic wave temporally induces local vibration of particles and MBs in the vessels, resulting in time-variant OCT intensity that can be used to enhance the contrast of OCT intensity-based angiography. Additionally, different acoustic modes and different acoustic powers for actuating MBs are applied and compared to investigate the feasibility of contrast enhancement. Finally, animal experiments are performed. The findings suggest that acoustically actuated MBs can effectively enhance the imaging contrast of OCT-based angiography, and that the imaging depth of OCT angiography is also extended.

  8. Temporal organization of an anuran acoustic community in a Taiwanese subtropical forest

    USGS Publications Warehouse

    Hsu, M.-Y.; Kam, Y.-C.; Fellers, G.M.

    2006-01-01

    We recorded anuran vocalizations in each of four habitats at Lien Hua Chih Field Station, Taiwan, between July 2000 and July 2001. For each of 27 biweekly samples, eight recorders taped calls for 1 min out of every 11 between the hours of 17:00 and 07:00. We obtained 11 481 recordings with calls, and identified 21 503 frogs or groups of frogs. These included 20 species, with an average of 10.4 ± 3.5 species calling each night. Some species called year round, others called in the spring and summer, and a third group called only in the fall and winter. The number of species calling and the maximum calling intensity were correlated with both rainfall and air temperature. The nightly pattern of calling varied among species. Most species called continuously throughout the night, whereas some had a peak right after dusk. A few species had different nightly calling patterns in different habitats. Both Rana limnocharis and Rana kuhlii changed their calling pattern in the presence of large choruses of other anuran species. © 2006 The Authors.

  9. Diel and seasonal movement pattern of the dusky grouper Epinephelus marginatus inside a marine reserve.

    PubMed

    Koeck, Barbara; Pastor, Jérémy; Saragoni, Gilles; Dalias, Nicolas; Payrot, Jérôme; Lenfant, Philippe

    2014-03-01

    Temporal movement patterns and spawning behaviour of the dusky grouper Epinephelus marginatus were investigated using depth and temperature sensors combined with acoustic telemetry. Results showed that these fish are year-round residents, remaining inside the fully protected area of the marine reserve of Cerbère-Banyuls (65 ha) and displaying a diurnal activity pattern. Records from depth sensors revealed that groupers range inside small, distinct, and individual territories. Individual variations in habitat depth are only visible on a seasonal scale, i.e., between the spawning season and the rest of the year. In fact, during summer months when the seawater temperature exceeded 20 °C, tagged groupers made vertical spawning migrations of 4-8 m in amplitude. These vertical migrations are characteristic of the reproductive behaviour of dusky groupers, during which they release their gametes. The results are notable for the implementation of management rules in marine protected areas, such as reduced navigation speed, boating, or attendance during the spawning season. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Near-Term Fetuses Process Temporal Features of Speech

    ERIC Educational Resources Information Center

    Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie

    2011-01-01

    The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…

  11. Acoustical Survey of Methane Plumes on North Hydrate Ridge: Constraining Temporal and Spatial Characteristics.

    NASA Astrophysics Data System (ADS)

    Kannberg, P. K.; Trehu, A. M.

    2008-12-01

    While methane plumes associated with hydrate formations have been acoustically imaged before, little is known about their temporal characteristics. Previous acoustic surveys have focused on determining plume location, but as far as we know, multiple, repeated surveys of the same plume have not been done prior to the survey presented here. In July 2008, we acquired sixteen identical surveys within 19 hours over the northern summit of Hydrate Ridge in the Cascadia accretionary complex using the onboard 3.5 and 12 kHz echosounders. As in previous studies, the plumes were invisible to the 3.5 kHz echosounder and clearly imaged with 12 kHz. Seafloor depth in this region is ~600 m. Three distinct plumes were detected close to where plumes were located by Heeschen et al. (2003) a decade ago. Two of the plumes disappeared at ~520 m water depth, which is the depth of the top of the gas hydrate stability as determined from CTD casts obtained during the cruise. This supports the conclusion of Heeschen et al. (2003) that the bubbles are armored by gas hydrate and that they dissolve in the water column when they leave the hydrate stability zone. One of the plumes near the northern summit, however, extended through this boundary to at least 400 m (the shallowest depth recorded). A similar phenomenon was observed in methane plumes in the Gulf of Mexico, where the methane was found to be armored by an oil skin. In addition to the steady plumes, two discrete "burps" were observed. One "burp" occurred approximately 600 m to the SSW of the northern summit. This was followed by a second strong event 300m to the north an hour later. To evaluate temporal and spatial patterns, we summed the power of the backscattered signal in different depth windows for each survey. We present the results as a movie in which the backscatter power is shown in map view as a function of time. The surveys encompassed two complete tidal cycles, but no correlation between plume location or intensity and tides is apparent in the data. Additional analyses will constrain plume strength as a function of water depth. Heeschen et al., GRL, v. 30, 2003.
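
    The depth-windowed summation of backscattered power described above can be sketched as below; the echogram, window edges, and injected "plume" are synthetic assumptions used only to show the bookkeeping.

```python
# Summing 12 kHz backscatter power within depth windows for one survey pass.
import numpy as np

rng = np.random.default_rng(4)
depths = np.arange(0, 600, 1.0)                  # 1 m depth bins down to the seafloor
n_pings = 400
echogram_db = rng.normal(-80, 3, size=(depths.size, n_pings))
echogram_db[450:, 100:250] += 20                 # synthetic "plume" below 450 m

def window_power_db(echogram_db, depths, z_top, z_bottom):
    """Total backscattered power (dB) summed over a depth window."""
    linear = 10.0 ** (echogram_db / 10.0)
    sel = (depths >= z_top) & (depths < z_bottom)
    return 10.0 * np.log10(linear[sel].sum())

for z0, z1 in [(300, 400), (400, 500), (500, 600)]:
    print(f"{z0}-{z1} m : {window_power_db(echogram_db, depths, z0, z1):6.1f} dB")
```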

  12. Tracking speech comprehension in space and time.

    PubMed

    Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D

    2006-07-01

    A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words, presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli, peaked 100-150 ms after the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) of latency measures of word recognition in individual study participants with the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting standard from deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and deviant stimulus, showed the same significant correlation with word recognition processes.

  13. Memory traces for spoken words in the brain as revealed by the hemodynamic correlate of the mismatch negativity.

    PubMed

    Shtyrov, Yury; Osswald, Katja; Pulvermüller, Friedemann

    2008-01-01

    The mismatch negativity response, considered a brain correlate of automatic preattentive auditory processing, is enhanced for word stimuli as compared with acoustically matched pseudowords. This lexical enhancement, taken as a signature of activation of language-specific long-term memory traces, was investigated here using functional magnetic resonance imaging to complement the previous electrophysiological studies. In a passive oddball paradigm, word stimuli were randomly presented as rare deviants among frequent pseudowords; the reverse conditions employed infrequent pseudowords among word stimuli. Random-effect analysis indicated clearly distinct patterns for the different lexical types. Whereas the hemodynamic mismatch response was significant for the word deviants, it did not reach significance for the pseudoword conditions. This difference, more pronounced in the left than right hemisphere, was also assessed by analyzing average parameter estimates in regions of interest within both temporal lobes. A significant hemisphere-by-lexicality interaction confirmed stronger blood oxygenation level-dependent mismatch responses to words than pseudowords in the left but not in the right superior temporal cortex. The increased left superior temporal activation and the laterality of cortical sources elicited by spoken words compared with pseudowords may indicate the activation of cortical circuits for lexical material even in passive oddball conditions and suggest involvement of the left superior temporal areas in housing such word-processing neuronal circuits.

  14. Discriminating between auditory and motor cortical responses to speech and non-speech mouth sounds

    PubMed Central

    Agnew, Z.K.; McGettigan, C.; Scott, S.K.

    2012-01-01

    Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds, or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and non-speech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to the acoustic/phonetic properties while motor responses may show more generalised responses to the acoustic stimuli. PMID:21812557

  15. In Situ Measurement of Sediment Properties and Relationship to Backscatter: An Example From the ONR Mine Burial Program, Martha's Vineyard Coastal Observatory

    NASA Astrophysics Data System (ADS)

    Kraft, B. J.; Mayer, L. A.; Simpkin, P.; Goff, J. A.; Schwab, B.; Jenkins, C.

    2002-12-01

    In support of the Office of Naval Research's Mine Burial Program (MBP), in situ acoustic and resistivity measurements were obtained using ISSAP (In situ Sound Speed and Attenuation Probe), a device developed and built by the Center for Coastal and Ocean Mapping. One of the field areas selected for the MBP experiments is the WHOI coastal observatory based off Martha's Vineyard. This area is an active natural laboratory that will provide an ideal environment for testing and observing mine migration and burial patterns due to temporal seabed processes. Seawater and surficial sediment measurements of compressional wave sound speed, attenuation, and resistivity were obtained at 87 station locations. ISSAP used four transducer probes that were arranged in a square pattern giving approximate acoustic path lengths of 30 cm and 20 cm and a maximum insertion depth of 15 cm. The transducers operated at a frequency of 65 kHz. Five acoustic paths were used; two long paths and three short paths. A ~15.4 µs pulse was generated at a repetition rate of 30 Hz. The received signal was combined with the transmitter gate pulse to generate a composite signal that was sampled at a frequency of 5 MHz with a National Instruments PCI-6110E data acquisition board. Two resistivity probes were mounted on the ISSAP platform and positioned in locations selected to limit interference with the acoustic signals. Also mounted on the platform were a color video camera and light, and a Jasco Research UWINSTRU, which measured platform pitch and roll angles, heading, depth, and temperature. At each of the 87 stations, the ISSAP probe was lowered into seawater to a location ~6 m above the seafloor. A measurement cycle was completed by transmitting 10 pulses on each of the five paths and repeating three times for a total of 150 measurements. Resistivity measurements were obtained from both probes following completion of the acoustic measurements. The ISSAP platform was then lowered into the seafloor where two acoustic and resistivity measurement cycles were completed in the sediment. Probe insertion was aided by the video signal which provided imagery of the seafloor. The instrument was removed from the sediment and a second seawater measurement cycle completed. Typically, a sequence of measurements (300 acoustic and 40 resistivity measurements in seawater and similarly in sediment) was completed in ~ 4 minutes. Recorded waveforms were processed for sound speed using two methods, cross-correlation and envelope detection. Sediment attenuation was estimated using the filter-correlation method of Courtney and Mayer. In conjunction with the MBP experiments, several surveys (sidescan, interferometric bathymetry, and multibeam) have been completed. The ability to predict quantitative acoustical and physical properties of sediments from remotely measured backscatter data will be examined.
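
    The cross-correlation processing for sound speed mentioned above amounts to picking a travel time over a known path and dividing path length by that time. The sketch below uses synthetic waveforms with the stated 65 kHz pulse, 5 MHz sampling, and ~30 cm path; the pulse shape and noise level are assumptions.

```python
# Cross-correlation travel-time pick and sound speed over a known path length.
import numpy as np
from scipy.signal import correlate

fs = 5_000_000                       # 5 MHz sampling, as described
path_m = 0.30                        # long acoustic path (~30 cm)
c_true = 1520.0                      # "true" sound speed used to synthesize data (m/s)

t = np.arange(0, 500e-6, 1 / fs)
tx = np.sin(2 * np.pi * 65e3 * t) * np.exp(-((t - 20e-6) / 8e-6) ** 2)   # 65 kHz pulse

delay_samples = int(round(path_m / c_true * fs))
rx = np.zeros_like(tx)
rx[delay_samples:] = 0.3 * tx[:tx.size - delay_samples]   # attenuated, delayed copy
rx += np.random.default_rng(5).normal(scale=0.01, size=rx.size)

xc = correlate(rx, tx, mode="full")
lag = np.argmax(xc) - (tx.size - 1)                        # lag in samples
print(f"estimated sound speed ≈ {path_m / (lag / fs):.1f} m/s")
```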

  16. Science Enabled by Ocean Observatory Acoustics

    NASA Astrophysics Data System (ADS)

    Howe, B. M.; Lee, C.; Gobat, J.; Freitag, L.; Miller, J. H.; Committee, I.

    2004-12-01

    Ocean observatories have the potential to examine the physical, chemical, biological, and geological parameters and processes of the ocean at time and space scales previously unexplored. Acoustics provides an efficient and cost-effective means by which these parameters and processes can be measured and information can be communicated. Integrated acoustics systems providing navigation and communications for mobile platforms and conducting acoustical measurements in support of science objectives are critical and essential elements of the ocean observatories presently in the planning and implementation stages. The ORION Workshop (Puerto Rico, 4-8 January 2004) developed science themes that can be addressed utilizing ocean observatory infrastructure. The use of acoustics to sense the 3-d/volumetric ocean environment on all temporal and spatial scales was discussed in many ORION working groups. Science themes that are related to acoustics and measurements using acoustics are reviewed and tabulated, as are the related and sometimes competing requirements for passive listening, acoustic navigation and acoustic communication around observatories. Sound in the sea, brought from observatories to universities and schools via the internet, will also be a major education and outreach mechanism.

  17. The Effect of Habitat Acoustics on Common Marmoset Vocal Signal Transmission

    PubMed Central

    MORRILL, RYAN J.; THOMAS, A. WREN; SCHIEL, NICOLA; SOUTO, ANTONIO; MILLER, CORY T.

    2013-01-01

    Noisy acoustic environments present several challenges for the evolution of acoustic communication systems. Among the most significant is the need to limit degradation of spectro-temporal signal structure in order to maintain communicative efficacy. This can be achieved by selecting for several potentially complementary processes. Selection can act on behavioral mechanisms permitting signalers to control the timing and occurrence of signal production to avoid acoustic interference. Likewise, the signal itself may be the target of selection, biasing the evolution of its structure to comprise acoustic features that avoid interference from ambient noise or degrade minimally in the habitat. Here, we address the latter topic for common marmoset (Callithrix jacchus) long-distance contact vocalizations, known as phee calls. Our aim was to test whether this vocalization is specifically adapted for transmission in a species-typical forest habitat, the Atlantic forests of northeastern Brazil. We combined seasonal analyses of ambient habitat acoustics with experiments in which pure tones, clicks, and vocalizations were broadcast and rerecorded at different distances to characterize signal degradation in the habitat. Ambient sound was analyzed from intervals throughout the day and over rainy and dry seasons, showing temporal regularities across varied timescales. Broadcast experiment results indicated that the tone and click stimuli showed the typically inverse relationship between frequency and signaling efficacy. Although marmoset phee calls degraded over distance with marked predictability compared with artificial sounds, they did not otherwise appear to be specially designed for increased transmission efficacy or minimal interference in this habitat. We discuss these data in the context of other similar studies and evidence of potential behavioral mechanisms for avoiding acoustic interference in order to maintain effective vocal communication in common marmosets. PMID:23592313

  18. The effect of habitat acoustics on common marmoset vocal signal transmission.

    PubMed

    Morrill, Ryan J; Thomas, A Wren; Schiel, Nicola; Souto, Antonio; Miller, Cory T

    2013-09-01

    Noisy acoustic environments present several challenges for the evolution of acoustic communication systems. Among the most significant is the need to limit degradation of spectro-temporal signal structure in order to maintain communicative efficacy. This can be achieved by selecting for several potentially complementary processes. Selection can act on behavioral mechanisms permitting signalers to control the timing and occurrence of signal production to avoid acoustic interference. Likewise, the signal itself may be the target of selection, biasing the evolution of its structure to comprise acoustic features that avoid interference from ambient noise or degrade minimally in the habitat. Here, we address the latter topic for common marmoset (Callithrix jacchus) long-distance contact vocalizations, known as phee calls. Our aim was to test whether this vocalization is specifically adapted for transmission in a species-typical forest habitat, the Atlantic forests of northeastern Brazil. We combined seasonal analyses of ambient habitat acoustics with experiments in which pure tones, clicks, and vocalizations were broadcast and rerecorded at different distances to characterize signal degradation in the habitat. Ambient sound was analyzed from intervals throughout the day and over rainy and dry seasons, showing temporal regularities across varied timescales. Broadcast experiment results indicated that the tone and click stimuli showed the typically inverse relationship between frequency and signaling efficacy. Although marmoset phee calls degraded over distance with marked predictability compared with artificial sounds, they did not otherwise appear to be specially designed for increased transmission efficacy or minimal interference in this habitat. We discuss these data in the context of other similar studies and evidence of potential behavioral mechanisms for avoiding acoustic interference in order to maintain effective vocal communication in common marmosets. © 2013 Wiley Periodicals, Inc.
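
    A minimal sketch of one way excess attenuation can be quantified in a broadcast-and-rerecord design like the one above: measured loss relative to a 1 m reference is compared with the loss expected from spherical spreading alone. The distances and received levels are hypothetical, and the habitat corrections and stimuli of the study are not reproduced.

      import numpy as np

      # Hypothetical received levels (dB) of a rerecorded test signal at several distances.
      distances_m = np.array([1.0, 5.0, 10.0, 20.0, 40.0])
      received_db = np.array([90.0, 74.5, 67.0, 58.5, 49.0])

      # Loss expected from spherical spreading alone, relative to the 1 m reference.
      spreading_db = 20.0 * np.log10(distances_m / distances_m[0])

      # Excess attenuation = measured loss minus spreading loss; positive values indicate
      # habitat-induced degradation beyond simple geometric spreading.
      measured_loss_db = received_db[0] - received_db
      excess_db = measured_loss_db - spreading_db
      for d, ea in zip(distances_m, excess_db):
          print(f"{d:5.1f} m: excess attenuation {ea:5.1f} dB")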

  19. Directional and dynamic modulation of the optical emission of an individual GaAs nanowire using surface acoustic waves.

    PubMed

    Kinzel, Jörg B; Rudolph, Daniel; Bichler, Max; Abstreiter, Gerhard; Finley, Jonathan J; Koblmüller, Gregor; Wixforth, Achim; Krenner, Hubert J

    2011-04-13

    We report on optical experiments performed on individual GaAs nanowires and the manipulation of their temporal emission characteristics using a surface acoustic wave. We find a pronounced, characteristic suppression of the emission intensity for the surface acoustic wave propagation aligned with the axis of the nanowire. Furthermore, we demonstrate that this quenching is dynamical as it shows a pronounced modulation as the local phase of the surface acoustic wave is tuned. These effects are strongly reduced for a surface acoustic wave applied in the direction perpendicular to the axis of the nanowire due to their inherent one-dimensional geometry. We resolve a fully dynamic modulation of the nanowire emission up to 678 MHz not limited by the physical properties of the nanowires.

  20. Basilar-membrane interference patterns from multiple internal reflection of cochlear traveling waves.

    PubMed

    Shera, Christopher A; Cooper, Nigel P

    2013-04-01

    At low stimulus levels, basilar-membrane (BM) mechanical transfer functions in sensitive cochleae manifest a quasiperiodic rippling pattern in both amplitude and phase. Analysis of the responses of active cochlear models suggests that the rippling is a mechanical interference pattern created by multiple internal reflection within the cochlea. In models, the interference arises when reverse-traveling waves responsible for stimulus-frequency otoacoustic emissions (SFOAEs) reflect off the stapes on their way to the ear canal, launching a secondary forward-traveling wave that combines with the primary wave produced by the stimulus. Frequency-dependent phase differences between the two waves then create the rippling pattern measurable on the BM. Measurements of BM ripples and SFOAEs in individual chinchilla ears demonstrate that the ripples are strongly correlated with the acoustic interference pattern measured in ear-canal pressure, consistent with a common origin involving the generation of SFOAEs. In BM responses to clicks, the ripples appear as temporal fine structure in the response envelope (multiple lobes, waxing and waning). Analysis of the ripple spacing and response phase gradients provides a test for the role of fast- and slow-wave modes of reverse energy propagation within the cochlea. The data indicate that SFOAE delays are consistent with reverse slow-wave propagation but much too long to be explained by fast waves.

  1. Neural source dynamics of brain responses to continuous stimuli: Speech processing from acoustics to comprehension.

    PubMed

    Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z

    2018-05-15

    Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
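
    The kernel-estimation idea above can be sketched with ridge-regularized least squares standing in for the boosting algorithm used in the study; the stimulus feature, lag range, sampling rate, and regularization value below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      fs = 100                              # Hz, an assumed rate for the continuous predictor
      n = 60 * fs                           # one minute of data
      stim = rng.standard_normal(n)         # stand-in for a continuous stimulus feature

      # Simulate a response as the convolution of the stimulus with a known kernel, plus noise.
      lags = np.arange(40)                  # 0-390 ms of response-function lags
      kern_true = np.exp(-lags / 10.0) * np.sin(lags / 3.0)
      resp = np.convolve(stim, kern_true)[:n] + 0.5 * rng.standard_normal(n)

      # Build the lagged design matrix X so that resp is approximately X @ kernel.
      X = np.zeros((n, lags.size))
      for j, lag in enumerate(lags):
          X[lag:, j] = stim[:n - lag]

      # Ridge-regularized estimate of the temporal response function.
      lam = 100.0
      kern_est = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ resp)
      print("correlation with true kernel:", round(np.corrcoef(kern_est, kern_true)[0, 1], 3))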

  2. Vertical velocity structure and geometry of clear air convective elements

    NASA Technical Reports Server (NTRS)

    Rowland, J. R.; Arnold, A.

    1975-01-01

    The paper discusses observations of individual convective elements with a high-power narrow-beam scanning radar, an FM-CW radar, and an acoustic sounder, including the determination of the vertical air velocity patterns of convective structures with the FM-CW radar and acoustic sounder. Data are presented which link the observed velocity structure and geometrical patterns to previously proposed models of boundary layer convection. It is shown that the high-power radar provides a clear three-dimensional picture of convective cells and fields over a large area with a resolution of 150 m, where the convective cells are roughly spherical. Analysis of time-height records of the FM-CW radar and acoustic sounder confirms the downdraft-entrainment mechanism of the convective cell. The Doppler return of the acoustic sounder and the insect-trail slopes on FM-CW radar records are independent but redundant methods for obtaining the vertical velocity patterns of convective structures.

  3. Turbomachinery noise studies of the AiResearch QCGAT engine with inflow control

    NASA Technical Reports Server (NTRS)

    Mcardle, J. G.; Homyak, L.; Chrulski, D. D.

    1981-01-01

    The AiResearch Quiet Clean General Aviation Turbofan engine was tested on an outdoor test stand to compare the acoustic performance of two inflow control devices (ICD's) of similar design, and three inlet lips of different external shape. Only small performance differences were found. Far-field directivity patterns calculated by applicable existing analyses were compared with the measured tone and broadband patterns. For some of these comparisons, tests were made with an ICD to reduce rotor/inflow disturbance interaction noise, or with the acoustic suppression panels in the inlet or bypass duct covered with aluminum tape to determine hard wall acoustic performance. The comparisons showed that the analytical expressions used predict many directivity pattern features and trends, but can deviate in shape from the measured patterns under certain engine operating conditions. Some patterns showed lobes from modes attributable to rotor/engine strut interaction sources.

  4. Spatial/Temporal Variations of Crime: A Routine Activity Theory Perspective.

    PubMed

    de Melo, Silas Nogueira; Pereira, Débora V S; Andresen, Martin A; Matias, Lindon Fonseca

    2018-05-01

    Temporal and spatial patterns of crime in Campinas, Brazil, are analyzed considering the relevance of routine activity theory in a Latin American context. We use geo-referenced criminal event data, 2010-2013, analyzing spatial patterns using census tracts and temporal patterns considering seasons, months, days, and hours. Our analyses include difference in means tests, count-based regression models, and Kulldorff's scan test. We find that crime in Campinas, Brazil, exhibits both temporal and spatial-temporal patterns. However, the presence of these patterns at the different temporal scales varies by crime type. Specifically, not all crime types have statistically significant temporal patterns at all scales of analysis. As such, routine activity theory works well to explain temporal and spatial-temporal patterns of crime in Campinas, Brazil. However, local knowledge of Brazilian culture is necessary for understanding a portion of these crime patterns.

  5. Single-click beam patterns suggest dynamic changes to the field of view of echolocating Atlantic spotted dolphins (Stenella frontalis) in the wild.

    PubMed

    Jensen, Frants H; Wahlberg, Magnus; Beedholm, Kristian; Johnson, Mark; de Soto, Natacha Aguilar; Madsen, Peter T

    2015-05-01

    Echolocating animals exercise an extensive control over the spectral and temporal properties of their biosonar signals to facilitate perception of their actively generated auditory scene when homing in on prey. The intensity and directionality of the biosonar beam defines the field of view of echolocating animals by affecting the acoustic detection range and angular coverage. However, the spatial relationship between an echolocating predator and its prey changes rapidly, resulting in different biosonar requirements throughout prey pursuit and capture. Here, we measured single-click beam patterns using a parametric fit procedure to test whether free-ranging Atlantic spotted dolphins (Stenella frontalis) modify their biosonar beam width. We recorded echolocation clicks using a linear array of receivers and estimated the beam width of individual clicks using a parametric spectral fit, cross-validated with well-established composite beam pattern estimates. The dolphins apparently increased the biosonar beam width, to a large degree without changing the signal frequency, when they approached the recording array. This is comparable to bats that also expand their field of view during prey capture, but achieve this by decreasing biosonar frequency. This behaviour may serve to decrease the risk that rapid escape movements of prey take them outside the biosonar beam of the predator. It is likely that shared sensory requirements have resulted in bats and toothed whales expanding their acoustic field of view at close range to increase the likelihood of successfully acquiring prey using echolocation, representing a case of convergent evolution of echolocation behaviour between these two taxa. © 2015. Published by The Company of Biologists Ltd.
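
    The study describes a parametric fit to single-click beam patterns without the details reproduced here; as an illustration, a flat circular-piston model is a common parameterization for toothed-whale beams, and the sketch below fits an equivalent piston radius to hypothetical off-axis click levels (all values assumed).

      import numpy as np
      from scipy.special import j1
      from scipy.optimize import curve_fit

      c = 1500.0          # sound speed in seawater, m/s
      f = 80.0e3          # assumed centroid frequency of a click, Hz
      k = 2 * np.pi * f / c

      def piston_db(theta_deg, a):
          """Relative level (dB) of a flat circular piston of radius a at off-axis angle theta."""
          x = k * a * np.sin(np.radians(theta_deg))
          x = np.where(np.abs(x) < 1e-9, 1e-9, x)
          return 20 * np.log10(np.abs(2 * j1(x) / x))

      # Hypothetical off-axis levels (degrees, dB re on-axis) measured along a receiver array.
      angles = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
      levels = np.array([0.0, -0.6, -2.5, -5.9, -11.3, -23.0])

      (a_fit,), _ = curve_fit(piston_db, angles, levels, p0=[0.02])
      beamwidth = 2 * np.degrees(np.arcsin(1.616 / (k * a_fit)))   # approximate -3 dB width
      print(f"equivalent piston radius: {100 * a_fit:.1f} cm, -3 dB beamwidth: {beamwidth:.1f} deg")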

  6. Tunable damper for an acoustic wave guide

    DOEpatents

    Rogers, Samuel C.

    1984-01-01

    A damper for tunably damping acoustic waves in an ultrasonic waveguide is provided which may be used in a hostile environment such as a nuclear reactor. The area of the waveguide, which may be a selected size metal rod in which acoustic waves are to be damped, is wrapped, or surrounded, by a mass of stainless steel wool. The wool-wrapped portion is then sandwiched between tuning plates, which may also be stainless steel, by means of clamping screws which may be adjusted to change the clamping force of the sandwiched assembly along the waveguide section. The plates are preformed along their length in a sinusoidally bent pattern with a period approximately equal to the acoustic wavelength which is to be damped. The bent patterns of the opposing plates are in phase along their length relative to their sinusoidal patterns so that as the clamping screws are tightened a bending stress is applied to the waveguide at 180° intervals along the damping section to oppose the acoustic wave motions in the waveguide and provide good coupling of the wool to the guide. The damper is tuned by selectively tightening the clamping screws while monitoring the amplitude of the acoustic waves launched in the waveguide. It may be selectively tuned to damp particular acoustic wave modes (torsional or extensional, for example) and/or frequencies while allowing others to pass unattenuated.

  7. Tunable damper for an acoustic wave guide

    DOEpatents

    Rogers, S.C.

    1982-10-21

    A damper for tunably damping acoustic waves in an ultrasonic waveguide is provided which may be used in a hostile environment such as a nuclear reactor. The area of the waveguide, which may be a selected size metal rod in which acoustic waves are to be damped, is wrapped, or surrounded, by a mass of stainless steel wool. The wool-wrapped portion is then sandwiched between tuning plates, which may also be stainless steel, by means of clamping screws which may be adjusted to change the clamping force of the sandwiched assembly along the waveguide section. The plates are preformed along their length in a sinusoidally bent pattern with a period approximately equal to the acoustic wavelength which is to be damped. The bent patterns of the opposing plates are in phase along their length relative to their sinusoidal patterns so that as the clamping screws are tightened a bending stress is applied to the waveguide at 180° intervals along the damping section to oppose the acoustic wave motions in the waveguide and provide good coupling of the wool to the guide. The damper is tuned by selectively tightening the clamping screws while monitoring the amplitude of the acoustic waves launched in the waveguide. It may be selectively tuned to damp particular acoustic wave modes (torsional or extensional, for example) and/or frequencies while allowing others to pass unattenuated.

  8. Levitation of objects using acoustic energy

    NASA Technical Reports Server (NTRS)

    Whymark, R. R.

    1975-01-01

    Activated sound source establishes standing-wave pattern in gap between source and acoustic reflector. Solid or liquid material introduced in region will move to one of the low pressure areas produced at antinodes and remain suspended as long as acoustic signal is present.

  9. Broadscale postseismic gravity change following the 2011 Tohoku-Oki earthquake and implication for deformation by viscoelastic relaxation and afterslip.

    PubMed

    Han, Shin-Chan; Sauber, Jeanne; Pollitz, Fred

    2014-08-28

    The analysis of GRACE gravity data revealed postseismic gravity increase by 6 μGal over a 500 km scale within a couple of years after the 2011 Tohoku-Oki earthquake, which is nearly 40-50% of the coseismic gravity change. It originates mostly from changes in the isotropic component corresponding to the Mrr moment tensor element. The exponential decay with rapid change in a year and gradual change afterward is a characteristic temporal pattern. Both viscoelastic relaxation and afterslip models produce reasonable agreement with the GRACE free-air gravity observation, while their Bouguer gravity patterns and seafloor vertical deformations are distinctly different. The postseismic gravity variation is best modeled by the biviscous relaxation with a transient and steady state viscosity of 10^18 and 10^19 Pa s, respectively, for the asthenosphere. Our calculated higher-resolution viscoelastic relaxation model, underlying the partially ruptured elastic lithosphere, yields the localized postseismic subsidence above the hypocenter reported from the GPS-acoustic seafloor surveying.
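
    The two-timescale (biviscous) character of the relaxation can be illustrated by fitting a pair of saturating exponentials to a postseismic gravity time series; the monthly values, amplitudes, and time constants below are synthetic stand-ins, not the GRACE data or the study's mechanical model.

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(t, a1, tau1, a2, tau2):
          """Fast transient plus slower steady-state relaxation, both saturating exponentials."""
          return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

      # Hypothetical monthly gravity changes (microGal) over ~2.5 years after the earthquake.
      t = np.arange(1, 31) / 12.0                       # years
      obs = biexp(t, 3.5, 0.3, 3.0, 2.5) + np.random.default_rng(1).normal(0.0, 0.3, t.size)

      popt, _ = curve_fit(biexp, t, obs, p0=[3.0, 0.5, 3.0, 3.0], maxfev=10000)
      print("fast term: %.1f uGal, tau = %.2f yr" % (popt[0], popt[1]))
      print("slow term: %.1f uGal, tau = %.2f yr" % (popt[2], popt[3]))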

  10. Broadscale postseismic gravity change following the 2011 Tohoku-Oki earthquake and implication for deformation by viscoelastic relaxation and afterslip

    PubMed Central

    Han, Shin-Chan; Sauber, Jeanne; Pollitz, Fred

    2014-01-01

    The analysis of GRACE gravity data revealed postseismic gravity increase by 6 μGal over a 500 km scale within a couple of years after the 2011 Tohoku-Oki earthquake, which is nearly 40–50% of the coseismic gravity change. It originates mostly from changes in the isotropic component corresponding to the Mrr moment tensor element. The exponential decay with rapid change in a year and gradual change afterward is a characteristic temporal pattern. Both viscoelastic relaxation and afterslip models produce reasonable agreement with the GRACE free-air gravity observation, while their Bouguer gravity patterns and seafloor vertical deformations are distinctly different. The postseismic gravity variation is best modeled by the biviscous relaxation with a transient and steady state viscosity of 10^18 and 10^19 Pa s, respectively, for the asthenosphere. Our calculated higher-resolution viscoelastic relaxation model, underlying the partially ruptured elastic lithosphere, yields the localized postseismic subsidence above the hypocenter reported from the GPS-acoustic seafloor surveying. PMID:25821272

  11. A MEMS Condenser Microphone-Based Intracochlear Acoustic Receiver.

    PubMed

    Pfiffner, Flurin; Prochazka, Lukas; Peus, Dominik; Dobrev, Ivo; Dalbert, Adrian; Sim, Jae Hoon; Kesterke, Rahel; Walraevens, Joris; Harris, Francesca; Roosli, Christof; Obrist, Dominik; Huber, Alexander

    2017-10-01

    Intracochlear sound pressure (ICSP) measurements are limited by the small dimensions of the human inner ear and the requirements imposed by the liquid medium. A robust intracochlear acoustic receiver (ICAR) for repeated use with a simple data acquisition system that provides the required high sensitivity and small dimensions does not yet exist. The work described in this report aims to fill this gap and presents a new microelectromechanical systems (MEMS) condenser microphone (CMIC)-based ICAR concept suitable for ICSP measurements in human temporal bones. The ICAR head consisted of a passive protective diaphragm (PD) sealing the MEMS CMIC against the liquid medium, enabling insertion into the inner ear. The components of the MEMS CMIC-based ICAR were expressed by a lumped element model (LEM) and compared to the performance of successfully fabricated ICARs. Good agreement was achieved between the LEM and the measurements with different sizes of the PD. The ICSP measurements in a human cadaver temporal bone yielded data in agreement with the literature. Our results confirm that the presented MEMS CMIC-based ICAR is a promising technology for measuring ICSP in human temporal bones in the audible frequency range. A sensor for evaluation of the biomechanical hearing process by quantification of ICSP is presented. The concept has potential as an acoustic receiver in totally implantable cochlear implants.

  12. Comparison of temporal and spectral scattering methods using acoustically large breast models derived from magnetic resonance images.

    PubMed

    Hesford, Andrew J; Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C

    2014-08-01

    Accurate and efficient modeling of ultrasound propagation through realistic tissue models is important to many aspects of clinical ultrasound imaging. Simplified problems with known solutions are often used to study and validate numerical methods. Greater confidence in a time-domain k-space method and a frequency-domain fast multipole method is established in this paper by analyzing results for realistic models of the human breast. Models of breast tissue were produced by segmenting magnetic resonance images of ex vivo specimens into seven distinct tissue types. After confirming with histologic analysis by pathologists that the model structures mimicked in vivo breast, the tissue types were mapped to variations in sound speed and acoustic absorption. Calculations of acoustic scattering by the resulting model were performed on massively parallel supercomputer clusters using parallel implementations of the k-space method and the fast multipole method. The efficient use of these resources was confirmed by parallel efficiency and scalability studies using large-scale, realistic tissue models. Comparisons between the temporal and spectral results were performed in representative planes by Fourier transforming the temporal results. An RMS field error less than 3% throughout the model volume confirms the accuracy of the methods for modeling ultrasound propagation through human breast.
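
    The accuracy comparison described above amounts to Fourier-transforming the time-domain field, selecting the bin at the comparison frequency, and computing a normalized RMS difference against the frequency-domain field; the sketch below assumes both fields are available on a common plane of grid points, with shapes and values that are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      fs = 20.0e6                       # temporal sampling rate, Hz (assumption)
      f0 = 2.5e6                        # comparison frequency, Hz (assumption)
      nt, ny, nx = 512, 64, 64          # time samples and grid points in the plane

      # Stand-ins for the two solvers' outputs: a time-domain field p(t, y, x) and a complex
      # frequency-domain field at f0 (here derived from the same data plus a small perturbation).
      p_time = rng.standard_normal((nt, ny, nx))
      spectrum = np.fft.rfft(p_time, axis=0)
      freqs = np.fft.rfftfreq(nt, d=1.0 / fs)
      kbin = int(np.argmin(np.abs(freqs - f0)))
      p_freq = spectrum[kbin] * (1 + 0.02 * rng.standard_normal((ny, nx)))

      # Normalized RMS field error between the two results at the comparison frequency.
      err = np.sqrt(np.mean(np.abs(spectrum[kbin] - p_freq) ** 2) / np.mean(np.abs(p_freq) ** 2))
      print(f"RMS field error: {100 * err:.2f}%")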

  13. Ionospheric acoustic and gravity wave activity above low-latitude thunderstorms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lay, Erin Hoffmann

    In this report, we study the correlation between thunderstorm activity and ionospheric gravity and acoustic waves in the low-latitude ionosphere. We use ionospheric total electron content (TEC) measurements from the Low Latitude Ionospheric Sensor Network (LISN) and lightning measurements from the World-Wide Lightning Location Network (WWLLN). We find that ionospheric acoustic waves show a strong diurnal pattern in summer, peaking in the pre-midnight time period. However, the peak magnitude does not correspond to thunderstorm area, and the peak time is significantly after the peak in thunderstorm activity. Wintertime acoustic wave activity has no discernible pattern in these data. The coverage area of ionospheric gravity waves in the summer was found to increase with increasing thunderstorm activity. Wintertime gravity wave activity has an observable diurnal pattern unrelated to thunderstorm activity. These findings show that while thunderstorms are not the only, or dominant, source of ionospheric perturbations at low latitudes, they do have an observable effect on gravity wave activity and could be influential in acoustic wave activity.

  14. Articulatory mediation of speech perception: a causal analysis of multi-modal imaging data.

    PubMed

    Gow, David W; Segawa, Jennifer A

    2009-02-01

    The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causation analyses of high spatiotemporal resolution neural activation data derived from the integration of magnetic resonance imaging, magnetoencephalography and electroencephalography, to examine the role of lexical and articulatory mediation in listeners' ability to use phonetic context to compensate for place assimilation. Listeners heard two-word phrases such as pen pad and then saw two pictures, from which they had to select the one that depicted the phrase. Assimilation, lexical competitor environment and the phonological validity of assimilation context were all manipulated. Behavioral data showed an effect of context on the interpretation of assimilated segments. Analysis of 40 Hz gamma phase locking patterns identified a large distributed neural network including 16 distinct regions of interest (ROIs) spanning portions of both hemispheres in the first 200 ms of post-assimilation context. Granger analyses of individual conditions showed differing patterns of causal interaction between ROIs during this interval, with hypothesized lexical and articulatory structures and pathways driving phonetic activation in the posterior superior temporal gyrus in assimilation conditions, but not in phonetically unambiguous conditions. These results lend strong support to the motor theory of speech perception, and clarify the role of lexical mediation in the phonetic processing of assimilated speech.
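
    A Granger analysis between two regions can be sketched using the statsmodels package as a stand-in for the study's own pipeline; the two time courses below are synthetic, the lag order is arbitrary, and nothing of the MEG/EEG/MRI integration or the phase-locking analysis is reproduced.

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(3)
      n = 500
      # Two hypothetical ROI activation time courses; y depends on lagged x plus noise,
      # so x should Granger-cause y but not the reverse.
      x = rng.standard_normal(n)
      y = np.zeros(n)
      for t in range(2, n):
          y[t] = 0.6 * x[t - 1] + 0.2 * x[t - 2] + 0.3 * rng.standard_normal()

      # Column order matters: the test asks whether the second column helps predict the first.
      results = grangercausalitytests(np.column_stack([y, x]), maxlag=3)
      for lag, out in results.items():
          fstat, pval = out[0]["ssr_ftest"][:2]
          print(f"lag {lag}: F = {fstat:.1f}, p = {pval:.3g}")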

  15. Dual-Pitch Processing Mechanisms in Primate Auditory Cortex

    PubMed Central

    Bendor, Daniel; Osmanski, Michael S.

    2012-01-01

    Pitch, our perception of how high or low a sound is on a musical scale, is a fundamental perceptual attribute of sounds and is important for both music and speech. After more than a century of research, the exact mechanisms used by the auditory system to extract pitch are still being debated. Theoretically, pitch can be computed using either spectral or temporal acoustic features of a sound. We have investigated how cues derived from the temporal envelope and spectrum of an acoustic signal are used for pitch extraction in the common marmoset (Callithrix jacchus), a vocal primate species, by measuring pitch discrimination behaviorally and examining pitch-selective neuronal responses in auditory cortex. We find that pitch is extracted by marmosets using temporal envelope cues for lower pitch sounds composed of higher-order harmonics, whereas spectral cues are used for higher pitch sounds with lower-order harmonics. Our data support dual-pitch processing mechanisms, originally proposed by psychophysicists based on human studies, whereby pitch is extracted using a combination of temporal envelope and spectral cues. PMID:23152599

  16. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  17. Long time series of infrasonic records at open-vent volcanoes (Yasur volcano, Vanuatu, 2003-2014): the remarkable temporal stability of magma viscosity

    NASA Astrophysics Data System (ADS)

    Vergniolle, S.; Souty, V.; Zielinski, C.; Bani, P.; LE Pichon, A.; Lardy, M.; Millier, P.; Herry, P.; Todman, S.; Garaebiti, E.

    2017-12-01

    Open-vent volcanoes, which often produce series of Strombolian explosions of varying intensity, respond, although with a delay, to any change in the degassing pattern, providing a quasi-direct route to processes at depth. They display persistent volcanic activity, although of variable intensity. Long time series at open-vent volcanoes could therefore be key measurements for unravelling the physical processes at the origin of Strombolian explosions and be crucial for monitoring. Continuous infrasonic records can be used to estimate the gas volume expelled at the vent during explosions (the bursting of a long slug). The gas volume of each explosion is deduced from two successive integrations of the acoustic pressure (monopole assumption). Here we analysed more than 4 years of infrasonic records at Yasur volcano (Vanuatu), spanning 2003 to 2014 and organised into 8 main quasi-continuous periods. The relationship between the gas volume of each explosion and its associated maximum positive acoustic pressure, a proxy for the inner gas overpressure at bursting, shows a remarkably stable trend over the 8 periods. Two main trends exist: one covers the full range of acoustic pressures (called « strong explosions ») and the second represents explosions with a large gas volume and mild acoustic pressure. The class of « strong explosions » clearly follows the model of Del Bello et al. (2012), which shows that the inner gas overpressure at bursting, here empirically measured by the maximum acoustic pressure, is proportional to the gas volume. Constraints on magma viscosity and conduit radius are deduced from this trend and from the gas volume at the transition between passive and active degassing. The remarkable stability of this trend over time suggests that 1) the magma viscosity is stable at the depth where gas overpressure is produced within the slug and 2) any potential changes in magma viscosity occur very close to the top of the magma column.
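
    The double-integration step can be sketched for a monopole source as follows; the source-receiver distance, air density, sampling rate, and the synthetic source-time function are all assumptions, and the detrending and windowing needed for real records are omitted.

      import numpy as np

      rho = 1.1            # air density near the vent, kg/m^3 (assumption)
      r = 300.0            # source-receiver distance, m (assumption)
      fs = 50.0            # infrasound sampling rate, Hz (assumption)
      t = np.arange(0.0, 10.0, 1.0 / fs)

      # Synthetic explosion: a slug releasing V0 of gas over ~1 s (logistic ramp), radiated
      # as a monopole, for which p(r, t) = rho / (4 * pi * r) * d2V/dt2.
      V0 = 2000.0                                           # expelled gas volume, m^3 (assumption)
      V_true = V0 / (1.0 + np.exp(-(t - 4.0) / 0.3))
      p = rho / (4 * np.pi * r) * np.gradient(np.gradient(V_true, t), t)

      # Invert the monopole relation with two successive time integrations of the pressure.
      dVdt = (4 * np.pi * r / rho) * np.cumsum(p) / fs      # volume flux, m^3/s
      V_est = np.cumsum(dVdt) / fs                          # expelled gas volume, m^3
      print(f"recovered gas volume: {V_est[-1]:.0f} m^3 (true value {V0:.0f} m^3)")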

  18. Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech.

    PubMed

    Khalighinejad, Bahar; Cruzatto da Silva, Guilherme; Mesgarani, Nima

    2017-02-22

    Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for dynamic processing of speech sounds in the auditory pathway. Copyright © 2017 Khalighinejad et al.

  19. Noise-induced hearing loss increases the temporal precision of complex envelope coding by auditory-nerve fibers

    PubMed Central

    Henry, Kenneth S.; Kale, Sushrut; Heinz, Michael G.

    2014-01-01

    While changes in cochlear frequency tuning are thought to play an important role in the perceptual difficulties of people with sensorineural hearing loss (SNHL), the possible role of temporal processing deficits remains less clear. Our knowledge of temporal envelope coding in the impaired cochlea is limited to two studies that examined auditory-nerve fiber responses to narrowband amplitude modulated stimuli. In the present study, we used Wiener-kernel analyses of auditory-nerve fiber responses to broadband Gaussian noise in anesthetized chinchillas to quantify changes in temporal envelope coding with noise-induced SNHL. Temporal modulation transfer functions (TMTFs) and temporal windows of sensitivity to acoustic stimulation were computed from 2nd-order Wiener kernels and analyzed to estimate the temporal precision, amplitude, and latency of envelope coding. Noise overexposure was associated with slower (less negative) TMTF roll-off with increasing modulation frequency and reduced temporal window duration. The results show that at equal stimulus sensation level, SNHL increases the temporal precision of envelope coding by 20–30%. Furthermore, SNHL increased the amplitude of envelope coding by 50% in fibers with CFs from 1–2 kHz and decreased mean response latency by 0.4 ms. While a previous study of envelope coding demonstrated a similar increase in response amplitude, the present study is the first to show enhanced temporal precision. This new finding may relate to the use of a more complex stimulus with broad frequency bandwidth and a dynamic temporal envelope. Exaggerated neural coding of fast envelope modulations may contribute to perceptual difficulties in people with SNHL by acting as a distraction from more relevant acoustic cues, especially in fluctuating background noise. Finally, the results underscore the value of studying sensory systems with more natural, real-world stimuli. PMID:24596545
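
    As a simplified illustration of the reverse-correlation idea (the study used second-order Wiener kernels, which are not reproduced here), the sketch below recovers a first-order kernel as the spike-triggered average of a broadband Gaussian noise stimulus; the simulated fiber, kernel, and firing rates are all assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      fs = 10000                                 # Hz (assumption)
      stim = rng.standard_normal(20 * fs)        # 20 s of broadband Gaussian noise

      # Simulate a fiber whose firing rate follows the stimulus filtered by a known kernel.
      lags = np.arange(100)                      # 0-9.9 ms of kernel lags
      kern_true = np.exp(-lags / 20.0) * np.sin(lags / 5.0)
      drive = np.convolve(stim, kern_true)[:stim.size]
      rate = np.clip(50 + 30 * drive, 0, None) / fs          # spike probability per sample
      spikes = rng.random(stim.size) < rate

      # First-order kernel estimate: spike-triggered average of the preceding stimulus.
      idx = np.flatnonzero(spikes)
      idx = idx[idx >= lags.size]
      sta = np.mean([stim[i - lags.size + 1:i + 1][::-1] for i in idx], axis=0)
      print("STA vs. true kernel correlation:", round(np.corrcoef(sta, kern_true)[0, 1], 2))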

  20. Observation of topological edge states of acoustic metamaterials at subwavelength scale

    NASA Astrophysics Data System (ADS)

    Dai, Hongqing; Jiao, Junrui; Xia, Baizhan; Liu, Tingting; Zheng, Shengjie; Yu, Dejie

    2018-05-01

    Topological states are of key importance for acoustic wave systems owing to their unique transport properties. In this study, we develop a hexagonal array of hexagonal columns with Helmholtz resonators to obtain subwavelength Dirac cones. Rotation operations are performed to open the Dirac cones and obtain acoustic valley vortex states. In addition, we calculate the angular-dependent frequencies for the band edges at the K-point. Through a topological phase transition, the topological phase of pattern A can change into that of pattern B. The calculations for the bulk dispersion curves show that the acoustic metamaterials exhibit BA-type and AB-type topological edge states. Experimental results demonstrate that a sound wave can transmit well along the topological path. This study could reveal a simple approach to create acoustic topological edge states at the subwavelength scale.

  1. Acoustic agglomeration methods and apparatus

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B. (Inventor)

    1984-01-01

    Methods are described for using acoustic energy to agglomerate fine particles on the order of one micron in diameter that are suspended in gas, to provide agglomerates large enough for efficient removal by other techniques. The gas with suspended particles is passed through the length of a chamber while acoustic energy at a resonant chamber mode is applied to set up one or more acoustic standing wave patterns that vibrate the suspended particles to bring them together so they agglomerate. Several widely different frequencies can be applied to efficiently vibrate particles of widely differing sizes. The standing wave pattern can be applied along directions transverse to the flow of the gas. The particles can be made to move in circles by applying acoustic energy in perpendicular directions with the energy in both directions being of the same wavelength but 90 deg out of phase.
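
    For a simple one-dimensional resonant chamber, the mode frequencies and the positions at which a standing wave pattern holds particles can be sketched as below; the chamber dimension and the sound speed of the carrier gas are assumptions, and the flow and multi-frequency drive of the actual apparatus are not modeled.

      import numpy as np

      c = 343.0      # sound speed in the carrier gas, m/s (assumption)
      L = 0.5        # chamber dimension transverse to the flow, m (assumption)

      for n in (1, 2, 3):
          f_n = n * c / (2 * L)                            # resonant frequency of mode n
          # For a rigid-walled mode with pressure ~ cos(n*pi*x/L), pressure nodes fall at
          # odd multiples of L/(2n); suspended particles tend to collect near these planes.
          nodes = [(2 * k + 1) * L / (2 * n) for k in range(n)]
          print(f"mode {n}: f = {f_n:6.1f} Hz, pressure nodes at "
                + ", ".join(f"{x:.3f} m" for x in nodes))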

  2. Simulation of Acoustics for Ares I Scale Model Acoustic Tests

    NASA Technical Reports Server (NTRS)

    Putnam, Gabriel; Strutzenberg, Louise L.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity acoustic measurements useful for validation including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. To take advantage of this data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. Results from ASMAT simulations with the rocket in both held down and elevated configurations, as well as with and without water suppression have been compared to acoustic data collected from similar live-fire tests. Results of acoustic comparisons have shown good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured including the plume shock structure, the igniter pulse transient, and the ignition overpressure.

  3. Spectral identification of sperm whales from Littoral Acoustic Demonstration Center passive acoustic recordings

    NASA Astrophysics Data System (ADS)

    Sidorovskaia, Natalia A.; Richard, Blake; Ioup, George E.; Ioup, Juliette W.

    2005-09-01

    The Littoral Acoustic Demonstration Center (LADC) made a series of passive broadband acoustic recordings in the Gulf of Mexico and Ligurian Sea to study noise and marine mammal phonations. The collected data contain a large number of sperm whale phonations of various types, such as isolated clicks and communication codas. It was previously reported that the spectrograms of the extracted clicks and codas contain well-defined null patterns that appear to be unique to individuals. The null pattern is formed by individual features of the sound production organs of an animal. These observations motivated the present studies of adapting human speech identification techniques to deep-diving marine mammal phonations. A three-state trained hidden Markov model (HMM) was used with the phonation spectra of sperm whales. The HMM algorithm gave 75% accuracy in identifying individuals when initially tested on the acoustic data set correlated with visual observations of sperm whales. A comparison of the identification accuracy based on null-pattern similarity analysis and the HMM algorithm is presented. The results can establish the foundation for developing an acoustic identification database for sperm whales and possibly other deep-diving marine mammals that would be difficult to observe visually. [Research supported by ONR.]
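
    The identification step can be sketched with the hmmlearn package as a stand-in for the study's own implementation; the per-click "spectra" below are random placeholders rather than real null-pattern features, and the three-state Gaussian HMM per individual simply follows the description above.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      rng = np.random.default_rng(5)

      def fake_click_spectra(offset, n_clicks=200, n_bands=12):
          """Stand-in for per-click log spectra (rows = clicks, columns = frequency bands)."""
          return rng.standard_normal((n_clicks, n_bands)) + offset

      # One training sequence per hypothetical individual.
      train = {"whale_A": fake_click_spectra(0.0), "whale_B": fake_click_spectra(1.5)}

      # Train a three-state Gaussian HMM on each individual's click sequence.
      models = {}
      for name, X in train.items():
          m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
          m.fit(X)
          models[name] = m

      # Classify an unlabeled click sequence by the model giving the highest log-likelihood.
      test = fake_click_spectra(1.5, n_clicks=50)
      scores = {name: m.score(test) for name, m in models.items()}
      print("predicted individual:", max(scores, key=scores.get))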

  4. Linking amphibian call structure to the environment: the interplay between phenotypic flexibility and individual attributes.

    PubMed

    Ziegler, Lucía; Arim, Matías; Narins, Peter M

    2011-05-01

    The structure of the environment surrounding signal emission produces different patterns of degradation and attenuation. The expected adjustment of calls to ensure signal transmission in an environment was formalized in the acoustic adaptation hypothesis. Within this framework, most studies considered anuran calls as fixed attributes determined by local adaptations. However, variability in vocalizations as a product of phenotypic expression has also been reported. Empirical evidence supporting the association between environment and call structure has been inconsistent, particularly in anurans. Here, we identify a plausible causal structure connecting environment, individual attributes, and temporal and spectral adjustments as direct or indirect determinants of the observed variation in call attributes of the frog Hypsiboas pulchellus. For that purpose, we recorded the calls of 40 males in the field, together with vegetation density and other environmental descriptors of the calling site. Path analysis revealed a strong effect of habitat structure on the temporal parameters of the call, and an effect of site temperature conditioning the size of organisms calling at each site and thus indirectly affecting the dominant frequency of the call. Experimental habitat modification with a styrofoam enclosure yielded results consistent with field observations, highlighting the potential role of call flexibility on detected call patterns. Both, experimental and correlative results indicate the need to incorporate the so far poorly considered role of phenotypic plasticity in the complex connection between environmental structure and individual call attributes.

  5. Singing with reduced air sac volume causes uniform decrease in airflow and sound amplitude in the zebra finch.

    PubMed

    Plummer, Emily Megan; Goller, Franz

    2008-01-01

    Song of the zebra finch (Taeniopygia guttata) is a complex temporal sequence generated by a drastic change to the regular oscillations of the normal respiratory pattern. It is not known how respiratory functions, such as supply of air volume and gas exchange, are controlled during song. To understand the integration between respiration and song, we manipulated respiration during song by injecting inert dental medium into the air sacs. Increased respiratory rate after injections indicates that the reduction of air affected quiet respiration and that birds compensated for the reduced air volume. During song, air sac pressure, tracheal airflow and sound amplitude decreased substantially with each injection. This decrease was consistently present during each expiratory pulse of the song motif irrespective of the air volume used. Few changes to the temporal pattern of song were noted, such as the increased duration of a minibreath in one bird and the decrease in duration of a long syllable in another bird. Despite the drastic reduction in air sac pressure, airflow and sound amplitude, no increase in abdominal muscle activity was seen. This suggests that during song, birds do not compensate for the reduced physiological or acoustic parameters. Neither somatosensory nor auditory feedback mechanisms appear to effect a correction in expiratory effort to compensate for reduced air sac pressure and sound amplitude.

  6. The vocal monotony of monogamy

    NASA Astrophysics Data System (ADS)

    Thomas, Jeanette

    2003-04-01

    There are four phocids in waters around Antarctica: Weddell, leopard, crabeater, and Ross seals. These four species provide a unique opportunity to examine underwater vocal behavior in species sharing the same ecosystem. Some species live in pack ice, others in fast ice, but all are restricted to the Antarctic or sub-Antarctic islands. All breed and produce vocalizations under water. Social systems range from polygyny in large breeding colonies, to serial monogamy, to solitary species. The type of mating system influences the number of underwater vocalizations in the repertoire, with monogamous seals producing only a single call, polygynous species producing up to 35 calls, and solitary species an intermediate number of about 10 calls. Breeding occurs during the austral spring and each species carves out an acoustic niche for communicating, with species using different frequency ranges, temporal patterns, and amplitude changes to convey their species-specific calls and presumably reduce acoustic competition. Some species exhibit geographic variations in their vocalizations around the continent, which may reflect discrete breeding populations. Some seals become silent during a vulnerable time of predation by killer whales, perhaps to avoid detection. Overall, vocalizations of these seals exhibit adaptive characteristics that reflect the co-evolution among species in the same ecosystem.

  7. Interactions between commercial fishing and walleye pollock aggregations

    NASA Astrophysics Data System (ADS)

    Stienessen, Sarah; Wilson, Chris D.; Hollowed, Anne B.

    2002-05-01

    Scientists with the Alaska Fisheries Science Center are conducting a multiyear field experiment off the eastern side of Kodiak Island in the Gulf of Alaska to determine whether commercial fishing activities significantly affect the distribution and abundance of walleye pollock (Theragra chalcogramma), an important prey species of endangered Steller sea lions (Eumetopias jubatus). In support of this activity, spatio-temporal patterns were described for pollock aggregations. Acoustic-trawl surveys were conducted in two adjacent submarine troughs in August 2001. One trough served as a control site where fishing was prohibited and the other as a treatment site where fishing was allowed. Software, which included patch recognition algorithms, was used to extract acoustic data and generate patch size and shape-related variables to analyze fish aggregations. Important patch related descriptors included skewness, kurtosis, length, height, and density. Estimates of patch fractal dimensions, which relate school perimeter to school area, were less for juvenile than for adult aggregations, indicating a more complex school shape for adults. Comparisons of other patch descriptors were made between troughs and in the presence and absence of the fishery to determine whether trends in pollock aggregation dynamics were a result of the fishery or of naturally occurring events.
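
    One common way to compute a perimeter-area fractal dimension for extracted patches (the exact estimator used in the study is not given here) follows Lovejoy's relation D = 2 ln(P/4) / ln(A); the patch perimeters and areas below are hypothetical.

      import numpy as np

      # Hypothetical patch perimeters (m) and areas (m^2) from a patch-recognition step.
      perimeter = np.array([120.0, 260.0, 310.0, 540.0, 800.0])
      area = np.array([700.0, 2400.0, 3100.0, 8200.0, 15500.0])

      # Perimeter-area fractal dimension: D is near 1 for smooth, near-circular patches and
      # approaches 2 for highly convoluted outlines.
      D = 2.0 * np.log(perimeter / 4.0) / np.log(area)
      for p, a, d in zip(perimeter, area, D):
          print(f"P = {p:6.1f} m, A = {a:8.1f} m^2, D = {d:.2f}")
      print(f"mean fractal dimension: {D.mean():.2f}")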

  8. Long-term passive acoustic recordings track the changing distribution of North Atlantic right whales (Eubalaena glacialis) from 2004 to 2014.

    PubMed

    Davis, Genevieve E; Baumgartner, Mark F; Bonnell, Julianne M; Bell, Joel; Berchok, Catherine; Bort Thornton, Jacqueline; Brault, Solange; Buchanan, Gary; Charif, Russell A; Cholewiak, Danielle; Clark, Christopher W; Corkeron, Peter; Delarue, Julien; Dudzinski, Kathleen; Hatch, Leila; Hildebrand, John; Hodge, Lynne; Klinck, Holger; Kraus, Scott; Martin, Bruce; Mellinger, David K; Moors-Murphy, Hilary; Nieukirk, Sharon; Nowacek, Douglas P; Parks, Susan; Read, Andrew J; Rice, Aaron N; Risch, Denise; Širović, Ana; Soldevilla, Melissa; Stafford, Kate; Stanistreet, Joy E; Summers, Erin; Todd, Sean; Warde, Ann; Van Parijs, Sofie M

    2017-10-18

    Given new distribution patterns of the endangered North Atlantic right whale (NARW; Eubalaena glacialis) population in recent years, an improved understanding of spatio-temporal movements are imperative for the conservation of this species. While so far visual data have provided most information on NARW movements, passive acoustic monitoring (PAM) was used in this study in order to better capture year-round NARW presence. This project used PAM data from 2004 to 2014 collected by 19 organizations throughout the western North Atlantic Ocean. Overall, data from 324 recorders (35,600 days) were processed and analyzed using a classification and detection system. Results highlight almost year-round habitat use of the western North Atlantic Ocean, with a decrease in detections in waters off Cape Hatteras, North Carolina in summer and fall. Data collected post 2010 showed an increased NARW presence in the mid-Atlantic region and a simultaneous decrease in the northern Gulf of Maine. In addition, NARWs were widely distributed across most regions throughout winter months. This study demonstrates that a large-scale analysis of PAM data provides significant value to understanding and tracking shifts in large whale movements over long time scales.

  9. Baleen whale infrasonic sounds: Natural variability and function

    NASA Astrophysics Data System (ADS)

    Clark, Christopher W.

    2004-05-01

    Blue and fin whales (Balaenoptera musculus and B. physalus) produce very intense, long, patterned sequences of infrasonic sounds. The acoustic characteristics of these sounds suggest strong selection for signals optimized for very long-range propagation in the deep ocean as first hypothesized by Payne and Webb in 1971. This hypothesis has been partially validated by very long-range detections using hydrophone arrays in deep water. Humpback songs recorded in deep water contain units in the 20-100 Hz range, and these relatively simple song components are detectable out to many hundreds of miles. The mid-winter peak in the occurrence of 20-Hz fin whale sounds led Watkins to hypothesize a reproductive function similar to humpback (Megaptera novaeangliae) song, and by default this function has been extended to blue whale songs. More recent evidence shows that blue and fin whales produce infrasonic calls in high latitudes during the feeding season, and that singing is associated with areas of high productivity where females congregate to feed. Acoustic sampling over broad spatial and temporal scales for baleen species is revealing higher geographic and seasonal variability in the low-frequency vocal behaviors than previously reported, suggesting that present explanations for baleen whale sounds are too simplistic.

  10. Non-LTE radiating acoustic shocks and Ca II K2V bright points

    NASA Technical Reports Server (NTRS)

    Carlsson, Mats; Stein, Robert F.

    1992-01-01

    We present, for the first time, a self-consistent solution of the time-dependent 1D equations of non-LTE radiation hydrodynamics in solar chromospheric conditions. The vertical propagation of sinusoidal acoustic waves with periods of 30, 180, and 300 s is calculated. We find that departures from LTE and ionization recombination determine the temperature profiles of the shocks that develop. In LTE almost all the thermal energy goes into ionization, so the temperature rise is very small. In non-LTE, the finite transition rates delay the ionization to behind the shock front. The compression thus goes into thermal energy at the shock front leading to a high temperature amplitude. Further behind the shock front, the delayed ionization removes energy from the thermal pool, which reduces the temperature, producing a temperature spike. The 180 s waves reproduce the observed temporal changes in the calcium K line profiles quite well. The observed wing brightening pattern, the violet/red peak asymmetry and the observed line center behavior are all well reproduced. The short-period waves and the 5 minute period waves fail especially in reproducing the observed behavior of the wings.

  11. Acoustically Evoked Different Vibration Pattern Across the Width of the Cochlea Partition

    NASA Astrophysics Data System (ADS)

    Zha, Dingjun; Chen, Fangyi; Fridberger, Anders; Choudhury, Niloy; Nuttall, Alfred

    2011-11-01

    Using optical low coherence interferometry, the acoustically evoked vibration patterns of the basilar membrane (BM) and reticular lamina (RL) in the first turn of living guinea pigs were measured as a function of the radial location. It was demonstrated that the vibration of the BM varied widely in amplitude, but little in phase across the width of the partition, while the RL had a different vibration pattern compared with the BM.

  12. A study of the acoustic-optic effect in nematics

    NASA Astrophysics Data System (ADS)

    Hayes, C. F.

    1980-12-01

    The program of this contract has been to study the acousto-optic effect which occurs in nematic liquid crystals when excited by acoustic waves. Both theory and practical application are presented. The hydrodynamic equations governing the streaming were solved, yielding a solution for the magnitude of the fluid speed and the flow pattern for a small disc-shaped liquid crystal. A sample, doped with grains, was used to test the solution experimentally. A series of cells was constructed and tested which, in fact, showed that an acoustic wavefront pattern can be visualized with this technique. During the second year of the contract we developed and tested a mathematical model which prescribes how a cell should be constructed in terms of: the densities of the cell walls, liquid crystal, and surrounding fluids; the thickness of the cell walls and liquid crystal layer; the acoustic speeds in the cell wall (shear and longitudinal), liquid crystal, and surrounding fluids; the acoustic frequency; and the incident acoustic beam angle. Cells were also constructed and tested in which an electric field could be applied simultaneously with the acoustic wave in such a way that the sensitivity of the cell to the acoustic field could be adjusted.

  13. Seasonal bat activity related to insect emergence at three temperate lakes.

    PubMed

    Salvarina, Ioanna; Gravier, Dorian; Rothhaupt, Karl-Otto

    2018-04-01

    Knowledge of aquatic food resources entering terrestrial systems is important for food web studies and conservation planning. Bats, among other terrestrial consumers, often profit from aquatic insect emergence and their activity might be closely related to such events. However, there is a lack of studies which monitor bat activity simultaneously with aquatic insect emergence, especially from lakes. Thus, our aim was to understand the relationship between insect emergence and bat activity, and investigate whether there is a general spatial or seasonal pattern at lakeshores. We assessed whole-night bat activity using acoustic monitoring and caught emerging and aerial flying insects at three different lakes through three seasons. We predicted that insect availability and seasonality explain the variation in bat activity, independent of the lake size and characteristics. Spatial (between lakes) differences of bat activity were stronger than temporal (seasonal) differences. Bat activity did not always correlate with insect emergence, probably because other factors, such as habitat characteristics, or bats' energy requirements, play an important role as well. Aerial flying insects explained bat activity better than the emerged aquatic insects in the lake with the lowest insect emergence. Bats were active throughout the night with some activity peaks, and the pattern of their activity also differed among lakes and seasons. Lakes are important habitats for bats, as they support diverse bat communities and activity throughout the night and the year when bats are active. Our study highlights that there are spatial and temporal differences in bat activity and its hourly nocturnal pattern, which should be considered when investigating aquatic-terrestrial interactions or designing conservation and monitoring plans.

  14. The North Pacific Acoustic Laboratory deep-water acoustic propagation experiments in the Philippine Sea.

    PubMed

    Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Dushaw, Brian D; Baggeroer, Arthur B; Heaney, Kevin D; D'Spain, Gerald L; Colosi, John A; Stephen, Ralph A; Kemp, John N; Howe, Bruce M; Van Uffelen, Lora J; Wage, Kathleen E

    2013-10-01

    A series of experiments conducted in the Philippine Sea during 2009-2011 investigated deep-water acoustic propagation and ambient noise in this oceanographically and geologically complex region: (i) the 2009 North Pacific Acoustic Laboratory (NPAL) Pilot Study/Engineering Test, (ii) the 2010-2011 NPAL Philippine Sea Experiment, and (iii) the Ocean Bottom Seismometer Augmentation of the 2010-2011 NPAL Philippine Sea Experiment. The experimental goals included (a) understanding the impacts of fronts, eddies, and internal tides on acoustic propagation, (b) determining whether acoustic methods, together with other measurements and ocean modeling, can yield estimates of the time-evolving ocean state useful for making improved acoustic predictions, (c) improving our understanding of the physics of scattering by internal waves and spice, (d) characterizing the depth dependence and temporal variability of ambient noise, and (e) understanding the relationship between the acoustic field in the water column and the seismic field in the seafloor. In these experiments, moored and ship-suspended low-frequency acoustic sources transmitted to a newly developed distributed vertical line array receiver capable of spanning the water column in the deep ocean. The acoustic transmissions and ambient noise were also recorded by a towed hydrophone array, by acoustic Seagliders, and by ocean bottom seismometers.

  15. Time-resolved measurement of global synchronization in the dust acoustic wave

    NASA Astrophysics Data System (ADS)

    Williams, J. D.

    2014-10-01

    A spatially and temporally resolved measurement of the synchronization of the naturally occurring dust acoustic wave to an external drive, and of the relaxation from the driven wave mode back to the naturally occurring wave mode, is presented. It is observed that the wave synchronizes to the external drive in a distinct time-dependent fashion, while there is an immediate loss of synchronization when the external modulation is discontinued.

  16. The acoustic features of human laughter

    NASA Astrophysics Data System (ADS)

    Bachorowski, Jo-Anne; Owren, Michael J.

    2002-05-01

    Remarkably little is known about the acoustic features of laughter, despite laughter's ubiquitous role in human vocal communication. Outcomes are described for 1024 naturally produced laugh bouts recorded from 97 young adults. Acoustic analysis focused on temporal characteristics, production modes, source- and filter-related effects, and indexical cues to laugher sex and individual identity. The results indicate that laughter is a remarkably complex vocal signal, with evident diversity in both production modes and fundamental frequency characteristics. Also of interest was finding a consistent lack of articulation effects in supralaryngeal filtering. Outcomes are compared to previously advanced hypotheses and conjectures about this species-typical vocal signal.

  17. How do auditory cortex neurons represent communication sounds?

    PubMed

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Diel patterns and temporal trends in spawning activities of Robust Redhorse and River Redhorse in Georgia, assessed using passive acoustic monitoring

    USGS Publications Warehouse

    Straight, Carrie A.; Jackson, C. Rhett; Freeman, Byron J.; Freeman, Mary C.

    2015-01-01

    The conservation of imperiled species depends upon understanding threats to the species at each stage of its life history. In the case of many imperiled migratory fishes, understanding how timing and environmental influences affect reproductive behavior could provide managers with information critical for species conservation. We used passive acoustic recorders to document spawning activities for two large-bodied catostomids (Robust Redhorse Moxostoma robustum in the Savannah and Broad rivers, Georgia, and River Redhorse M. carinatum in the Coosawattee River, Georgia) in relation to time of day, water temperature, discharge variation, moonlight, and weather. Robust Redhorse spawning activities in the Savannah and Broad rivers were more frequent at night or in the early morning (0100–0400 hours and 0800–1000 hours, respectively) and less frequent near midday (1300 hours). Spawning attempts in the Savannah and Broad rivers increased over a 3–4-d period and then declined. River Redhorse spawning activities in the Coosawattee River peaked on the first day of recording and declined over four subsequent days; diel patterns were less discernible, although moon illumination was positively associated with spawning rates, which was also observed for Robust Redhorses in the Savannah River. Spawning activity in the Savannah and Broad rivers was negatively associated with water temperature, and spawning activity increased in association with cloud cover in the Savannah River. A large variation in discharge was only measured in the flow-regulated Savannah River and was not associated with spawning attempts. To our knowledge, this is the first study to show diel and multiday patterns in spawning activities for any Moxostoma species. These patterns and relationships between the environment and spawning activities could provide important information for the management of these species downstream of hydropower facilities.

  19. Representing the Hyphen in Action-Effect Associations: Automatic Acquisition and Bidirectional Retrieval of Action-Effect Intervals

    ERIC Educational Resources Information Center

    Dignath, David; Pfister, Roland; Eder, Andreas B.; Kiesel, Andrea; Kunde, Wilfried

    2014-01-01

    We examined whether a temporal interval between an action and its sensory effect is integrated in the cognitive action structure in a bidirectional fashion. In 3 experiments, participants first experienced that actions produced specific acoustic effects (high and low tones) that occurred temporally delayed after their actions. In a following test…

  20. If you can't take the room out of your mix, you can't take your mix out of the room!

    NASA Astrophysics Data System (ADS)

    D'Antonio, Peter

    2003-04-01

    The key issue in any recording studio is transferability: the ability of a mix to transfer to other listening environments outside the studio. For a mix to faithfully transfer to a wide range of acoustical environments, it must be created in a room with minimal acoustic distortion. The music industry is very aware of electronic distortion; however, the audible effects of acoustic distortion are only now being fully appreciated. The four forms of acoustic distortion are modal coupling, speaker boundary interference response, comb filtering, and poor diffusion (a sparse spatial and temporal reflection density). These phenomena will be explained and methods to minimize them will be suggested.

  1. Impedance matched joined drill pipe for improved acoustic transmission

    DOEpatents

    Moss, William C.

    2000-01-01

    An impedance-matched jointed drill pipe for improved acoustic transmission. A passive means and method that maximizes the amplitude and minimizes the temporal dispersion of acoustic signals sent through a drill string, for use in a measurement-while-drilling telemetry system. The improvement in signal transmission is accomplished by replacing the standard joints in a drill string with joints constructed of a material that is acoustically impedance matched to the end of the drill pipe to which it is connected. This provides an improvement in the measurement-while-drilling technique, which can be utilized for well logging, directional drilling, drilling dynamics, and gamma-ray spectroscopy while drilling post-shot boreholes.
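
    Although the patent abstract gives no numbers, the idea of acoustic impedance matching can be illustrated with the standard normal-incidence reflection coefficient between two media. The sketch below uses hypothetical material values (not taken from the patent) to show how closely matched characteristic impedances suppress reflections at a pipe-joint interface.

```python
# Hedged illustration: plane-wave reflection at a pipe/joint interface.
# Material values are illustrative placeholders, not taken from the patent.

def reflection_coefficient(rho1, c1, rho2, c2):
    """Normal-incidence pressure reflection coefficient between two media."""
    z1, z2 = rho1 * c1, rho2 * c2          # characteristic acoustic impedances
    return (z2 - z1) / (z2 + z1)

# Drill-pipe body (steel-like) vs. a mismatched joint and a near-matched joint.
pipe             = (7850.0, 5100.0)        # density (kg/m^3), longitudinal speed (m/s)
joint_mismatched = (8900.0, 3200.0)        # hypothetical heavier, slower joint material
joint_matched    = (7200.0, 5560.0)        # hypothetical material with rho*c close to the pipe's

for name, joint in [("mismatched", joint_mismatched), ("matched", joint_matched)]:
    r = reflection_coefficient(*pipe, *joint)
    print(f"{name:10s} joint: |R| = {abs(r):.3f}, reflected energy fraction = {r**2:.2%}")
```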

  2. Acoustic vibrations contribute to the diffuse scatter produced by ribosome crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polikanov, Yury S.; Moore, Peter B.

    2015-09-26

    The diffuse scattering pattern produced by frozen crystals of the 70S ribosome from Thermus thermophilus is as highly structured as it would be if it resulted entirely from domain-scale motions within these particles. However, the qualitative properties of the scattering pattern suggest that acoustic displacements of the crystal lattice make a major contribution to it.

  3. Competing streams at the cocktail party: Exploring the mechanisms of attention and temporal integration

    PubMed Central

    Xiang, Juanjuan; Simon, Jonathan; Elhilali, Mounya

    2010-01-01

    Processing of complex acoustic scenes depends critically on the temporal integration of sensory information as sounds evolve naturally over time. It has been previously speculated that this process is guided by both innate mechanisms of temporal processing in the auditory system and top-down mechanisms of attention, and possibly other schema-based processes. In an effort to unravel the neural underpinnings of these processes and their role in scene analysis, we combine Magnetoencephalography (MEG) with behavioral measures in humans in the context of polyrhythmic tone sequences. While maintaining unchanged sensory input, we manipulate subjects’ attention to one of two competing rhythmic streams in the same sequence. The results reveal that the neural representation of the attended rhythm is significantly enhanced both in its steady-state power and spatial phase coherence relative to its unattended state, closely correlating with its perceptual detectability for each listener. Interestingly, the data reveal a differential efficiency of rhythmic rates on the order of a few hertz during the streaming process, closely following known neural and behavioral measures of temporal modulation sensitivity in the auditory system. These findings establish a direct link between known temporal modulation tuning in the auditory system (particularly at the level of auditory cortex) and the temporal integration of perceptual features in a complex acoustic scene, as mediated by processes of attention. PMID:20826671

  4. Laser Imaging of Airborne Acoustic Emission by Nonlinear Defects

    NASA Astrophysics Data System (ADS)

    Solodov, Igor; Döring, Daniel; Busse, Gerd

    2008-06-01

    Strongly nonlinear vibrations of near-surface fractured defects driven by an elastic wave radiate acoustic energy into adjacent air in a wide frequency range. The variations of pressure in the emitted airborne waves change the refractive index of air thus providing an acoustooptic interaction with a collimated laser beam. Such an air-coupled vibrometry (ACV) is proposed for detecting and imaging of acoustic radiation of nonlinear spectral components by cracked defects. The photoelastic relation in air is used to derive induced phase modulation of laser light in the heterodyne interferometer setup. The sensitivity of the scanning ACV to different spatial components of the acoustic radiation is analyzed. The animated airborne emission patterns are visualized for the higher harmonic and frequency mixing fields radiated by planar defects. The results confirm a high localization of the nonlinear acoustic emission around the defects and complicated directivity patterns appreciably different from those observed for fundamental frequencies.

  5. Contactless microparticle control via ultrahigh frequency needle type single beam acoustic tweezers

    NASA Astrophysics Data System (ADS)

    Fei, Chunlong; Li, Ying; Zhu, Benpeng; Chiu, Chi Tat; Chen, Zeyu; Li, Di; Yang, Yintang; Kirk Shung, K.; Zhou, Qifa

    2016-10-01

    This paper reports on contactless microparticle manipulation including single-particle controlled trapping, transportation, and patterning via single beam acoustic radiation forces. As the core component of single beam acoustic tweezers, a needle type ultrasonic transducer was designed and fabricated with a center frequency higher than 300 MHz and a -6 dB fractional bandwidth as large as 64%. The transducer was built for an f-number close to 1.0, and the desired focal depth was achieved by press-focusing technology. Its lateral resolution was measured to be better than 6.7 μm by scanning a 4 μm tungsten wire target. The tightly focused acoustic beam produced by the transducer was shown to be capable of manipulating individual microspheres as small as 3 μm. "USC" patterning with 15 μm microspheres was demonstrated without affecting nearby microspheres. These promising results may expand the applications of single beam acoustic tweezers in biomedical and biophysical research.

  6. An Adaptive Multiscale Finite Element Method for Large Scale Simulations

    DTIC Science & Technology

    2015-09-28

    Hypersonic vehicles are subjected to extreme acoustic, thermal, and mechanical loading with strong spatial and temporal gradients and for extended periods of time. Long duration, 3-D simulations of the non-linear response of these vehicles are prohibitively expensive using...

  7. First Detection of the Acoustic Oscillation Phase Shift Expected from the Cosmic Neutrino Background.

    PubMed

    Follin, Brent; Knox, Lloyd; Millea, Marius; Pan, Zhen

    2015-08-28

    The unimpeded relativistic propagation of cosmological neutrinos prior to recombination of the baryon-photon plasma alters gravitational potentials and therefore the details of the time-dependent gravitational driving of acoustic oscillations. We report here a first detection of the resulting shifts in the temporal phase of the oscillations, which we infer from their signature in the cosmic microwave background temperature power spectrum.

  8. Response of the human tympanic membrane to transient acoustic and mechanical stimuli: Preliminary results.

    PubMed

    Razavi, Payam; Ravicz, Michael E; Dobrev, Ivo; Cheng, Jeffrey Tao; Furlong, Cosme; Rosowski, John J

    2016-10-01

    The response of the tympanic membrane (TM) to transient environmental sounds and the contributions of different parts of the TM to middle-ear sound transmission were investigated by measuring the TM response to global transients (acoustic clicks) and to local transients (mechanical impulses) applied to the umbo and various locations on the TM. A lightly-fixed human temporal bone was prepared by removing the ear canal, inner ear, and stapes, leaving the incus, malleus, and TM intact. Motion of nearly the entire TM was measured by a digital holography system with a high speed camera at a rate of 42 000 frames per second, giving a temporal resolution of <24 μs for the duration of the TM response. The entire TM responded nearly instantaneously to acoustic transient stimuli, though the peak displacement and decay time constant varied with location. With local mechanical transients, the TM responded first locally at the site of stimulation, and the response spread approximately symmetrically and circumferentially around the umbo and manubrium. Acoustic and mechanical transients provide distinct and complementary stimuli for the study of TM response. Spatial variations in decay and rate of spread of response imply local variations in TM stiffness, mass, and damping. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Response of the human tympanic membrane to transient acoustic and mechanical stimuli: Preliminary results

    PubMed Central

    Razavi, Payam; Ravicz, Michael E.; Dobrev, Ivo; Cheng, Jeffrey Tao; Furlong, Cosme; Rosowski, John J.

    2016-01-01

    The response of the tympanic membrane (TM) to transient environmental sounds and the contributions of different parts of the TM to middle-ear sound transmission were investigated by measuring the TM response to global transients (acoustic clicks) and to local transients (mechanical impulses) applied to the umbo and various locations on the TM. A lightly-fixed human temporal bone was prepared by removing the ear canal, inner ear, and stapes, leaving the incus, malleus, and TM intact. Motion of nearly the entire TM was measured by a digital holography system with a high speed camera at a rate of 42 000 frames per second, giving a temporal resolution of <24 μs for the duration of the TM response. The entire TM responded nearly instantaneously to acoustic transient stimuli, though the peak displacement and decay time constant varied with location. With local mechanical transients, the TM responded first locally at the site of stimulation, and the response spread approximately symmetrically and circumferentially around the umbo and manubrium. Acoustic and mechanical transients provide distinct and complementary stimuli for the study of TM response. Spatial variations in decay and rate of spread of response imply local variations in TM stiffness, mass, and damping. PMID:26880098

  10. A new malleostapedotomy prosthesis. Experimental analysis by laser doppler vibrometer in fresh cadaver temporal bones.

    PubMed

    Vallejo, Luis A; Manzano, María T; Hidalgo, Antonio; Hernández, Alberto; Sabas, Juan; Lara, Hugo; Gil-Carcedo, Elisa; Herrero, David

    One of the problems with total ossicular replacement prostheses is their stability. Prosthesis dislocations and extrusions are common in middle ear surgery. This is due to variations in endo-tympanic pressure as well as design defects. The design of this new prosthesis reduces this problem by being joined directly to the malleus handle. The aim of this study is to confirm adequate acoustic-mechanical behaviour in the fresh cadaver middle ear of a new total ossicular replacement prosthesis, designed using the finite element method. Using the laser Doppler vibrometer, we analysed the acoustic-mechanical behaviour of a new total ossicular replacement prosthesis in the human middle ear using 10 temporal bones from fresh cadavers. The transfer function of the ears in which we implanted the new prosthesis was superimposed on that of the non-manipulated ear. This suggests optimum acoustic-mechanical behaviour. The titanium prosthesis analysed in this study demonstrated optimum acoustic-mechanical behaviour. Together with its ease of implantation and post-surgical stability, these factors make it a prosthesis to be kept in mind in ossicular reconstruction. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.

  11. Temporally selective attention supports speech processing in 3- to 5-year-old children.

    PubMed

    Astheimer, Lori B; Sanders, Lisa D

    2012-01-01

    Recent event-related potential (ERP) evidence demonstrates that adults employ temporally selective attention to preferentially process the initial portions of words in continuous speech. Doing so is an effective listening strategy since word-initial segments are highly informative. Although the development of this process remains unexplored, directing attention to word onsets may be important for speech processing in young children who would otherwise be overwhelmed by the rapidly changing acoustic signals that constitute speech. We examined the use of temporally selective attention in 3- to 5-year-old children listening to stories by comparing ERPs elicited by attention probes presented at four acoustically matched times relative to word onsets: concurrently with a word onset, 100 ms before, 100 ms after, and at random control times. By 80 ms, probes presented at and after word onsets elicited a larger negativity than probes presented before word onsets or at control times. The latency and distribution of this effect is similar to temporally and spatially selective attention effects measured in adults and, despite differences in polarity, spatially selective attention effects measured in children. These results indicate that, like adults, preschool aged children modulate temporally selective attention to preferentially process the initial portions of words in continuous speech. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Temporal modulations in speech and music.

    PubMed

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and their neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
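
    As a rough illustration of the analysis described above, the sketch below estimates a broadband temporal modulation spectrum from a waveform's intensity envelope. It is a generic, simplified approach under the stated assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of a temporal modulation spectrum: spectrum of the slow
# intensity envelope of a waveform. Generic illustration, not the authors' code.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulation_spectrum(x, fs, env_cutoff=64.0, f_max=32.0):
    env = np.abs(hilbert(x))                      # broadband intensity envelope
    b, a = butter(4, env_cutoff / (fs / 2.0), btype="low")
    env = filtfilt(b, a, env)                     # keep only slow fluctuations
    env = env - env.mean()
    spec = np.abs(np.fft.rfft(env * np.hanning(len(env)))) ** 2
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    keep = freqs <= f_max                         # the 0.25-32 Hz range of interest
    return freqs[keep], spec[keep]

# Example with synthetic input: noise amplitude-modulated at ~5 Hz (speech-like rate).
fs = 16000
t = np.arange(0, 10.0, 1.0 / fs)
x = (1.0 + 0.8 * np.sin(2 * np.pi * 5.0 * t)) * np.random.randn(len(t))
freqs, spec = modulation_spectrum(x, fs)
mask = freqs > 0.25
print("peak modulation frequency ~", round(freqs[mask][np.argmax(spec[mask])], 2), "Hz")
```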

  13. Acoustic methods for cavitation mapping in biomedical applications

    NASA Astrophysics Data System (ADS)

    Wan, M.; Xu, S.; Ding, T.; Hu, H.; Liu, R.; Bai, C.; Lu, S.

    2015-12-01

    In recent years, cavitation has been increasingly utilized in a wide range of applications in the biomedical field. Monitoring the spatial-temporal evolution of cavitation bubbles is of great significance for efficiency and safety in biomedical applications. In this paper, several acoustic methods for cavitation mapping, proposed or modified on the basis of existing work, will be presented. The proposed novel ultrasound line-by-line/plane-by-plane method can depict the cavitation bubble distribution with high spatial and temporal resolution and may be developed as a potential standard 2D/3D cavitation field mapping method. The modified ultrafast active cavitation mapping, based upon plane wave transmission and reception as well as bubble wavelet and pulse inversion techniques, can apparently enhance the cavitation-to-tissue ratio in tissue and further assist in monitoring cavitation-mediated therapy with good spatial and temporal resolution. The methods presented in this paper will be a foundation to promote the research and development of cavitation imaging in non-transparent media.
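
    The pulse-inversion idea mentioned above can be illustrated with a toy numerical example (a generic sketch, not the authors' implementation): summing the echoes from a pulse and its polarity-inverted copy cancels the linear response and retains the even-order, bubble-related components.

```python
# Toy pulse-inversion sketch: a memoryless quadratic "scatterer" stands in for
# cavitation bubbles; linear tissue-like scattering cancels in the summed echoes.
import numpy as np

fs, f0 = 50e6, 5e6                       # sample rate and transmit frequency (Hz)
t = np.arange(0, 2e-6, 1 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(len(t))

def echo(tx, linear=1.0, quadratic=0.3):
    # Hypothetical scatterer response: linear term + quadratic (even-harmonic) term.
    return linear * tx + quadratic * tx ** 2

e_pos = echo(pulse)                      # echo from the normal pulse
e_neg = echo(-pulse)                     # echo from the polarity-inverted pulse

summed = e_pos + e_neg                   # linear parts cancel, 2*quadratic remains
print("residual linear energy:", np.sum((summed - 2 * 0.3 * pulse ** 2) ** 2))
```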

  14. Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    PubMed Central

    Giordano, Bruno L; Ince, Robin A A; Gross, Joachim; Schyns, Philippe G; Panzeri, Stefano; Kayser, Christoph

    2017-01-01

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI: http://dx.doi.org/10.7554/eLife.24763.001 PMID:28590903

  15. Portable Multi Hydrophone Array for Field and Laboratory Measurements of Odontocete Acoustic Signals

    DTIC Science & Technology

    2014-09-30

    ... false killer whale. Our analysis will also be conducted with current passive acoustic monitoring detectors and classifiers in order to assess if the ... obtain horizontal and vertical beam patterns of acoustic signals of a false killer whale and a bottlenose dolphin. The data is currently being ...

  16. Shallow water acoustic backscatter and reverberation measurements using a 68-kHz cylindrical array

    NASA Astrophysics Data System (ADS)

    Gallaudet, Timothy Cole

    2001-10-01

    The characterization of high frequency, shallow water acoustic backscatter and reverberation is important because acoustic systems are used in many scientific, commercial, and military applications. The approach taken is to use data collected by the Toroidal Volume Search Sonar (TVSS), a 68 kHz multibeam sonar capable of 360° imaging in a vertical plane perpendicular to its direction of travel. With this unique capability, acoustic backscatter imagery of the seafloor, sea surface, and horizontal and vertical planes in the volume are constructed from data obtained in 200 m deep waters in the Northeastern Gulf of Mexico when the TVSS was towed 78 m below the surface, 735 m astern of a towship. The processed imagery provides a quasi-synoptic characterization of the spatial and temporal structure of boundary and volume acoustic backscatter and reverberation. Diffraction, element patterns, and high sidelobe levels are shown to be the most serious problems affecting cylindrical arrays such as the TVSS, and an amplitude shading method is presented for reducing the peak sidelobe levels of irregular-line and non-coplanar arrays. Errors in the towfish's attitude and motion sensor, and irregularities in the TVSS's transmitted beampattern, produce artifacts in the TVSS-derived bathymetry and seafloor acoustic backscatter imagery. Correction strategies for these problems are described, which are unique in that they use environmental information extracted from both ocean boundaries. Sea surface and volume acoustic backscatter imagery are used to explore and characterize the structure of near-surface bubble clouds, schooling fish, and zooplankton. The simultaneous horizontal and vertical coverage provided by the TVSS is shown to be a primary advantage, motivating further use of multibeam sonars in these applications. Whereas boundary backscatter fluctuations are well described by Weibull, K, and Rayleigh mixture probability distributions, those corresponding to volume backscatter are multi-modal, with the log-normal distribution providing the best fits to the centers of the distributions, and the Rayleigh mixture models providing the best fits to the tails of the distributions. The largest distribution tails result from resonant microbubbles and patchy aggregations of zooplankton. The Office of Naval Research funded this work under ONR-NRL Contract No. N00014-96-1-G9I3.

  17. A comparison of traffic estimates of nocturnal flying animals using radar, thermal imaging, and acoustic recording.

    PubMed

    Horton, Kyle G; Shriver, W Gregory; Buler, Jeffrey J

    2015-03-01

    There are several remote-sensing tools readily available for the study of nocturnally flying animals (e.g., migrating birds), each possessing unique measurement biases. We used three tools (weather surveillance radar, thermal infrared camera, and acoustic recorder) to measure temporal and spatial patterns of nocturnal traffic estimates of flying animals during the spring and fall of 2011 and 2012 in Lewes, Delaware, USA. Our objective was to compare measures among different technologies to better understand their animal detection biases. For radar and thermal imaging, the greatest observed traffic rate tended to occur at, or shortly after, evening twilight, whereas for the acoustic recorder, peak bird flight-calling activity was observed just prior to morning twilight. Comparing traffic rates during the night for all seasons, we found that mean nightly correlations between acoustics and the other two tools were weakly correlated (thermal infrared camera and acoustics, r = 0.004 ± 0.04 SE, n = 100 nights; radar and acoustics, r = 0.14 ± 0.04 SE, n = 101 nights), but highly variable on an individual nightly basis (range = -0.84 to 0.92, range = -0.73 to 0.94). The mean nightly correlations between traffic rates estimated by radar and by thermal infrared camera during the night were more strongly positively correlated (r = 0.39 ± 0.04 SE, n = 125 nights), but also were highly variable for individual nights (range = -0.76 to 0.98). Through comparison with radar data among numerous height intervals, we determined that flying animal height above the ground influenced thermal imaging positively and flight call detections negatively. Moreover, thermal imaging detections decreased with the presence of cloud cover and increased with mean ground flight speed of animals, whereas acoustic detections showed no relationship with cloud cover presence but did decrease with increased flight speed. We found sampling methods to be positively correlated when comparing mean nightly traffic rates across nights. The strength of these correlations generally increased throughout the night, peaking 2-3 hours before morning twilight. Given the convergence of measures by different tools at this time, we suggest that researchers consider sampling flight activity in the hours before morning twilight when differences due to detection biases among sampling tools appear to be minimized.
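
    A minimal sketch of the night-by-night comparison described above might look like the following; the column names and the tiny example table are placeholders for illustration, not the study's data or code.

```python
# Hedged sketch: per-night Pearson correlations between traffic rates from two
# sensing tools. Assumes a table with columns "night", "hour", "radar_rate",
# "acoustic_rate" -- these names are placeholders, not from the study.
import pandas as pd

def nightly_correlations(df):
    rows = []
    for night, g in df.groupby("night"):
        if len(g) >= 3:                                   # need a few hours per night
            r = g["radar_rate"].corr(g["acoustic_rate"])  # Pearson by default
            rows.append({"night": night, "r": r, "n_hours": len(g)})
    return pd.DataFrame(rows)

# Example usage with a tiny fabricated table:
df = pd.DataFrame({
    "night": ["2011-04-01"] * 4 + ["2011-04-02"] * 4,
    "hour": [21, 22, 23, 0] * 2,
    "radar_rate": [5, 20, 35, 30, 2, 8, 15, 12],
    "acoustic_rate": [0, 3, 10, 18, 1, 1, 4, 9],
})
res = nightly_correlations(df)
print(res)
print("mean nightly r =", res["r"].mean(), "+/- SE", res["r"].sem())
```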

  18. Can animal habitat use patterns influence their vulnerability to extreme climate events? An estuarine sportfish case study.

    PubMed

    Boucek, Ross E; Heithaus, Michael R; Santos, Rolando; Stevens, Philip; Rehage, Jennifer S

    2017-10-01

    Global climate forecasts predict changes in the frequency and intensity of extreme climate events (ECEs). The capacity for specific habitat patches within a landscape to modulate stressors from extreme climate events, and animal distribution throughout habitat matrices during events, could influence the degree of population level effects following the passage of ECEs. Here, we ask (i) does the intensity of stressors of an ECE vary across a landscape? And (ii) Do habitat use patterns of a mobile species influence their vulnerability to ECEs? Specifically, we measured how extreme cold spells might interact with temporal variability in habitat use to affect populations of a tropical, estuarine-dependent large-bodied fish Common Snook, within Everglades National Park estuaries (FL US). We examined temperature variation across the estuary during cold disturbances with different degrees of severity, including an extreme cold spell. Second, we quantified Snook distribution patterns when the passage of ECEs is most likely to occur from 2012 to 2016 using passive acoustic tracking. Our results revealed spatial heterogeneity in the intensity of temperature declines during cold disturbances, with some habitats being consistently 3-5°C colder than others. Surprisingly, Snook distributions during periods of greatest risk to experience an extreme cold event varied among years. During the winters of 2013-2014 and 2014-2015 a greater proportion of Snook occurred in the colder habitats, while the winters of 2012-2013 and 2015-2016 featured more Snook observed in the warmest habitats. This study shows that Snook habitat use patterns could influence vulnerability to extreme cold events, however, whether Snook habitat use increases or decreases their vulnerability to disturbance depends on the year, creating temporally dynamic vulnerability. Faunal global change research should address the spatially explicit nature of extreme climate events and animal habitat use patterns to identify potential mechanisms that may influence population effects following these disturbances. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  19. Topography of acute stroke in a sample of 439 right brain damaged patients.

    PubMed

    Sperber, Christoph; Karnath, Hans-Otto

    2016-01-01

    Knowledge of the typical lesion topography and volumetry is important for clinical stroke diagnosis as well as for anatomo-behavioral lesion mapping analyses. Here we used modern lesion analysis techniques to examine the naturally occurring lesion patterns caused by ischemic and by hemorrhagic infarcts in a large, representative acute stroke patient sample. Acute MR and CT imaging of 439 consecutively admitted right-hemispheric stroke patients from a well-defined catchment area suffering from ischemia (n = 367) or hemorrhage (n = 72) were normalized and mapped in reference to stereotaxic anatomical atlases. For ischemic infarcts, highest frequencies of stroke were observed in the insula, putamen, operculum and superior temporal cortex, as well as the inferior and superior occipito-frontal fascicles, superior longitudinal fascicle, uncinate fascicle, and the acoustic radiation. The maximum overlay of hemorrhages was located more posteriorly and more medially, involving posterior areas of the insula, Heschl's gyrus, and putamen. Lesion size was largest in frontal and anterior areas and lowest in subcortical and posterior areas. The large and unbiased sample of stroke patients used in the present study accumulated the different sub-patterns to identify the global topographic and volumetric pattern of right hemisphere stroke in humans.

  20. Frequency and time pattern differences in acoustic signals produced by Prostephanus truncatus (Horn) (Coleoptera: Bostrichidae) and Sitophilus zeamais (Motschulsky) (Coleoptera: Curculionidae) in stored maize

    USDA-ARS?s Scientific Manuscript database

    The acoustic signals emitted by the last stage larval instars and adults of Prostephanus truncatus and Sitophilus zeamais in stored maize were investigated. Analyses were performed to identify brief, 1-10-ms broadband sound impulses of five different frequency patterns produced by larvae and adults,...

  1. Prosodic domain-initial effects on the acoustic structure of vowels

    NASA Astrophysics Data System (ADS)

    Fox, Robert Allen; Jacewicz, Ewa; Salmons, Joseph

    2003-10-01

    In the process of language change, vowels tend to shift in ``chains,'' leading to reorganizations of entire vowel systems over time. A long research tradition has described such patterns, but little is understood about what factors motivate such shifts. Drawing data from changes in progress in American English dialects, the broad hypothesis is tested that changes in vowel systems are related to prosodic organization and stress patterns. Changes in vowels under greater prosodic prominence correlate directly with, and likely underlie, historical patterns of shift. This study examines acoustic characteristics of vowels at initial edges of prosodic domains [Fougeron and Keating, J. Acoust. Soc. Am. 101, 3728-3740 (1997)]. The investigation is restricted to three distinct prosodic levels: utterance (sentence-initial), phonological phrase (strong branch of a foot), and syllable (weak branch of a foot). The predicted changes in vowels /e/ and /ɛ/ in two American English dialects (from Ohio and Wisconsin) are examined along a set of acoustic parameters: duration, formant frequencies (including dynamic changes over time), and fundamental frequency (F0). In addition to traditional methodology which elicits list-like intonation, a design is adapted to examine prosodic patterns in more typical sentence intonations. [Work partially supported by NIDCD R03 DC005560-01.]

  2. Prediction of the Acoustic Field Associated with Instability Wave Source Model for a Compressible Jet

    NASA Technical Reports Server (NTRS)

    Golubev, Vladimir; Mankbadi, Reda R.; Dahl, Milo D.; Kiraly, L. James (Technical Monitor)

    2002-01-01

    This paper provides preliminary results of the study of the acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. The source model is briefly discussed first followed by the analysis of the produced acoustic directivity pattern. Two integral surface techniques are discussed and compared for prediction of the jet acoustic radiation field.

  3. Variability in English vowels is comparable in articulation and acoustics

    PubMed Central

    Noiray, Aude; Iskarous, Khalil; Whalen, D. H.

    2014-01-01

    The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1-F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ε, æ/ in [(h)Vd] sequences were recorded in seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ε/ and /ε-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals for tongue height for /ɪ/-/e/ that were also reflected in acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast. PMID:25101144

  4. Automatic Activation of Phonological Templates for Native but Not Nonnative Phonemes: An Investigation of the Temporal Dynamics of Mu Activation

    ERIC Educational Resources Information Center

    Santos-Oliveira, Daniela Cristina

    2017-01-01

    Models of speech perception suggest a dorsal stream connecting the temporal and inferior parietal lobe with the inferior frontal gyrus. This stream is thought to involve an auditory motor loop that translates acoustic information into motor/articulatory commands and is further influenced by decision making processes that involve maintenance of…

  5. Effect of time-varying tropospheric models on near-regional and regional infrasound propagation as constrained by observational data

    NASA Astrophysics Data System (ADS)

    McKenna, Mihan H.; Stump, Brian W.; Hayward, Chris

    2008-06-01

    The Chulwon Seismo-Acoustic Array (CHNAR) is a regional seismo-acoustic array with co-located seismometers and infrasound microphones on the Korean peninsula. Data from forty-two days over the course of a year between October 1999 and August 2000 were analyzed; 2052 infrasound-only arrivals and 23 seismo-acoustic arrivals were observed over the six week study period. A majority of the signals occur during local working hours, hour 0 to hour 9 UT and appear to be the result of cultural activity located within a 250 km radius. Atmospheric modeling is presented for four sample days during the study period, one in each of November, February, April, and August. Local meteorological data sampled at six hour intervals is needed to accurately model the observed arrivals and this data produced highly temporally variable thermal ducts that propagated infrasound signals within 250 km, matching the temporal variation in the observed arrivals. These ducts change dramatically on the order of hours, and meteorological data from the appropriate sampled time frame was necessary to interpret the observed arrivals.

  6. Acoustic Cluster Therapy: In Vitro and Ex Vivo Measurement of Activated Bubble Size Distribution and Temporal Dynamics.

    PubMed

    Healey, Andrew John; Sontum, Per Christian; Kvåle, Svein; Eriksen, Morten; Bendiksen, Ragnar; Tornes, Audun; Østensen, Jonny

    2016-05-01

    Acoustic cluster technology (ACT) is a two-component, microparticle formulation platform being developed for ultrasound-mediated drug delivery. Sonazoid microbubbles, which have a negative surface charge, are mixed with micron-sized perfluoromethylcyclopentane droplets stabilized with a positively charged surface membrane to form microbubble/microdroplet clusters. On exposure to ultrasound, the oil undergoes a phase change to the gaseous state, generating 20- to 40-μm ACT bubbles. An acoustic transmission technique is used to measure absorption and velocity dispersion of the ACT bubbles. An inversion technique computes bubble size population with temporal resolution of seconds. Bubble populations are measured both in vitro and in vivo after activation within the cardiac chambers of a dog model, with catheter-based flow through an extracorporeal measurement flow chamber. Volume-weighted mean diameter in arterial blood after activation in the left ventricle was 22 μm, with no bubbles >44 μm in diameter. After intravenous administration, 24.4% of the oil is activated in the cardiac chambers. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  7. Tomographic reconstruction of atmospheric turbulence with the use of time-dependent stochastic inversion.

    PubMed

    Vecherin, Sergey N; Ostashev, Vladimir E; Ziemann, A; Wilson, D Keith; Arnold, K; Barth, M

    2007-09-01

    Acoustic travel-time tomography allows one to reconstruct temperature and wind velocity fields in the atmosphere. In a recently published paper [S. Vecherin et al., J. Acoust. Soc. Am. 119, 2579 (2006)], a time-dependent stochastic inversion (TDSI) was developed for the reconstruction of these fields from travel times of sound propagation between sources and receivers in a tomography array. TDSI accounts for the correlation of temperature and wind velocity fluctuations both in space and time and therefore yields more accurate reconstruction of these fields in comparison with algebraic techniques and regular stochastic inversion. To use TDSI, one needs to estimate spatial-temporal covariance functions of temperature and wind velocity fluctuations. In this paper, these spatial-temporal covariance functions are derived for locally frozen turbulence which is a more general concept than a widely used hypothesis of frozen turbulence. The developed theory is applied to reconstruction of temperature and wind velocity fields in the acoustic tomography experiment carried out by University of Leipzig, Germany. The reconstructed temperature and velocity fields are presented and errors in reconstruction of these fields are studied.
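
    For orientation, the core estimator behind a stochastic inversion of this kind can be written in the standard Gauss-Markov form shown below; the notation is generic and only indicates where the assumed spatial-temporal covariance functions enter (it is not reproduced from the cited papers).

```latex
% Generic Gauss-Markov (stochastic-inversion) estimate of the field m from
% travel-time data d; not reproduced from the cited papers.
%   R_md : model-data cross-covariance, built from the assumed space-time
%          covariance functions of temperature and wind fluctuations
%   R_dd : data covariance (signal part plus travel-time measurement noise)
\[
  \hat{\mathbf{m}}(\mathbf{r},t) \;=\; \mathbf{R}_{md}\,\mathbf{R}_{dd}^{-1}\,\mathbf{d},
  \qquad
  \mathbf{R}_{dd} \;=\; \mathbf{R}_{\mathrm{signal}} + \mathbf{R}_{\mathrm{noise}} .
\]
```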

  8. Remote Acoustic Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    Watson, Z.; Hart, M.

    Identification and characterization of orbiting objects that are not spatially resolved are challenging problems for traditional remote sensing methods. Hyper temporal imaging, enabled by fast, low-noise electro-optical detectors is a new sensing modality which may allow the direct detection of acoustic resonances on satellites enabling a new regime of signature and state detection. Detectable signatures may be caused by the oscillations of solar panels, high-gain antennae, or other on-board subsystems driven by thermal gradients, fluctuations in solar radiation pressure, worn reaction wheels, or orbit maneuvers. Herein we present the first hyper-temporal observations of geosynchronous satellites. Data were collected at the Kuiper 1.54-meter telescope in Arizona using an experimental dual-channel imaging instrument that simultaneously measures light in two orthogonally polarized beams at sampling rates extending up to 1 kHz. In these observations, we see evidence of acoustic resonances in the polarization state of satellites. The technique is expected to support object identification and characterization of on-board components and to act as a discriminant between active satellites, debris, and passive bodies.

  9. Spatio-Temporal Dynamics of Field Cricket Calling Behaviour: Implications for Female Mate Search and Mate Choice.

    PubMed

    Nandi, Diptarup; Balakrishnan, Rohini

    2016-01-01

    Amount of calling activity (calling effort) is a strong determinant of male mating success in species such as orthopterans and anurans that use acoustic communication in the context of mating behaviour. While many studies in crickets have investigated the determinants of calling effort, patterns of variability in male calling effort in natural choruses remain largely unexplored. Within-individual variability in calling activity across multiple nights of calling can influence female mate search and mate choice strategies. Moreover, calling site fidelity across multiple nights of calling can also affect the female mate sampling strategy. We therefore investigated the spatio-temporal dynamics of acoustic signaling behaviour in a wild population of the field cricket species Plebeiogryllus guttiventris. We first studied the consistency of calling activity by quantifying variation in male calling effort across multiple nights of calling using repeatability analysis. Callers were inconsistent in their calling effort across nights and did not optimize nightly calling effort to increase their total number of nights spent calling. We also estimated calling site fidelity of males across multiple nights by quantifying movement of callers. Callers frequently changed their calling sites across calling nights with substantial displacement but without any significant directionality. Finally, we investigated trade-offs between within-night calling effort and energetically expensive calling song features such as call intensity and chirp rate. Calling effort was not correlated with any of the calling song features, suggesting that energetically expensive song features do not constrain male calling effort. The two key features of signaling behaviour, calling effort and call intensity, which determine the duration and spatial coverage of the sexual signal, are therefore uncorrelated and function independently.

  10. Spatial Patterns of Inshore Marine Soundscapes.

    PubMed

    McWilliam, Jamie

    2016-01-01

    Passive acoustic monitoring was employed to investigate spatial patterns of soundscapes within a marine reserve. High energy level broadband snaps dominated nearly all habitat soundscapes. Snaps, the principal acoustic feature of soundscapes, were primarily responsible for the observed spatial patterns, and soundscapes appeared to retain a level of compositional and configurational stability. In the presence of high-level broadband snaps, soundscape composition was more influenced by geographic location than habitat type. Future research should focus on investigating the spatial patterns of soundscapes across a wider range of coastal and offshore seascapes containing a variety of distinct ecosystems and habitats.

  11. Identification of Damaged Wheat Kernels and Cracked-Shell Hazelnuts with Impact Acoustics Time-Frequency Patterns

    USDA-ARS?s Scientific Manuscript database

    A new adaptive time-frequency (t-f) analysis and classification procedure is applied to impact acoustic signals for detecting hazelnuts with cracked shells and three types of damaged wheat kernels. Kernels were dropped onto a steel plate, and the resulting impact acoustic signals were recorded with ...

  12. ASTRYD: A new numerical tool for aircraft cabin and environmental noise prediction

    NASA Astrophysics Data System (ADS)

    Berhault, J.-P.; Venet, G.; Clerc, C.

    ASTRYD is an analytical tool, developed originally for underwater applications, that computes acoustic pressure distribution around three-dimensional bodies in closed spaces like aircraft cabins. The program accepts data from measurements or other simulations, processes them in the time domain, and delivers temporal evolutions of the acoustic pressures and accelerations, as well as the radiated/diffracted pressure at arbitrary points located in the external/internal space. A typical aerospace application is prediction of acoustic load on satellites during the launching phase. An aeronautic application is engine noise distribution on a business jet body for prediction of environmental and cabin noise.

  13. Acoustic reciprocity: An extension to spherical harmonics domain.

    PubMed

    Samarasinghe, Prasanga; Abhayapala, Thushara D; Kellermann, Walter

    2017-10-01

    Acoustic reciprocity is a fundamental property of acoustic wavefields that is commonly used to simplify the measurement process of many practical applications. Traditionally, the reciprocity theorem is defined between a monopole point source and a point receiver. Intuitively, it must apply to more complex transducers than monopoles. In this paper, the authors formulate the acoustic reciprocity theory in the spherical harmonics domain for directional sources and directional receivers with higher order directivity patterns.
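
    For reference, the traditional monopole-to-point-receiver statement that the abstract alludes to can be written in terms of the Green's function, as below; the paper's contribution is the generalization of this relation to sources and receivers with higher order directivity patterns, which is not reproduced here.

```latex
% Classical acoustic reciprocity between a monopole point source and a point
% receiver: interchanging source and receiver positions leaves the field
% (Green's function) unchanged.
\[
  G(\mathbf{x}_A \mid \mathbf{x}_B,\ \omega) \;=\; G(\mathbf{x}_B \mid \mathbf{x}_A,\ \omega)
\]
```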

  14. New biometric modalities using internal physical characteristics

    NASA Astrophysics Data System (ADS)

    Mortenson, Juliana (Brooks)

    2010-04-01

    Biometrics is described as the science of identifying people based on physical characteristics such as their fingerprints, facial features, hand geometry, iris patterns, palm prints, or speech recognition. Notably, all of these physical characteristics are visible or detectable from the exterior of the body. These external characteristics can be lifted, photographed, copied or recorded for unauthorized access to a biometric system. Individual humans are unique internally, however, just as they are unique externally. New biometric modalities have been developed which identify people based on their unique internal characteristics. For example, "Boneprints™" use acoustic fields to scan the unique bone density pattern of a thumb pressed on a small acoustic sensor. Thanks to advances in piezoelectric materials, the acoustic sensor can be placed in virtually any device such as a steering wheel, door handle, or keyboard. Similarly, "Imp-Prints™" measure the electrical impedance patterns of a hand to identify or verify a person's identity. Small impedance sensors can be easily embedded in devices such as smart cards, handles, or wall mounts. These internal biometric modalities rely on physical characteristics which are not visible or photographable, providing an added level of security. In addition, both the acoustic and impedance methods can be combined with physiologic measurements such as acoustic Doppler or impedance plethysmography, respectively. Added verification that the biometric pattern came from a living person can be obtained. These new biometric modalities have the potential to allay user concerns over protection of privacy, while providing a higher level of security.

  15. A Temporal Pattern Mining Approach for Classifying Electronic Health Record Data

    PubMed Central

    Batal, Iyad; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos

    2013-01-01

    We study the problem of learning classification models from complex multivariate temporal data encountered in electronic health record systems. The challenge is to define a good set of features that are able to represent well the temporal aspect of the data. Our method relies on temporal abstractions and temporal pattern mining to extract the classification features. Temporal pattern mining usually returns a large number of temporal patterns, most of which may be irrelevant to the classification task. To address this problem, we present the Minimal Predictive Temporal Patterns framework to generate a small set of predictive and non-spurious patterns. We apply our approach to the real-world clinical task of predicting patients who are at risk of developing heparin induced thrombocytopenia. The results demonstrate the benefit of our approach in efficiently learning accurate classifiers, which is a key step for developing intelligent clinical monitoring systems. PMID:25309815
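
    To make the pipeline concrete, the sketch below is a simplified stand-in (not the authors' Minimal Predictive Temporal Patterns implementation) showing the two ingredients the abstract mentions: a temporal abstraction step that turns a numeric laboratory series into symbolic states, and a naive enumeration of candidate before/after patterns across patient records. Thresholds, state names, and the toy records are illustrative only.

```python
# Hedged sketch of temporal abstraction + simple temporal-pattern counting.
# Thresholds, state names and the toy records are illustrative only.
from collections import Counter
from itertools import combinations

def abstract_series(values, low=150, high=400):
    """Map numeric platelet-like counts to symbolic states (value abstraction)."""
    return ["LOW" if v < low else "HIGH" if v > high else "NORMAL" for v in values]

def ordered_pairs(states):
    """Candidate temporal patterns: ordered pairs 'A before B' within one record."""
    return {(a, b) for a, b in combinations(states, 2)}

# Toy time-ordered records (one numeric series per patient).
patients = {
    "p1": [320, 240, 120, 90],     # falling counts -> NORMAL before LOW
    "p2": [210, 200, 180, 60],
    "p3": [250, 260, 270, 255],    # stable
}

pattern_counts = Counter()
for pid, series in patients.items():
    pattern_counts.update(ordered_pairs(abstract_series(series)))

# Frequent candidate patterns would then be screened for predictiveness
# (e.g., association with an outcome label) and for non-spuriousness.
for pattern, count in pattern_counts.most_common():
    print(pattern, count)
```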

  16. Informational approach to the analysis of acoustic signals

    NASA Astrophysics Data System (ADS)

    Senkevich, Yuriy; Dyuk, Vyacheslav; Mishchenko, Mikhail; Solodchuk, Alexandra

    2017-10-01

    An information approach to the processing of non-stationary signals is illustrated with the linguistic processing of the acoustic signals of a seismic event. A method for converting an acoustic signal into an information message by identifying repetitive self-similar patterns is described. Definitions of the event-selection indicators in the symbolic record of the acoustic signal are given. Results of processing an acoustic signal with a computer program that implements the linguistic processing are shown, and the advantages and disadvantages of the software algorithms are discussed.
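
    The paper's own linguistic algorithm is not reproduced in the abstract; as a rough illustration of the general idea of turning a sampled signal into a symbolic message, the Python sketch below quantizes amplitudes into letters and collapses repeats. The quantile binning and the toy signal are assumptions made here for illustration.

```python
# Illustration only: turn a sampled signal into a symbolic "message" by
# amplitude quantization; the paper's actual linguistic algorithm (based on
# repetitive self-similar patterns) is not reproduced here.
import numpy as np

def symbolize(signal, n_symbols=4):
    """Assign each sample a letter according to which amplitude quantile
    bin it falls into, then collapse consecutive repeats."""
    edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
    codes = np.digitize(signal, edges)            # values 0 .. n_symbols-1
    letters = [chr(ord('a') + int(c)) for c in codes]
    collapsed = [letters[0]]
    for ch in letters[1:]:
        if ch != collapsed[-1]:
            collapsed.append(ch)
    return "".join(collapsed)

t = np.linspace(0, 1, 400)
x = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)    # toy transient signal
print(symbolize(x))
```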

  17. Transparent model of temporal bone and vestibulocochlear organ made by 3D printing.

    PubMed

    Suzuki, Ryoji; Taniguchi, Naoto; Uchida, Fujio; Ishizawa, Akimitsu; Kanatsu, Yoshinori; Zhou, Ming; Funakoshi, Kodai; Akashi, Hideo; Abe, Hiroshi

    2018-01-01

    The vestibulocochlear organ is composed of tiny complex structures embedded in the petrous part of the temporal bone. Landmarks on the temporal bone surface provide the only orientation guide for dissection, but these need to be removed during the course of dissection, making it difficult to grasp the underlying three-dimensional structures, especially for beginners during gross anatomy classes. We report herein an attempt to produce a transparent three-dimensional-printed model of the human ear. En bloc samples of the temporal bone from donated cadavers were subjected to computed tomography (CT) scanning, and on the basis of the data, the surface temporal bone was reconstructed with transparent resin and the vestibulocochlear organ with white resin to create a 1:1.5 scale model. The carotid canal was stuffed with red cotton, and the sigmoid sinus and internal jugular vein were filled with blue clay. In the inner ear, the internal acoustic meatus, cochlea, and semicircular canals were well reconstructed in detail with white resin. The three-dimensional relationships of the semicircular canals, spiral turns of the cochlea, and internal acoustic meatus were well recognizable from every direction through the transparent surface resin. The anterior semicircular canal was obvious immediately beneath the arcuate eminence, and the topographical relationships of the vestibulocochlear organ and adjacent great vessels were easily discernible. We consider that this transparent temporal bone model will be a very useful aid for better understanding of the gross anatomy of the vestibulocochlear organ.

  18. A speech processing study using an acoustic model of a multiple-channel cochlear implant

    NASA Astrophysics Data System (ADS)

    Xu, Ying

    1998-10-01

    A cochlear implant is an electronic device designed to provide sound information for adults and children who have bilateral profound hearing loss. The task of representing speech signals as electrical stimuli is central to the design and performance of cochlear implants. Studies have shown that the current speech-processing strategies provide significant benefits to cochlear implant users. However, the evaluation and development of speech-processing strategies have been complicated by hardware limitations and large variability in user performance. To alleviate these problems, an acoustic model of a cochlear implant with the SPEAK strategy is implemented in this study, in which a set of acoustic stimuli whose psychophysical characteristics are as close as possible to those produced by a cochlear implant are presented to normal-hearing subjects. To test the effectiveness and feasibility of this acoustic model, a psychophysical experiment was conducted to match the performance of a normal-hearing listener using model-processed signals to that of a cochlear implant user. Good agreement was found between an implanted patient and an age-matched normal-hearing subject in a dynamic signal discrimination experiment, indicating that this acoustic model is a reasonably good approximation of a cochlear implant with the SPEAK strategy. The acoustic model was then used to examine the potential of the SPEAK strategy in terms of its temporal and frequency encoding of speech. It was hypothesized that better temporal and frequency encoding of speech can be accomplished by higher stimulation rates and a larger number of activated channels. Vowel and consonant recognition tests were conducted on normal-hearing subjects using speech tokens processed by the acoustic model, with different combinations of stimulation rate and number of activated channels. The results showed that vowel recognition was best at 600 pps and 8 activated channels, but further increases in stimulation rate and channel numbers were not beneficial. Manipulations of stimulation rate and number of activated channels did not appreciably affect consonant recognition. These results suggest that overall speech performance may improve by appropriately increasing stimulation rate and number of activated channels. Future revision of this acoustic model is necessary to provide more accurate amplitude representation of speech.
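
    The abstract does not give the details of the SPEAK acoustic model, so the sketch below is only a generic noise-band vocoder of the kind commonly used to simulate multichannel implant processing for normal-hearing listeners; the channel count, filter orders, and frequency range are illustrative assumptions, not the study's parameters.

```python
# Generic noise-band vocoder sketch (not the study's exact SPEAK model):
# band-pass analysis, envelope extraction, and envelope-modulated noise carriers.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(x, fs, n_channels=8, f_lo=100.0, f_hi=6000.0):
    # Logarithmically spaced channel edges (illustrative choice)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))               # channel envelope
        carrier = np.random.randn(len(x))
        out += sosfiltfilt(sos, carrier * env)    # noise carrier shaped by envelope
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
speechlike = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
simulated = noise_vocoder(speechlike, fs)
```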

  19. Variations on a theme by Chopin: relations between perception and production of timing in music.

    PubMed

    Repp, B H

    1998-06-01

    A note interonset interval (IOI) increment in mechanically timed music is more difficult to detect where expressive lengthening typically occurs in artistic performance. Experiment 1 showed this in an excerpt from a Chopin etude and extended the task to IOI decrement detection. A simple measure of variation in perceptual bias was derived that correlated highly with the average timing pattern of pianists' performances, more so than with acoustic surface properties of the music. Similar results, but decreasing correlations, were obtained in each of four subsequent experiments in which the music was simplified in stages. Although local psychoacoustic effects on time perception cannot be ruled out completely, the results suggest that musical structure (melodic-rhythmic grouping in particular) has temporal implications that are reflected not only in musicians' motor behavior but also in listeners' time-keeping abilities.

  20. Vibrational Profiling of Brain Tumors and Cells

    PubMed Central

    Nelson, Sultan L; Proctor, Dustin T; Ghasemloonia, Ahmad; Lama, Sanju; Zareinia, Kourosh; Ahn, Younghee; Al-Saiedy, Mustafa R; Green, Francis HY; Amrein, Matthias W; Sutherland, Garnette R

    2017-01-01

    This study reports vibration profiles of neuronal cells and tissues as well as brain tumor and neocortical specimens. A contact-free method and analysis protocol were designed to convert an atomic force microscope into an ultra-sensitive microphone with the capacity to record and listen to live biological samples. A frequency of 3.4 Hz was observed for both cultured rat hippocampal neurons and tissues, and the vibration could be modulated pharmacologically. Malignant astrocytoma tissue samples obtained from the operating room, transported in artificial cerebrospinal fluid, and tested within an hour vibrated with a markedly different frequency profile and amplitude compared to meningioma or lateral temporal cortex, providing a quantifiable measurement to accurately distinguish the three tissues in real time. Vibration signals were converted to audible sound waves by frequency modulation, demonstrating acoustic patterns unique to meningioma, malignant astrocytoma, and neocortex. PMID:28744324

  1. Music of the 7Ts: Predicting and Decoding Multivoxel fMRI Responses with Acoustic, Schematic, and Categorical Music Features

    PubMed Central

    Casey, Michael A.

    2017-01-01

    Underlying the experience of listening to music are parallel streams of auditory, categorical, and schematic qualia, whose representations and cortical organization remain largely unresolved. We collected high-field (7T) fMRI data in a music listening task, and analyzed the data using multivariate decoding and stimulus-encoding models. Twenty subjects participated in the experiment, which measured BOLD responses evoked by naturalistic listening to twenty-five music clips from five genres. Our first analysis applied machine classification to the multivoxel patterns that were evoked in temporal cortex. Results yielded above-chance levels for both stimulus identification and genre classification, cross-validated by holding out data from multiple stimuli during model training and then testing decoding performance on the held-out data. Genre model misclassifications were significantly correlated with those in a corresponding behavioral music categorization task, supporting the hypothesis that geometric properties of multivoxel pattern spaces underlie observed musical behavior. A second analysis employed spherical searchlight regression, which predicted multivoxel pattern responses to music features representing melody and harmony across a large area of cortex. The resulting prediction-accuracy maps yielded significant clusters in the temporal, frontal, parietal, and occipital lobes, as well as in the parahippocampal gyrus and the cerebellum. These maps provide evidence in support of our hypothesis that geometric properties of music cognition are neurally encoded as multivoxel representational spaces. The maps also reveal a cortical topography that differentially encodes categorical and absolute-pitch information in distributed and overlapping networks, with smaller specialized regions that encode tonal music information in relative-pitch representations. PMID:28769835
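
    As a toy illustration of the first analysis (classifying genre from multivoxel patterns with whole stimuli held out during training), the scikit-learn sketch below uses synthetic patterns; the study's actual preprocessing, voxel selection, and classifier are not reproduced.

```python
# Sketch of multivoxel genre classification with held-out stimuli
# (synthetic data; not the study's pipeline or parameters).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, GroupKFold

rng = np.random.default_rng(0)
n_stimuli, n_reps, n_voxels, n_genres = 25, 4, 200, 5
genre_of_stim = np.repeat(np.arange(n_genres), n_stimuli // n_genres)

X, y, groups = [], [], []
for s in range(n_stimuli):
    proto = rng.normal(size=n_voxels) + genre_of_stim[s]   # genre-dependent pattern
    for _ in range(n_reps):
        X.append(proto + rng.normal(scale=2.0, size=n_voxels))
        y.append(genre_of_stim[s])
        groups.append(s)                                    # hold out whole stimuli
X, y, groups = np.array(X), np.array(y), np.array(groups)

cv = GroupKFold(n_splits=5)
scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=cv, groups=groups)
print("mean cross-validated accuracy:", scores.mean())
```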

  2. Shallow Water Reverberation Measurement and Prediction

    DTIC Science & Technology

    1994-06-01

    The temporal signal processing consisted of a short-time Fourier transform spectral estimation method applied to data from a single hydrophone. The three-dimensional Hamiltonian Acoustic Ray-tracing Program for the Ocean (HARPO) was used as the primary propagation modeling tool. ... summarizes the work completed and discusses lessons learned. Advice regarding future work to refine the present study will be provided.
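
    A minimal sketch of the spectral-estimation step named in the snippet above, applying a short-time Fourier transform to a synthetic single-hydrophone time series; the signal, sampling rate, and window length are assumptions made for illustration.

```python
# Short-time Fourier transform spectral estimate of a (synthetic) single-
# hydrophone time series, as a sketch of the processing step described above.
import numpy as np
from scipy.signal import stft

fs = 8000.0
t = np.arange(0, 2.0, 1 / fs)
# Toy reverberation-like signal: a decaying chirp buried in noise
x = np.exp(-2 * t) * np.sin(2 * np.pi * (500 + 200 * t) * t) \
    + 0.05 * np.random.randn(t.size)

f, seg_times, Z = stft(x, fs=fs, nperseg=512, noverlap=384)
power_db = 20 * np.log10(np.abs(Z) + 1e-12)
print(power_db.shape)   # (frequency bins, time segments)
```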

  3. Layered Organization in the Coastal Ocean: 4-D Assessment of Thin Layer Structure, Dynamics and Impacts

    DTIC Science & Technology

    2009-09-30

    ... maintenance and dissipation of layers; (2) to understand the spatial coherence and spatial properties of thin layers in the coastal ocean (especially in ...). The ORCAS profilers at K1 South and K2 had a Nortek ADV (Acoustic Doppler Velocimeter) for simultaneously measuring centimeter-scale currents and ... year will be used to (1) detect the presence, intensity, thickness, temporal persistence, and spatial coherence of thin optical and acoustical layers.

  4. Development of Biological Acoustic Impedance Microscope and its Error Estimation

    NASA Astrophysics Data System (ADS)

    Hozumi, Naohiro; Nakano, Aiko; Terauchi, Satoshi; Nagao, Masayuki; Yoshida, Sachiko; Kobayashi, Kazuto; Yamamoto, Seiji; Saijo, Yoshifumi

    This report deals with a scanning acoustic microscope for imaging the cross-sectional acoustic impedance of biological soft tissues. A focused acoustic beam was transmitted to the tissue object mounted on the "rear surface" of a plastic substrate. Rat cerebellum tissue and a reference material were observed at the same time under the same conditions. Because the incidence is not vertical, not only a longitudinal wave but also a transversal wave is generated in the substrate. The error in acoustic impedance incurred by assuming vertical incidence was estimated, and it was shown that the error can be precisely compensated if the beam pattern and the acoustic parameters of the coupling medium and substrate are known.

  5. Vessel Noise Affects Beaked Whale Behavior: Results of a Dedicated Acoustic Response Study

    PubMed Central

    Pirotta, Enrico; Milor, Rachael; Quick, Nicola; Moretti, David; Di Marzio, Nancy; Tyack, Peter; Boyd, Ian; Hastie, Gordon

    2012-01-01

    Some beaked whale species are susceptible to the detrimental effects of anthropogenic noise. Most studies have concentrated on the effects of military sonar, but other forms of acoustic disturbance (e.g. shipping noise) may disrupt behavior. An experiment involving the exposure of target whale groups to intense vessel-generated noise tested how these exposures influenced the foraging behavior of Blainville’s beaked whales (Mesoplodon densirostris) in the Tongue of the Ocean (Bahamas). A military array of bottom-mounted hydrophones was used to measure the response based upon changes in the spatial and temporal pattern of vocalizations. The archived acoustic data were used to compute metrics of the echolocation-based foraging behavior for 16 targeted groups, 10 groups further away on the range, and 26 non-exposed groups. The duration of foraging bouts was not significantly affected by the exposure. Changes in the hydrophone over which the group was most frequently detected occurred as the animals moved around within a foraging bout, and their number was significantly less the closer the whales were to the sound source. Non-exposed groups also had significantly more changes in the primary hydrophone than exposed groups irrespective of distance. Our results suggested that broadband ship noise caused a significant change in beaked whale behavior up to at least 5.2 kilometers away from the vessel. The observed change could potentially correspond to a restriction in the movement of groups, a period of more directional travel, a reduction in the number of individuals clicking within the group, or a response to changes in prey movement. PMID:22880022

  6. Linking amphibian call structure to the environment: the interplay between phenotypic flexibility and individual attributes

    PubMed Central

    Arim, Matías; Narins, Peter M.

    2011-01-01

    The structure of the environment surrounding signal emission produces different patterns of degradation and attenuation. The expected adjustment of calls to ensure signal transmission in an environment was formalized in the acoustic adaptation hypothesis. Within this framework, most studies considered anuran calls as fixed attributes determined by local adaptations. However, variability in vocalizations as a product of phenotypic expression has also been reported. Empirical evidence supporting the association between environment and call structure has been inconsistent, particularly in anurans. Here, we identify a plausible causal structure connecting environment, individual attributes, and temporal and spectral adjustments as direct or indirect determinants of the observed variation in call attributes of the frog Hypsiboas pulchellus. For that purpose, we recorded the calls of 40 males in the field, together with vegetation density and other environmental descriptors of the calling site. Path analysis revealed a strong effect of habitat structure on the temporal parameters of the call, and an effect of site temperature conditioning the size of organisms calling at each site and thus indirectly affecting the dominant frequency of the call. Experimental habitat modification with a styrofoam enclosure yielded results consistent with field observations, highlighting the potential role of call flexibility in the detected call patterns. Both experimental and correlative results indicate the need to incorporate the thus-far poorly considered role of phenotypic plasticity in the complex connection between environmental structure and individual call attributes. PMID:22479134

  7. Experimental and numerical investigations of resonant acoustic waves in near-critical carbon dioxide.

    PubMed

    Hasan, Nusair; Farouk, Bakhtier

    2015-10-01

    Flow and transport induced by resonant acoustic waves in a cylindrical enclosure filled with a near-critical fluid are investigated both experimentally and numerically. Supercritical carbon dioxide (near the critical or the pseudo-critical states) in a confined resonator is subjected to an acoustic field created by an electro-mechanical acoustic transducer, and the induced pressure waves are measured by a fast-response pressure field microphone. The frequency of the acoustic transducer is chosen such that the lowest acoustic mode propagates along the enclosure. For numerical simulations, a real-fluid computational fluid dynamics model representing the thermo-physical and transport properties of the supercritical fluid is considered. The simulated acoustic field in the resonator is compared with measurements. The formation of acoustic streaming structures in the highly compressible medium is revealed by time-averaging the numerical solutions over a given period. Due to diverging thermo-physical properties of the supercritical fluid near the critical point, large scale oscillations are generated even for small sound field intensity. The strength of the acoustic wave field is found to be in direct relation with the thermodynamic state of the fluid. The effects of near-critical property variations and the operating pressure on the formation process of the streaming structures are also investigated. Irregular streaming patterns with significantly higher streaming velocities are observed for near-pseudo-critical states at operating pressures close to the critical pressure. However, these structures quickly re-orient to the typical Rayleigh streaming patterns as the operating pressure increases.

  8. The Coordinated Noninvasive Studies (CNS) Project. Phase 1. Appendices

    DTIC Science & Technology

    1991-12-01

    "Bandwidth of three-element patterns and its effect on relative ear advantages," to Acoustical Society of America, Cincinnati. Abstract: J Acoust Soc Amer 73: S60. "Cerebral metabolic effects of auditory stimulation," to Brain Breakfast ... Laboratory, Los Alamos NM. "PET and the cortex: the effects of auditory stimulation on cerebral blood flow," to Department of Speech and Hearing Sciences.

  9. Imaging of acoustic fields using optical feedback interferometry.

    PubMed

    Bertling, Karl; Perchoux, Julien; Taimre, Thomas; Malkin, Robert; Robert, Daniel; Rakić, Aleksandar D; Bosch, Thierry

    2014-12-01

    This study introduces optical feedback interferometry as a simple and effective technique for the two-dimensional visualisation of acoustic fields. We present imaging results for several pressure distributions including those for progressive waves, standing waves, as well as the diffraction and interference patterns of the acoustic waves. The proposed solution has the distinct advantage of extreme optical simplicity and robustness thus opening the way to a low cost acoustic field imaging system based on mass produced laser diodes.

  10. Broadscale Postseismic Gravity Change Following the 2011 Tohoku-Oki Earthquake and Implication for Deformation by Viscoelastic Relaxation and Afterslip

    NASA Technical Reports Server (NTRS)

    Han, Shin-Chan; Sauber, Jeanne; Pollitz, Fred

    2014-01-01

    The analysis of GRACE gravity data revealed post-seismic gravity increase by 6 micro-Gal over a 500 km scale within a couple of years after the 2011 Tohoku-Oki earthquake, which is nearly 40-50% of the co-seismic gravity change. It originates mostly from changes in the isotropic component corresponding to the M(sub rr) moment tensor element. The exponential decay with rapid change in a year and gradual change afterward is a characteristic temporal pattern. Both viscoelastic relaxation and afterslip models produce reasonable agreement with the GRACE free-air gravity observation, while their Bouguer gravity patterns and seafloor vertical deformations are distinctly different. The post-seismic gravity variation is best modeled by the bi-viscous relaxation with a transient and steady state viscosity of 10(exp 18) and 10(exp 19) Pa s, respectively, for the asthenosphere. Our calculated higher-resolution viscoelastic relaxation model, underlying the partially ruptured elastic lithosphere, yields the localized post-seismic subsidence above the hypocenter reported from the GPS-acoustic seafloor surveying.

  11. Use of ecoacoustics to determine biodiversity patterns across ecological gradients.

    PubMed

    Grant, Paul B C; Samways, Michael J

    2016-12-01

    The variety of local animal sounds characterizes a landscape. We used ecoacoustics to noninvasively assess the species richness of various biotopes typical of an ecofriendly forest plantation with diverse ecological gradients and both nonnative and indigenous vegetation. The reference area was an adjacent large World Heritage Site protected area (PA). All sites were in a global biodiversity hotspot. Our results showed how taxa segregated into various biotopes. We identified 65 singing species, including birds, frogs, crickets, and katydids. Large, natural, protected grassland sites in the PA had the highest mean acoustic diversity (14.1 species/site). Areas covered in nonnative timber or grass species were devoid of acoustic species. Sites grazed by native and domestic megaherbivores were fairly rich (5.1) in acoustic species but none were unique to this habitat type, where acoustic diversity was greater than in intensively managed grassland sites (0.04). Natural vegetation patches inside the plantation mosaic supported high mean acoustic diversity (indigenous forests 7.6, grasslands 8.0, wetlands 9.1), which increased as plant heterogeneity and patch size increased. Indigenous forest patches within the plantation mosaic contained a highly characteristic acoustic species assemblage, emphasizing their complementary contribution to local biodiversity. Overall, acoustic signals determined spatial biodiversity patterns and can be a useful tool for guiding conservation. © 2016 Society for Conservation Biology.

  12. Acoustofluidic waveguides for localized control of acoustic wavefront in microfluidics

    PubMed Central

    Bian, Yusheng; Guo, Feng; Yang, Shujie; Mao, Zhangming; Bachman, Hunter; Tang, Shi-Yang; Ren, Liqiang; Zhang, Bin; Gong, Jianying; Guo, Xiasheng

    2017-01-01

    The precise manipulation of acoustic fields in microfluidics is of critical importance for the realization of many biomedical applications. Despite the tremendous efforts devoted to the field of acoustofluidics during recent years, dexterous control, with an arbitrary and complex acoustic wavefront, in a prescribed, microscale region is still out of reach. Here, we introduce the concept of acoustofluidic waveguide, a three-dimensional compact configuration that is capable of locally guiding acoustic waves into a fluidic environment. Through comprehensive numerical simulations, we revealed the possibility of forming complex field patterns with defined pressure nodes within a highly localized, pre-determined region inside the microfluidic chamber. We also demonstrated the tunability of the acoustic field profile through controlling the size and shape of the waveguide geometry, as well as the operational frequency of the acoustic wave. The feasibility of the waveguide concept was experimentally verified via microparticle trapping and patterning. Our acoustofluidic waveguiding structures can be readily integrated with other microfluidic configurations and can be further designed into more complex types of passive acoustofluidic devices. The waveguide platform provides a promising alternative to current acoustic manipulation techniques and is useful in many applications such as single-cell analysis, point-of-care diagnostics, and studies of cell–cell interactions. PMID:29358901

  13. An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data

    PubMed Central

    Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos

    2015-01-01

    This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800
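
    As a toy illustration of matching a time-interval pattern built with a temporal operator (here, "before") against an abstracted record, the snippet below uses hypothetical states and intervals; it is not the authors' mining algorithm.

```python
# Tiny illustration of the "before" temporal operator applied to an abstracted
# record, with intervals represented as (state, start, end).
# States and data are hypothetical, for illustration only.
def matches_before(intervals, state_a, state_b):
    """True if some interval with state_a ends before one with state_b starts."""
    for sa, a_start, a_end in intervals:
        for sb, b_start, b_end in intervals:
            if sa == state_a and sb == state_b and a_end < b_start:
                return True
    return False

record = [("DRUG_ON", 0, 5), ("LAB_NORMAL", 0, 2), ("LAB_LOW", 3, 6)]
print(matches_before(record, "LAB_NORMAL", "LAB_LOW"))   # True
```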

  14. Macroscale patterns of synchrony identify complex relationships among spatial and temporal ecosystem drivers

    USGS Publications Warehouse

    Lottig, Noah R.; Tan, Pang-Ning; Wagner, Tyler; Cheruvelil, Kendra Spence; Soranno, Patricia A.; Stanley, Emily H.; Scott, Caren E.; Stow, Craig A.; Yuan, Shuai

    2017-01-01

    Ecology has a rich history of studying ecosystem dynamics across time and space that has been motivated by both practical management needs and the need to develop basic ideas about pattern and process in nature. In situations in which both spatial and temporal observations are available, similarities in temporal behavior among sites (i.e., synchrony) provide a means of understanding underlying processes that create patterns over space and time. We used pattern analysis algorithms and data spanning 22–25 yr from 601 lakes to ask three questions: What are the temporal patterns of lake water clarity at sub‐continental scales? What are the spatial patterns (i.e., geography) of synchrony for lake water clarity? And, what are the drivers of spatial and temporal patterns in lake water clarity? We found that the synchrony of water clarity among lakes is not spatially structured at sub‐continental scales. Our results also provide strong evidence that the drivers related to spatial patterns in water clarity are not related to the temporal patterns of water clarity. This analysis of long‐term patterns of water clarity and possible drivers contributes to understanding of broad‐scale spatial patterns in the geography of synchrony and complex relationships between spatial and temporal patterns across ecosystems.
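
    A minimal sketch of the synchrony idea described above, quantifying similarity in temporal behavior among sites as pairwise correlations of annual water-clarity series; the data are synthetic, and the authors' pattern-analysis algorithms are not reproduced.

```python
# Minimal sketch: synchrony as the pairwise correlation of annual water-clarity
# time series across lakes (synthetic data; illustration only).
import numpy as np

rng = np.random.default_rng(1)
n_lakes, n_years = 10, 25
regional_signal = rng.normal(size=n_years)                # shared regional driver
clarity = np.array([0.6 * regional_signal + 0.8 * rng.normal(size=n_years)
                    for _ in range(n_lakes)])

synchrony = np.corrcoef(clarity)                          # lake-by-lake correlations
upper = synchrony[np.triu_indices(n_lakes, k=1)]
print("mean pairwise synchrony:", upper.mean())
```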

  15. Mining Recent Temporal Patterns for Event Detection in Multivariate Time Series Data

    PubMed Central

    Batal, Iyad; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos

    2015-01-01

    Improving the performance of classifiers using pattern mining techniques has been an active topic of data mining research. In this work we introduce the recent temporal pattern mining framework for finding predictive patterns for monitoring and event detection problems in complex multivariate time series data. This framework first converts time series into time-interval sequences of temporal abstractions. It then constructs more complex temporal patterns backwards in time using temporal operators. We apply our framework to health care data of 13,558 diabetic patients and show its benefits by efficiently finding useful patterns for detecting and diagnosing adverse medical conditions that are associated with diabetes. PMID:25937993

  16. Broadband classification and statistics of echoes from aggregations of fish measured by long-range, mid-frequency sonar.

    PubMed

    Jones, Benjamin A; Stanton, Timothy K; Colosi, John A; Gauss, Roger C; Fialkowski, Joseph M; Michael Jech, J

    2017-06-01

    For horizontal-looking sonar systems operating at mid-frequencies (1-10 kHz), scattering by fish with resonant gas-filled swimbladders can dominate seafloor and surface reverberation at long-ranges (i.e., distances much greater than the water depth). This source of scattering, which can be difficult to distinguish from other sources of scattering in the water column or at the boundaries, can add spatio-temporal variability to an already complex acoustic record. Sparsely distributed, spatially compact fish aggregations were measured in the Gulf of Maine using a long-range broadband sonar with continuous spectral coverage from 1.5 to 5 kHz. Observed echoes, that are at least 15 decibels above background levels in the horizontal-looking sonar data, are classified spectrally by the resonance features as due to swimbladder-bearing fish. Contemporaneous multi-frequency echosounder measurements (18, 38, and 120 kHz) and net samples are used in conjunction with physics-based acoustic models to validate this approach. Furthermore, the fish aggregations are statistically characterized in the long-range data by highly non-Rayleigh distributions of the echo magnitudes. These distributions are accurately predicted by a computationally efficient, physics-based model. The model accounts for beam-pattern and waveguide effects as well as the scattering response of aggregations of fish.

  17. Residency, site fidelity and habitat use of Atlantic cod (Gadus morhua) at an offshore wind farm using acoustic telemetry.

    PubMed

    Reubens, Jan T; Pasotti, Francesca; Degraer, Steven; Vincx, Magda

    2013-09-01

    Because offshore wind energy development is growing fast in Europe, it is important to investigate the changes in the marine environment and how these may influence local biodiversity and ecosystem functioning. One of the species affected by these ecosystem changes is Atlantic cod (Gadus morhua), a heavily exploited, commercially important fish species. In this research we investigated the residency, site fidelity and habitat use of Atlantic cod on a temporal scale at windmill artificial reefs in the Belgian part of the North Sea. Acoustic telemetry was used and the Vemco VR2W position system was deployed to quantify the movement behaviour. In total, 22 Atlantic cod were tagged and monitored for up to one year. Many fish were present near the artificial reefs during summer and autumn, and demonstrated strong residency and high individual detection rates. When present within the study area, Atlantic cod also showed distinct habitat selectivity. We identified aggregation near the artificial hard substrates of the wind turbines. In addition, a clear seasonal pattern in presence was observed. The high number of fish present in summer and autumn alternated with a period of very low densities during the winter period. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Acoustic variability within and across German, French, and American English vowels: phonetic context effects.

    PubMed

    Strange, Winifred; Weber, Andrea; Levy, Erika S; Shafiro, Valeriy; Hisagi, Miwako; Nishi, Kanae

    2007-08-01

    Cross-language perception studies report influences of speech style and consonantal context on perceived similarity and discrimination of non-native vowels by inexperienced and experienced listeners. Detailed acoustic comparisons of distributions of vowels produced by native speakers of North German (NG), Parisian French (PF) and New York English (AE) in citation (di)syllables and in sentences (surrounded by labial and alveolar stops) are reported here. Results of within- and cross-language discriminant analyses reveal striking dissimilarities across languages in the spectral/temporal variation of coarticulated vowels. As expected, vocalic duration was most important in differentiating NG vowels; it did not contribute to PF vowel classification. Spectrally, NG long vowels showed little coarticulatory change, but back/low short vowels were fronted/raised in alveolar context. PF vowels showed greater coarticulatory effects overall; back and front rounded vowels were fronted, low and mid-low vowels were raised in both sentence contexts. AE mid to high back vowels were extremely fronted in alveolar contexts, with little change in mid-low and low long vowels. Cross-language discriminant analyses revealed varying patterns of spectral (dis)similarity across speech styles and consonantal contexts that could, in part, account for AE listeners' perception of German and French front rounded vowels, and "similar" mid-high to mid-low vowels.
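
    As a rough illustration of a within-language discriminant analysis of vowel tokens from spectral and temporal measurements, the scikit-learn sketch below classifies synthetic (F1, F2, duration) values; the category means and spreads are invented for illustration and are not the study's data.

```python
# Sketch of a discriminant analysis of vowel tokens from spectral/temporal
# measurements (synthetic F1, F2, duration values; illustration only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Hypothetical category means: (F1 in Hz, F2 in Hz, duration in ms)
means = {"i": (300, 2300, 120), "a": (750, 1300, 160), "u": (320, 900, 130)}
X, y = [], []
for label, mu in means.items():
    for _ in range(50):
        X.append(rng.normal(mu, scale=(40, 120, 20)))
        y.append(label)
X, y = np.array(X), np.array(y)

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```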

  19. Robust Sensing of Approaching Vehicles Relying on Acoustic Cues

    PubMed Central

    Mizumachi, Mitsunori; Kaminuma, Atsunobu; Ono, Nobutaka; Ando, Shigeru

    2014-01-01

    The latest developments in automobile design have allowed them to be equipped with various sensing devices. Multiple sensors such as cameras and radar systems can be simultaneously used for active safety systems in order to overcome blind spots of individual sensors. This paper proposes a novel sensing technique for catching up and tracking an approaching vehicle relying on an acoustic cue. First, it is necessary to extract a robust spatial feature from noisy acoustical observations. In this paper, the spatio-temporal gradient method is employed for the feature extraction. Then, the spatial feature is filtered out through sequential state estimation. A particle filter is employed to cope with a highly non-linear problem. Feasibility of the proposed method has been confirmed with real acoustical observations, which are obtained by microphones outside a cruising vehicle. PMID:24887038
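
    The paper's state and observation models are not given in the abstract; the sketch below is a generic bootstrap particle filter tracking a slowly varying bearing from noisy bearing estimates, as an illustration of the sequential state estimation step mentioned above. All parameters are assumptions.

```python
# Bootstrap particle filter sketch for tracking a slowly varying bearing
# (direction of arrival) from noisy acoustic bearing estimates.
# State/observation models are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(3)
T, n_particles = 100, 500
true_bearing = np.cumsum(rng.normal(0.0, 0.5, T)) + 45.0   # degrees
obs = true_bearing + rng.normal(0.0, 5.0, T)               # noisy DOA feature

particles = rng.uniform(0.0, 90.0, n_particles)
estimates = []
for z in obs:
    particles += rng.normal(0.0, 1.0, n_particles)         # propagate state
    w = np.exp(-0.5 * ((z - particles) / 5.0) ** 2)        # observation likelihood
    w /= w.sum()
    estimates.append(np.sum(w * particles))                # posterior mean estimate
    idx = rng.choice(n_particles, n_particles, p=w)        # resample
    particles = particles[idx]

print("final error (deg):", abs(estimates[-1] - true_bearing[-1]))
```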

  20. Transmission and scattering of acoustic energy in turbulent flows

    NASA Astrophysics Data System (ADS)

    Gaitonde, Datta; Unnikrishnan, S.

    2017-11-01

    Sound scattering and transmission in turbulent jets are explored through a control volume analysis of a Large-Eddy Simulation. The fluctuating momentum flux across any control surface is first split into its rotational turbulent ((ρu)'_H) and irrotational-isentropic acoustic ((ρu)'_A) components using momentum potential theory (MPT). The former has low spatio-temporal coherence, while the latter exhibits a persistent wavepacket form. The energy variable, specifically the total fluctuating enthalpy, is also split into its turbulent and acoustic modes, H'_H and H'_A respectively. Scattering of acoustic energy is then (ρu)'_H H'_A, and transmission is (ρu)'_A H'_A. This facilitates a quantitative comparison of scattering versus transmission in the presence of acoustic energy sources, also obtained from MPT, in any turbulent scenario. The wavepacket converts stochastic sound sources into coherent sound radiation. Turbulent eddies are not only sources of sound, but also play a strong role in scattering, particularly near the lipline. The net acoustic flux from the jet is the transport of H'_A by the wavepacket, whose axisymmetric and higher azimuthal modes contribute to downstream and sideline radiation respectively.

  1. Receptivity of Hypersonic Boundary Layers to Acoustic and Vortical Disturbances

    NASA Technical Reports Server (NTRS)

    Balakumar, P.; Kegerise, Michael A.

    2011-01-01

    Boundary layer receptivity to two-dimensional acoustic disturbances at different incidence angles and to vortical disturbances is investigated by solving the Navier-Stokes equations for Mach 6 flow over a 7deg half-angle sharp-tipped wedge and a cone. Higher order spatial and temporal schemes are employed to obtain the solution. The results show that the instability waves are generated in the leading edge region and that the boundary layer is much more receptive to slow acoustic waves as compared to the fast waves. It is found that the receptivity of the boundary layer on the windward side (with respect to the acoustic forcing) decreases when the incidence angle is increased from 0 to 30 degrees. However, the receptivity coefficient for the leeward side is found to vary relatively weakly with the incidence angle. The maximum receptivity is obtained when the wave incidence angle is about 20 degrees. Vortical disturbances also generate unstable second modes; however, the receptivity coefficients are smaller than those for the acoustic waves. Vortical disturbances first generate the fast acoustic modes, which then switch to the slow mode near the continuous spectrum.

  2. All-optical in-depth detection of the acoustic wave emitted by a single gold nanorod

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Guillet, Yannick; Ravaine, Serge; Audoin, Bertrand

    2018-04-01

    A single gold nanorod dropped on the surface of a silica substrate is used as a transient optoacoustic source of gigahertz hypersounds. We demonstrate the all-optical detection of the as-generated acoustic wave front propagating in the silica substrate. For this purpose, time-resolved femtosecond pump-probe experiments are performed in a reflection configuration. The fundamental breathing mode of the nanorod is detected at 23 GHz by interferometry, and the longitudinal acoustic wave radiated in the silica substrate is detected by time-resolved Brillouin scattering. By tuning the optical probe wavelength from 750 to 900 nm, hypersounds with wavelengths of 260-315 nm are detected in the silica substrate, with corresponding acoustic frequencies in the range of 19-23 GHz. To confirm the origin of these hypersounds, we theoretically analyze the influence of the acoustic excitation spectrum on the temporal envelope of the transient reflectivity. This analysis proves that the acoustic wave detected in the silica substrate results from the excitation of the breathing mode of the nanorod. These results pave the way for performing local in-depth elastic nanoscopy.
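
    The link between probe wavelength and detected acoustic frequency can be made explicit with the standard backscattering Brillouin relation; the refractive index and sound velocity below are nominal fused-silica values assumed here for illustration, not values quoted in the abstract.

```latex
% Time-resolved Brillouin scattering (backscattering geometry):
%   acoustic wavelength   \Lambda = \lambda_{\text{probe}} / (2 n)
%   detected frequency    f_B = v / \Lambda = 2 n v / \lambda_{\text{probe}}
% With nominal silica values n \approx 1.45 and v \approx 5900 m/s:
%   \lambda_{\text{probe}} = 750 nm  ->  \Lambda \approx 259 nm,  f_B \approx 22.8 GHz
%   \lambda_{\text{probe}} = 900 nm  ->  \Lambda \approx 310 nm,  f_B \approx 19.0 GHz
% consistent with the 260-315 nm and 19-23 GHz ranges reported above.
f_B = \frac{2\, n\, v}{\lambda_{\text{probe}}}, \qquad
\Lambda = \frac{\lambda_{\text{probe}}}{2 n}
```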

  3. Spatio-Temporal Analysis of Urban Acoustic Environments with Binaural Psycho-Acoustical Considerations for IoT-Based Applications.

    PubMed

    Segura-Garcia, Jaume; Navarro-Ruiz, Juan Miguel; Perez-Solano, Juan J; Montoya-Belmonte, Jose; Felici-Castell, Santiago; Cobos, Maximo; Torres-Aranda, Ana M

    2018-02-26

    Sound pleasantness or annoyance perceived in urban soundscapes is a major concern in environmental acoustics. Binaural psychoacoustic parameters are helpful to describe generic acoustic environments, as it is stated within the ISO 12913 framework. In this paper, the application of a Wireless Acoustic Sensor Network (WASN) to evaluate the spatial distribution and the evolution of urban acoustic environments is described. Two experiments are presented using an indoor and an outdoor deployment of a WASN with several nodes using an Internet of Things (IoT) environment to collect audio data and calculate meaningful parameters such as the sound pressure level, binaural loudness and binaural sharpness. A chunk of audio is recorded in each node periodically with a microphone array and the binaural rendering is conducted by exploiting the estimated directional characteristics of the incoming sound by means of DOA estimation. Each node computes the parameters in a different location and sends the values to a cloud-based broker structure that allows spatial statistical analysis through Kriging techniques. A cross-validation analysis is also performed to confirm the usefulness of the proposed system.
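
    Of the parameters listed above, the sound pressure level is the simplest to compute on a node; the sketch below shows an equivalent-level calculation for one audio chunk, with a hypothetical calibration factor. The binaural loudness and sharpness computations are not reproduced here.

```python
# Sketch: equivalent sound pressure level (Leq) of one recorded audio chunk.
# The calibration factor mapping digital samples to pascals is hypothetical.
import numpy as np

P_REF = 20e-6                     # reference pressure, Pa

def leq_db(samples, calibration_pa_per_unit=0.1):
    pressure = samples * calibration_pa_per_unit
    rms = np.sqrt(np.mean(pressure ** 2))
    return 20 * np.log10(rms / P_REF)

chunk = 0.02 * np.random.randn(48000)   # 1 s of synthetic audio at 48 kHz
print(round(leq_db(chunk), 1), "dB SPL (uncalibrated example)")
```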

  4. Listening in on Friction: Stick-Slip Acoustical Signatures in Velcro

    NASA Astrophysics Data System (ADS)

    Hurtado Parra, Sebastian; Morrow, Leslie; Radziwanowski, Miles; Angiolillo, Paul

    2013-03-01

    The onset of kinetic friction and the possible resulting stick-slip motion remain mysterious phenomena. Moreover, stick-slip dynamics are typically accompanied by acoustic bursts that occur temporally with the slip event. The dry sliding dynamics of the hook-and-loop system, as exemplified by Velcro, manifest stick-slip behavior along with audible bursts that are easily collected microphonically. Synchronized measurements of the friction force and acoustic emissions were collected as hooked Velcro was driven at constant velocity over a bed of looped Velcro in an anechoic chamber. Not surprisingly, the envelope of the acoustic bursts maps well onto the slip events of the friction force time series, and the intensity of the bursts trends with the magnitude of the difference of the friction force during a stick-slip event. However, the analysis of the acoustic emission can serve as a sensitive tool for revealing some of the hidden details of the evolution of the transition from static to kinetic friction. For instance, small acoustic bursts are seen prior to the Amontons-Coulomb threshold, signaling precursor events prior to the onset of macroscopically observed motion. Preliminary spectral analysis of the acoustic emissions including intensity-frequency data will be presented.

  5. Speaker compensation for local perturbation of fricative acoustic feedback.

    PubMed

    Casserly, Elizabeth D

    2011-04-01

    Feedback perturbation studies of speech acoustics have revealed a great deal about how speakers monitor and control their productions of segmental (e.g., formant frequencies) and non-segmental (e.g., pitch) linguistic elements. The majority of previous work, however, overlooks the role of acoustic feedback in consonant production and makes use of acoustic manipulations that affect either entire utterances or the entire acoustic signal, rather than more temporally and phonetically restricted alterations. This study, therefore, seeks to expand the feedback perturbation literature by examining perturbation of consonant acoustics that is applied in a time-restricted and phonetically specific manner. The spectral center of the alveopalatal fricative [∫] produced in vowel-fricative-vowel nonwords was incrementally raised until it reached the potential for [s]-like frequencies, but the characteristics of high-frequency energy outside the target fricative remained unaltered. An "offline," more widely accessible signal processing method was developed to perform this manipulation. The local feedback perturbation resulted in changes to speakers' fricative production that were more variable, idiosyncratic, and restricted than the compensation seen in more global acoustic manipulations reported in the literature. Implications and interpretations of the results, as well as future directions for research based on the findings, are discussed.
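
    A small sketch of the acoustic property being manipulated, the spectral center (centroid) of a fricative, computed here for a synthetic noise band standing in for [∫]; the filter band and sampling rate are illustrative assumptions, not the study's processing.

```python
# Sketch: spectral centroid ("spectral center") of a fricative-like noise band.
# Synthetic signal; illustration of the measured property only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 22050
noise = np.random.randn(fs // 2)                         # 0.5 s of white noise
sos = butter(4, [2500, 5500], btype="bandpass", fs=fs, output="sos")
fricative = sosfiltfilt(sos, noise)                      # [sh]-like noise band

spec = np.abs(np.fft.rfft(fricative)) ** 2
freqs = np.fft.rfftfreq(fricative.size, 1 / fs)
centroid = np.sum(freqs * spec) / np.sum(spec)
print("spectral centroid ~", round(centroid), "Hz")
```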

  6. Acoustic detail guides attention allocation in a selective listening task.

    PubMed

    Wöstmann, Malte; Schröger, Erich; Obleser, Jonas

    2015-05-01

    The flexible allocation of attention enables us to perceive and behave successfully despite irrelevant distractors. How do acoustic challenges influence this allocation of attention, and to what extent is this ability preserved in normally aging listeners? Younger and healthy older participants performed a masked auditory number comparison while EEG was recorded. To vary selective attention demands, we manipulated perceptual separability of spoken digits from a masking talker by varying acoustic detail (temporal fine structure). Listening conditions were adjusted individually to equalize stimulus audibility as well as the overall level of performance across participants. Accuracy increased, and response times decreased with more acoustic detail. The decrease in response times with more acoustic detail was stronger in the group of older participants. The onset of the distracting speech masker triggered a prominent contingent negative variation (CNV) in the EEG. Notably, CNV magnitude decreased parametrically with increasing acoustic detail in both age groups. Within identical levels of acoustic detail, larger CNV magnitude was associated with improved accuracy. Across age groups, neuropsychological markers further linked early CNV magnitude directly to individual attentional capacity. Results demonstrate for the first time that, in a demanding listening task, instantaneous acoustic conditions guide the allocation of attention. Second, such basic neural mechanisms of preparatory attention allocation seem preserved in healthy aging, despite impending sensory decline.

  7. Picosecond Acoustics in Single Quantum Wells of Cubic GaN /(Al ,Ga )N

    NASA Astrophysics Data System (ADS)

    Czerniuk, T.; Ehrlich, T.; Wecker, T.; As, D. J.; Yakovlev, D. R.; Akimov, A. V.; Bayer, M.

    2017-01-01

    A picosecond acoustic pulse is used to study the photoelastic interaction in single zinc-blende GaN /AlxGa1 -x N quantum wells. We use an optical time-resolved pump-probe setup and demonstrate that tuning the photon energy to the quantum well's lowest electron-hole transition makes the experiment sensitive to the quantum well only. Because of the small width, its temporal and spatial resolution allows us to track the few-picosecond-long transit of the acoustic pulse. We further deploy a model to analyze the unknown photoelastic coupling strength of the quantum well for different photon energies and find good agreement with the experiments.

  8. Strain and ground-motion monitoring at magmatic areas: ultra-long and ultra-dense networks using fibre optic sensing systems

    NASA Astrophysics Data System (ADS)

    Jousset, Philippe; Reinsch, Thomas; Henninges, Jan; Blanck, Hanna; Ryberg, Trond

    2016-04-01

    The fibre optic distributed acoustic sensing technology (DAS) is a "new" sensing system for exploring earth crustal elastic properties and monitoring both strain and seismic waves with unprecedented acquisition characteristics. The DAS technology principle lies in sending successive and coherent pulses of light into an optical fibre and measuring the back-scattered light issued from elastic scattering at random defects within the fibre. The read-out unit includes an interferometer, which measures light interference patterns continuously. The changes are related to the distance between such defects, and therefore the strain within the fibre can be detected. Along an optical fibre, DAS can be used to acquire acoustic signals with a high spatial (every meter over kilometres) and high temporal resolution (thousands of Hz). Fibre optic technologies were, up to now, mainly applied in perimeter surveillance applications, pipeline monitoring, and boreholes. Previous experiments in boreholes have shown that the DAS technology is well suited for probing subsurface elastic properties, opening new ways for cheaper VSP investigations of the Earth's crust. Here, we demonstrate that a cable deployed at the ground surface can also help in exploring subsurface properties at crustal scale and monitor earthquake activity in a volcanic environment. Within the framework of the EC funded project IMAGE, we monitored a >15 km-long fibre optic cable at the surface connected to a DAS read-out unit. Acoustic data were acquired continuously for 9 days. Hammer shots were performed along the surface cable in order to locate individual acoustic traces and calibrate the spatial distribution of the acoustic information. During the monitoring period, signals from both on- and offshore explosive sources and natural seismic events were recorded. We compare the fibre optic data to conventional seismic records from a dense seismic network deployed on Reykjanes. We show that we can probe and monitor the subsurface of the Earth's crust with dense acquisition of ground motion, both in space and in time, over a broadband frequency range.

  9. Influence of viscoelastic property on laser-generated surface acoustic waves in coating-substrate systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Hongxiang; Faculty of Science, Jiangsu University, Zhenjiang 212013; Zhang Shuyi

    2011-04-01

    Taking into account the viscoelasticity of materials, the pulsed laser generation of surface acoustic waves in coating-substrate systems has been investigated quantitatively by using the finite element method. The displacement spectra of the surface acoustic waves have been calculated in the frequency domain for different coating-substrate systems, in which the viscoelastic properties of the coatings and substrates are considered separately. Meanwhile, the temporal displacement waveforms have been obtained by applying inverse fast Fourier transforms. The numerical results of the normal surface displacements are presented for different configurations: a single plate, a slow coating on a fast substrate, and a fast coating on a slow substrate. The influences of the viscoelastic properties of the coating and the substrate on the attenuation of the surface acoustic waves have been studied. In addition, the influence of the coating thickness on the attenuation of the surface acoustic waves has also been investigated in detail.
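
    The step of recovering temporal waveforms from frequency-domain displacement spectra can be illustrated with a one-line inverse FFT; the synthetic spectrum below is an assumption for illustration and is unrelated to the paper's finite element results.

```python
# Sketch: recover a temporal displacement waveform from a one-sided frequency
# spectrum via an inverse FFT (synthetic spectrum; illustration only).
import numpy as np

df = 1e6                                    # frequency resolution, Hz
f = np.arange(0, 200e6, df)                 # 0-200 MHz band
spectrum = np.exp(-((f - 50e6) / 20e6) ** 2) * np.exp(-1j * 2 * np.pi * f * 1e-7)
u_t = np.fft.irfft(spectrum)                # temporal displacement waveform
dt = 1.0 / (df * u_t.size)                  # time step implied by the band
print(u_t.shape, "samples with time step", dt, "s")
```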

  10. Models and observations of foam coverage and bubble content in the surf zone

    NASA Astrophysics Data System (ADS)

    Kirby, J. T.; Shi, F.; Holman, R. A.

    2010-12-01

    Optical and acoustical observations and communications are hampered in the nearshore by the presence of bubbles and foam generated by breaking waves. Bubble clouds in the water column provide a highly variable (both spatially and temporally) obstacle to direct acoustic and optical paths. Persistent foam riding on the water surface creates a primary occlusion of optical penetration into the water column. In an effort to better understand and predict the level of bubble and foam content in the surfzone, we have been pursuing the development of a detailed phase resolved model of fluid and gaseous components of the water column, using a Navier-Stokes/VOF formulation extended to include a multiphase description of polydisperse bubble populations. This sort of modeling provides a detailed description of large scale turbulent structures and associated bubble transport mechanisms under breaking wave crests. The modeling technique is too computationally intensive, however, to provide a wider-scale description of large surfzone regions. In order to approach the larger scale problem, we are developing a model for spatial and temporal distribution of foam and bubbles within the framework of a Boussinesq model. The basic numerical framework for the code is described by Shi et al (2010, this conference). Bubble effects are incorporated both in the mass and momentum balances for weakly dispersive, fully nonlinear waves, with spatial and temporal bubble distributions parameterized based on the VOF modeling and measurements and tied to the computed rate of dissipation of energy during breaking. A model of a foam layer on the water surface is specified using a shallow water formulation. Foam mass conservation includes source and sink terms representing outgassing of the water column, direct foam generation due to surface agitation, and erosion due to bubble bursting. The foam layer motion in the plane of the water surface arises due to a balance of drag forces due to wind and water column motion. Preliminary steps to calibrate and verify the resulting models will be taken based on results to be collected during the Surf Zone Optics experiment at Duck, NC in September 2010. Initial efforts will focus on an examination of breaking wave patterns and persistent foam distributions, using ARGUS imagery.

  11. Avalanche correlations in the martensitic transition of a Cu-Zn-Al shape memory alloy: analysis of acoustic emission and calorimetry.

    PubMed

    Baró, Jordi; Martín-Olalla, José-María; Romero, Francisco Javier; Gallardo, María Carmen; Salje, Ekhard K H; Vives, Eduard; Planes, Antoni

    2014-03-26

    The existence of temporal correlations during the intermittent dynamics of a thermally driven structural phase transition is studied in a Cu-Zn-Al alloy. The sequence of avalanches is observed by means of two techniques: acoustic emission and high sensitivity calorimetry. Both methods reveal the existence of event clustering in a way that is equivalent to the Omori correlations between aftershocks in earthquakes as are commonly used in seismology.
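
    The Omori analogy invoked above has a standard functional form in seismology; it is shown here only for reference (K, c, and p are empirical constants fitted to each event catalogue, and no values are implied for this study).

```latex
% Modified Omori law: rate of aftershocks (here, correlated avalanches)
% at time t after a mainshock; K, c and p are fitted constants.
n(t) = \frac{K}{(t + c)^{p}}
```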

  12. Mixing in Shear Coaxial Jets with and without Acoustics (Briefing Charts)

    DTIC Science & Technology

    2012-05-21

    ... and heat transfer fluctuations in a rocket engine – irreparable damage can occur in ... Combustion instability caused a 4-yr delay in the ... common choice for cryogenic liquid rocket engines. Interactions of transverse acoustics with the injector's own modes and mixing need to be understood. [Briefing-chart residue: average snapshot and power spectral densities (PSD) of temporal coefficients of POMs 1 and 2, for cases Pr = 0.44 and LAR-thin, Pr = 0.44, J = 0.5.]

  13. Acoustic Observation of the Time Dependence of the Roughness of Sandy Seafloors

    DTIC Science & Technology

    2009-11-25

    Relations between acoustic and roughness temporal correlations are developed and applied. ... Fourier transform of the relief function as follows: ⟨F(K2, t2) F*(K1, t1)⟩ = W(K1, t1, t2) δ(K1 − K2) (6). The presence of the Dirac delta function is only appropriate if f(R, t) is stationary with infinite extent in the spatial coordinates. As a result of the windowing assumed here, the delta func...

  14. Active listening: task-dependent plasticity of spectrotemporal receptive fields in primary auditory cortex.

    PubMed

    Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab

    2005-08-01

    Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to the tonal target frequency in tone detection and discrimination, suppressed response to the tonal reference frequency in tone discrimination). However, only in the temporal tasks was the STRF changed along the temporal dimension, through a sharpening of temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at a network and single-unit level as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.

  15. Capturing the Acoustic Radiation Pattern of Strombolian Eruptions using Infrasound Sensors Aboard a Tethered Aerostat, Yasur Volcano, Vanuatu

    NASA Astrophysics Data System (ADS)

    Jolly, Arthur D.; Matoza, Robin S.; Fee, David; Kennedy, Ben M.; Iezzi, Alexandra M.; Fitzgerald, Rebecca H.; Austin, Allison C.; Johnson, Richard

    2017-10-01

    We obtained an unprecedented view of the acoustic radiation from persistent strombolian volcanic explosions at Yasur volcano, Vanuatu, from the deployment of infrasound sensors attached to a tethered aerostat. While traditional ground-based infrasound arrays may sample only a small portion of the eruption pressure wavefield, we were able to densely sample angular ranges of 200° in azimuth and 50° in takeoff angle by placing the aerostat at 38 tethered loiter positions around the active vent. The airborne data joined contemporaneously collected ground-based infrasound and video recordings over the period 29 July to 1 August 2016. We observe a persistent variation in the acoustic radiation pattern with average eastward directed root-mean-square pressures more than 2 times larger than in other directions. The observed radiation pattern may be related to both path effects from the crater walls, and source directionality.

  16. Analysis of the acoustic spectral signature of prosthetic heart valves in patients experiencing atrial fibrillation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, D.D.; Jones, H.E.

    1994-05-06

    Prosthetic heart valves have increased the life span of many patients with life threatening heart conditions. These valves have proven extremely reliable, adding years to what would have been weeks of a patient's life. Prosthetic valves, like the heart, however, can suffer from this constant workload. A small number of valves have experienced structural fractures of the outlet strut due to fatigue. To study this problem, a non-intrusive method to classify valves has been developed. By extracting from an acoustic signal the opening sounds, which directly contain information from the outlet strut, and then developing features which are supplied to an adaptive classification scheme (neural network), the condition of the valve can be determined. The opening sound extraction process has proved to be a classification problem itself. Due to the uniqueness of each heart and the occasional irregularity of the acoustic pattern, it is often questionable as to the integrity of a given signal (beat), especially one occurring during an irregular beat pattern. A common cause of these irregular patterns is a condition known as atrial fibrillation, a prevalent arrhythmia among patients with prosthetic heart valves. Atrial fibrillation is suspected when the ECG shows no obvious P-waves. The atria do not contract and relax correctly to help contribute to ventricular filling during a normal cardiac cycle. Sometimes this leads to irregular patterns in the acoustic data. This study compares normal beat patterns to irregular patterns of the same heart. By analyzing the spectral content of the beats it can be determined whether or not these irregular patterns can contribute to the classification of a heart valve or if they should be avoided. The results have shown that the opening sounds which occur during irregular beat patterns contain the same spectral information as the opening sounds which occur during a normal beat pattern of the same heart, and these beats can be used for classification.

  17. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing.

    PubMed

    Ölçer, İbrahim; Öncü, Ahmet

    2017-06-05

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry ( ϕ -OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ -OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ -OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that more than 10 dB of SNR values can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems.

  18. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing

    PubMed Central

    Ölçer, İbrahim; Öncü, Ahmet

    2017-01-01

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ-OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that more than 10 dB of SNR values can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems. PMID:28587240

  19. Frequency modulation detection in cochlear implant subjects

    NASA Astrophysics Data System (ADS)

    Chen, Hongbin; Zeng, Fan-Gang

    2004-10-01

    Frequency modulation (FM) detection was investigated in acoustic and electric hearing to characterize cochlear-implant subjects' ability to detect dynamic frequency changes and to assess the relative contributions of temporal and spectral cues to frequency processing. Difference limens were measured for frequency upward sweeps, downward sweeps, and sinusoidal FM as a function of standard frequency and modulation rate. In electric hearing, factors including electrode position and stimulation level were also studied. Electric hearing data showed that the difference limen increased monotonically as a function of standard frequency regardless of the modulation type, the modulation rate, the electrode position, and the stimulation level. In contrast, acoustic hearing data showed that the difference limen was nearly a constant as a function of standard frequency. This difference was interpreted to mean that temporal cues are used only at low standard frequencies and at low modulation rates. At higher standard frequencies and modulation rates, the reliance on the place cue is increased, accounting for the better performance in acoustic hearing than for electric hearing with single-electrode stimulation. The present data suggest a speech processing strategy that encodes slow frequency changes using lower stimulation rates than those typically employed by contemporary cochlear-implant speech processors. .

  20. Cross-language comparisons of contextual variation in the production and perception of vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred

    2005-04-01

    In the last two decades, a considerable amount of research has investigated second-language (L2) learners problems with perception and production of non-native vowels. Most studies have been conducted using stimuli in which the vowels are produced and presented in simple, citation-form (lists) monosyllabic or disyllabic utterances. In my laboratory, we have investigated the spectral (static/dynamic formant patterns) and temporal (syllable duration) variation in vowel productions as a function of speech-style (list/sentence utterances), speaking rate (normal/rapid), sentence focus (narrow focus/post-focus) and phonetic context (voicing/place of surrounding consonants). Data will be presented for a set of languages that include large and small vowel inventories, stress-, syllable-, and mora-timed prosody, and that vary in the phonological/phonetic function of vowel length, diphthongization, and palatalization. Results show language-specific patterns of contextual variation that affect the cross-language acoustic similarity of vowels. Research on cross-language patterns of perceived phonetic similarity by naive listeners suggests that listener's knowledge of native language (L1) patterns of contextual variation influences their L1/L2 similarity judgments and subsequently, their discrimination of L2 contrasts. Implications of these findings for assessing L2 learners perception of vowels and for developing laboratory training procedures to improve L2 vowel perception will be discussed. [Work supported by NIDCD.

  1. Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.

    PubMed

    Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison

    2016-11-01

    MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment.

  2. An Assessment of Stream Confluence Flow Dynamics using Large Scale Particle Image Velocimetry Captured from Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Lewis, Q. W.; Rhoads, B. L.

    2017-12-01

    The merging of rivers at confluences results in complex three-dimensional flow patterns that influence sediment transport, bed morphology, downstream mixing, and physical habitat conditions. The capacity to characterize comprehensively flow at confluences using traditional sensors, such as acoustic Doppler velocimeters and profiles, is limited by the restricted spatial resolution of these sensors and difficulties in measuring velocities simultaneously at many locations within a confluence. This study assesses two-dimensional surficial patterns of flow structure at a small stream confluence in Illinois, USA, using large scale particle image velocimetry (LSPIV) derived from videos captured by unmanned aerial systems (UAS). The method captures surface velocity patterns at high spatial and temporal resolution over multiple scales, ranging from the entire confluence to details of flow within the confluence mixing interface. Flow patterns at high momentum ratio are compared to flow patterns when the two incoming flows have nearly equal momentum flux. Mean surface flow patterns during the two types of events provide details on mean patterns of surface flow in different hydrodynamic regions of the confluence and on changes in these patterns with changing momentum flux ratio. LSPIV data derived from the highest resolution imagery also reveal general characteristics of large-scale vortices that form along the shear layer between the flows during the high-momentum ratio event. The results indicate that the use of LSPIV and UAS is well-suited for capturing in detail mean surface patterns of flow at small confluences, but that characterization of evolving turbulent structures is limited by scale considerations related to structure size, image resolution, and camera instability. Complementary methods, including camera platforms mounted at fixed positions close to the water surface, provide opportunities to accurately characterize evolving turbulent flow structures in confluences.

  3. Signals from the deep: Spatial and temporal acoustic occurrence of beaked whales off western Ireland.

    PubMed

    Kowarski, Katie; Delarue, Julien; Martin, Bruce; O'Brien, Joanne; Meade, Rossa; Ó Cadhla, Oliver; Berrow, Simon

    2018-01-01

    Little is known of the spatio-temporal occurrence of beaked whales off western Ireland, limiting the ability of Regulators to implement appropriate management and conservation measures. To address this knowledge gap, static acoustic monitoring was carried out using eight fixed bottom-mounted autonomous acoustic recorders: four from May to December 2015 on Ireland's northern slope and four from March to November 2016 on the western and southern slopes. Recorders ran for 205 to 230 days, resulting in 4.09 TB of data sampled at 250 kHz which could capture beaked whale acoustic signals. Zero-crossing-based automated detectors identified beaked whale clicks. A sample of detections was manually validated to evaluate and optimize detector performance. Analysis confirmed the occurrence of Sowerby's and Cuvier's beaked whales and Northern bottlenose whales. Northern bottlenose whale clicks occurred in late summer and autumn, but were too few to allow further analysis. Cuvier's and Sowerby's clicks occurred at all stations throughout the monitoring period. There was a significant effect of month and station (latitude) on the mean daily number of click detections for both species. Cuvier's clicks were more abundant at lower latitudes while Sowerby's were greater at higher latitudes, particularly in the spring, suggesting a spatial segregation between species, possibly driven by prey preference. Cuvier's occurrence increased in late autumn 2015 off northwest Porcupine Bank, a region of higher relative occurrence for each species. Seismic airgun shots, with daily sound exposure levels as high as 175 dB re 1 μPa2·s, did not appear to impact the mean daily number of Cuvier's or Sowerby's beaked whale click detections. This work provides insight into the significance of Irish waters for beaked whales and highlights the importance of using acoustics for beaked whale monitoring.

  4. Design and Characterization of an Acoustically and Structurally Matched 3-D-Printed Model for Transcranial Ultrasound Imaging.

    PubMed

    Bai, Chen; Ji, Meiling; Bouakaz, Ayache; Zong, Yujin; Wan, Mingxi

    2018-05-01

    For investigating human transcranial ultrasound imaging (TUI) through the temporal bone, an intact human skull is needed. Since it is complex and expensive to obtain one, it requires that experiments are performed without excision or abrasion of the skull. Besides, to mimic blood circulation for the vessel target, cellulose tubes generally fit the vessel simulation with straight linear features. These issues, which limit experimental studies, can be overcome by designing a 3-D-printed skull model with acoustic and dimensional properties that match a real skull and a vessel model with curve and bifurcation. First, the optimal printing material which matched a real skull in terms of the acoustic attenuation coefficient and sound propagation velocity was identified at 2-MHz frequency, i.e., 7.06 dB/mm and 2168.71 m/s for the skull while 6.98 dB/mm and 2114.72 m/s for the printed material, respectively. After modeling, the average thickness of the temporal bone in the printed skull was about 1.8 mm, while it was to 1.7 mm in the real skull. Then, a vascular phantom was designed with 3-D-printed vessels of low acoustic attenuation (0.6 dB/mm). It was covered with a porcine brain tissue contained within a transparent polyacrylamide gel. After characterizing the acoustic consistency, based on the designed skull model and vascular phantom, vessels with inner diameters of 1 and 0.7 mm were distinguished by resolution enhanced imaging with low frequency. Measurements and imaging results proved that the model and phantom are authentic and viable alternatives, and will be of interest for TUI, high intensity focused ultrasound, or other therapy studies.

  5. Temporal patterns of the use of non-prescribed drugs.

    PubMed

    Sinnett, E R; Morris, J B

    1977-12-01

    Licit and illicit non-prescribed drugs, regardless of their classification, are used in a common temporal pattern with the possible exceptions of caffeine and cocaine. The temporal patterns of drug use are highly correlated with the nationwide temporal pattern of TV watching, suggesting a pleasure-oriented, recreational use. The peak times for substance use and abuse may have implications for the delivery of professional or paraprofessional services.

  6. Temporal motifs reveal homophily, gender-specific patterns, and group talk in call sequences.

    PubMed

    Kovanen, Lauri; Kaski, Kimmo; Kertész, János; Saramäki, Jari

    2013-11-05

    Recent studies on electronic communication records have shown that human communication has complex temporal structure. We study how communication patterns that involve multiple individuals are affected by attributes such as sex and age. To this end, we represent the communication records as a colored temporal network where node color is used to represent individuals' attributes, and identify patterns known as temporal motifs. We then construct a null model for the occurrence of temporal motifs that takes into account the interaction frequencies and connectivity between nodes of different colors. This null model allows us to detect significant patterns in call sequences that cannot be observed in a static network that uses interaction frequencies as link weights. We find sex-related differences in communication patterns in a large dataset of mobile phone records and show the existence of temporal homophily, the tendency of similar individuals to participate in communication patterns beyond what would be expected on the basis of their average interaction frequencies. We also show that temporal patterns differ between dense and sparse neighborhoods in the network. Because also this result is independent of interaction frequencies, it can be seen as an extension of Granovetter's hypothesis to temporal networks.

  7. Temporal motifs reveal homophily, gender-specific patterns, and group talk in call sequences

    PubMed Central

    Kovanen, Lauri; Kaski, Kimmo; Kertész, János; Saramäki, Jari

    2013-01-01

    Recent studies on electronic communication records have shown that human communication has complex temporal structure. We study how communication patterns that involve multiple individuals are affected by attributes such as sex and age. To this end, we represent the communication records as a colored temporal network where node color is used to represent individuals’ attributes, and identify patterns known as temporal motifs. We then construct a null model for the occurrence of temporal motifs that takes into account the interaction frequencies and connectivity between nodes of different colors. This null model allows us to detect significant patterns in call sequences that cannot be observed in a static network that uses interaction frequencies as link weights. We find sex-related differences in communication patterns in a large dataset of mobile phone records and show the existence of temporal homophily, the tendency of similar individuals to participate in communication patterns beyond what would be expected on the basis of their average interaction frequencies. We also show that temporal patterns differ between dense and sparse neighborhoods in the network. Because also this result is independent of interaction frequencies, it can be seen as an extension of Granovetter’s hypothesis to temporal networks. PMID:24145424

  8. Non-Linear Acoustic Concealed Weapons Detector

    DTIC Science & Technology

    2006-05-01

    signature analysis 8 the interactions of the beams with concealed objects. The Khokhlov- Zabolotskaya-Kuznetsov ( KZK ) equation is the most widely used...Hamilton developed a finite difference method based on the KZK equation to model pulsed acoustic emissions from axial symmetric sources. Using a...College of William & Mary, we have developed a simulation code using the KZK equation to model non-linear acoustic beams and visualize beam patterns

  9. Acoustic and Perceptual Effects of Dysarthria in Greek with a Focus on Lexical Stress

    NASA Astrophysics Data System (ADS)

    Papakyritsis, Ioannis

    The field of motor speech disorders in Greek is substantially underresearched. Additionally, acoustic studies on lexical stress in dysarthria are generally very rare (Kim et al. 2010). This dissertation examined the acoustic and perceptual effects of Greek dysarthria focusing on lexical stress. Additional possibly deviant speech characteristics were acoustically analyzed. Data from three dysarthric participants and matched controls was analyzed using a case study design. The analysis of lexical stress was based on data drawn from a single word repetition task that included pairs of disyllabic words differentiated by stress location. This data was acoustically analyzed in terms of the use of the acoustic cues for Greek stress. The ability of the dysarthric participants to signal stress in single words was further assessed in a stress identification task carried out by 14 naive Greek listeners. Overall, the acoustic and perceptual data indicated that, although all three dysarthric speakers presented with some difficulty in the patterning of stressed and unstressed syllables, each had different underlying problems that gave rise to quite distinct patterns of deviant speech characteristics. The atypical use of lexical stress cues in Anna's data obscured the prominence relations of stressed and unstressed syllables to the extent that the position of lexical stress was usually not perceptually transparent. Chris and Maria on the other hand, did not have marked difficulties signaling lexical stress location, although listeners were not 100% successful in the stress identification task. For the most part, Chris' atypical phonation patterns and Maria's very slow rate of speech did not interfere with lexical stress signaling. The acoustic analysis of the lexical stress cues was generally in agreement with the participants' performance in the stress identification task. Interestingly, in all three dysarthric participants, but more so in Anna, targets stressed on the 1st syllable were more impervious to error judgments of lexical stress location than targets stressed on the 2nd syllable, although the acoustic metrics did not always suggest a more appropriate use of lexical stress cues in 1st syllable position. The findings contribute to our limited knowledge of the speech characteristics of dysarthria across different languages.

  10. Diel activity of Gulf of Mexico sturgeon in a northwest Florida bay

    USGS Publications Warehouse

    Wrege, B.M.; Duncan, M.S.; Isely, J.J.

    2011-01-01

    In this paper, we assess patterns in activity of Gulf of Mexico sturgeon Acipenser oxyrinchus desotoi over a 24-h period in the Pensacola bay system, Florida. Although seasonal migration of sturgeon is well documented, little information is available pertaining to daily variation in activity. We surgically implanted 58 Gulf sturgeon with acoustic transmitters in the Escambia (n=26), Yellow (n=8), Blackwater (n=12) and Choctawhatchee rivers (n=12) in June, July, September and October 2005. Gulf sturgeon location was monitored using an array of 56 fixed-station acoustic receivers. The relationship between frequency of Gulf sturgeon observations recorded on all acoustic receivers and time of day for all seasons combined indicated a strong diel activity pattern. Gulf sturgeon were frequently detected at night in all seasons with the exception of summer. Consecutive hourly observations indicated lateral movement of Gulf sturgeon between independent acoustic receivers on 15% of all observations of individuals. The use of an acoustic receiver array not only provides continuous data within a defined area, but also provides insight into nocturnal behavior of Gulf sturgeon not previously identified. ?? 2011 Blackwell Verlag, Berlin.

  11. Postural stability of preoperative acoustic neuroma patients assessed by sway magnetometry: are they unsteady?

    PubMed

    Collins, Melanie M; Johnson, Ian J M; Clifford, Elaine; Birchall, John P; O'Donoghue, Gerald M

    2003-04-01

    The objective was to evaluate the preoperative postural stability of acoustic neuroma patients using sway magnetometry. Prospective two-center study. Fifty-one patients (mean age, 53 years) diagnosed with unilateral acoustic neuroma on magnetic resonance imaging at two tertiary referral centers were studied. Preoperatively, each patient had sway patterns (with eyes open and with eyes closed, and standing on foam) recorded for 120 seconds by sway magnetometry. Path length for 30 seconds was calculated. The Romberg coefficient (path length with eyes open divided by path length with eyes closed) was calculated. Forty-four percent of patients had abnormal path lengths with eyes open, and 49% with eyes closed. The Romberg coefficients were significantly lower than normal (P <.001; 95% CI, 0.19-0.87). Mean Romberg coefficient was 0.59 (normal value = 0.73), and all patients had a coefficient of less than 1. Half of preoperative acoustic neuroma patients are unsteady, exhibiting abnormal sway patterns based on path length measurements. The increase in sway path length demonstrable in normal subjects with eyes closed was significantly exaggerated in patients with acoustic neuroma.

  12. Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha

    PubMed Central

    Kayser, Stephanie J.; Ince, Robin A.A.; Gross, Joachim

    2015-01-01

    The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflect functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms. PMID:26538641

  13. Spoken language achieves robustness and evolvability by exploiting degeneracy and neutrality.

    PubMed

    Winter, Bodo

    2014-10-01

    As with biological systems, spoken languages are strikingly robust against perturbations. This paper shows that languages achieve robustness in a way that is highly similar to many biological systems. For example, speech sounds are encoded via multiple acoustically diverse, temporally distributed and functionally redundant cues, characteristics that bear similarities to what biologists call "degeneracy". Speech is furthermore adequately characterized by neutrality, with many different tongue configurations leading to similar acoustic outputs, and different acoustic variants understood as the same by recipients. This highlights the presence of a large neutral network of acoustic neighbors for every speech sound. Such neutrality ensures that a steady backdrop of variation can be maintained without impeding communication, assuring that there is "fodder" for subsequent evolution. Thus, studying linguistic robustness is not only important for understanding how linguistic systems maintain their functioning upon the background of noise, but also for understanding the preconditions for language evolution. © 2014 WILEY Periodicals, Inc.

  14. Measurement of thin films using very long acoustic wavelengths

    NASA Astrophysics Data System (ADS)

    Clement, G. T.; Nomura, H.; Adachi, H.; Kamakura, T.

    2013-12-01

    A procedure for measuring material thickness by means of necessarily long acoustic wavelengths is examined. The approach utilizes a temporal phase lag caused by the impulse time of wave momentum transferred through a thin layer that is much denser than its surrounding medium. In air, it is predicted that solid or liquid layers below approximately 1/2000 of the acoustic wavelength will exhibit a phase shift with an arctangent functional dependence on thickness and layer density. The effect is verified for thin films on the scale of 10 μm using audible frequency sound (7 kHz). Soap films as thin as 100 nm are then measured using 40 kHz air ultrasound. The method's potential for imaging applications is demonstrated by combining the approach with near-field holography, resulting in reconstructions with sub-wavelength resolution in both the depth and lateral directions. Potential implications at very high and very low acoustic frequencies are discussed.

  15. Acoustic occurrence detection of a newly recorded Indo-Pacific humpback dolphin population in waters southwest of Hainan Island, China.

    PubMed

    Dong, Lijun; Liu, Mingming; Dong, Jianchen; Li, Songhai

    2017-11-01

    In 2014, Indo-Pacific humpback dolphins were recorded for the first time in waters southwest of Hainan Island, China. In this paper, the temporal occurrence of Indo-Pacific humpback dolphins in this region was detected by stationary passive acoustic monitoring. During the 130-day observation period (from January to July 2016), 1969 click trains produced by Indo-Pacific humpback dolphins were identified, and 262 ten-minute recording bins contained echolocation click trains of dolphins, of which 70.9% were at night and 29.1% were during the day. A diurnal rhythm with a nighttime peak in acoustic detections was found. Passive acoustic detections indicated that the Indo-Pacific humpback dolphins frequently occurred in this area and were detected mainly at night. This information may be relevant to conservation efforts for these dolphins in the near future.

  16. Listening to the Deep: live monitoring of ocean noise and cetacean acoustic signals.

    PubMed

    André, M; van der Schaar, M; Zaugg, S; Houégnigan, L; Sánchez, A M; Castell, J V

    2011-01-01

    The development and broad use of passive acoustic monitoring techniques have the potential to help assessing the large-scale influence of artificial noise on marine organisms and ecosystems. Deep-sea observatories have the potential to play a key role in understanding these recent acoustic changes. LIDO (Listening to the Deep Ocean Environment) is an international project that is allowing the real-time long-term monitoring of marine ambient noise as well as marine mammal sounds at cabled and standalone observatories. Here, we present the overall development of the project and the use of passive acoustic monitoring (PAM) techniques to provide the scientific community with real-time data at large spatial and temporal scales. Special attention is given to the extraction and identification of high frequency cetacean echolocation signals given the relevance of detecting target species, e.g. beaked whales, in mitigation processes, e.g. during military exercises. Copyright © 2011. Published by Elsevier Ltd.

  17. Simulation of Jet Noise with OVERFLOW CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, M.; Caimi, R.; Voska, N. (Technical Monitor)

    2002-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  18. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  19. Acoustic metamaterials with circular sector cavities and programmable densities.

    PubMed

    Akl, W; Elsabbagh, A; Baz, A

    2012-10-01

    Considerable interest has been devoted to the development of various classes of acoustic metamaterials that can control the propagation of acoustical wave energy throughout fluid domains. However, all the currently exerted efforts are focused on studying passive metamaterials with fixed material properties. In this paper, the emphasis is placed on the development of a class of composite one-dimensional acoustic metamaterials with effective densities that are programmed to adapt to any prescribed pattern along the metamaterial. The proposed acoustic metamaterial is composed of a periodic arrangement of cell structures, in which each cell consists of a circular sector cavity bounded by actively controlled flexible panels to provide the capability for manipulating the overall effective dynamic density. The theoretical analysis of this class of multilayered composite active acoustic metamaterials (CAAMM) is presented and the theoretical predictions are determined for a cascading array of fluid cavities coupled to flexible piezoelectric active boundaries forming the metamaterial domain with programmable dynamic density. The stiffness of the piezoelectric boundaries is electrically manipulated to control the overall density of the individual cells utilizing the strong coupling with the fluid domain and using direct acoustic pressure feedback. The interaction between the neighboring cells of the composite metamaterial is modeled using a lumped-parameter approach. Numerical examples are presented to demonstrate the performance characteristics of the proposed CAAMM and its potential for generating prescribed spatial and spectral patterns of density variation.

  20. Surface Acoustic Waves Grant Superior Spatial Control of Cells Embedded in Hydrogel Fibers.

    PubMed

    Lata, James P; Guo, Feng; Guo, Jinshan; Huang, Po-Hsun; Yang, Jian; Huang, Tony Jun

    2016-10-01

    By exploiting surface acoustic waves and a coupling layer technique, cells are patterned within a photosensitive hydrogel fiber to mimic physiological cell arrangement in tissues. The aligned cell-polymer matrix is polymerized with short exposure to UV light and the fiber is extracted. These patterned cell fibers are manipulated into simple and complex architectures, demonstrating feasibility for tissue-engineering applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Acoustic measurements of the spatial and temporal structure of the near-bottom boundary layer in the 1990-1991 STRESS experiment

    NASA Astrophysics Data System (ADS)

    Lynch, James F.; Irish, James D.; Gross, Thomas F.; Wiberg, Patricia L.; Newhall, Arthur E.; Traykovski, Peter A.; Warren, Joseph D.

    1997-08-01

    As part of the 1990-1991 Sediment TRansport Events on Shelves and Slopes (STRESS) experiment, a 5 MHz Acoustic BackScatter System (ABSS) was deployed in 90 m of water to measure vertical profiles of near-bottom suspended sediment concentration. By looking at the vertical profile of concentration from 0 to 50 cm above bottom (cmab) with 1 cm vertical resolution, the ABSS was able to examine the detailed structure of the bottom boundary layer created by combined wave and current stresses. The acoustic profiles clearly showed the wave-current boundary layer, which extends to (order) 10 cmab. The profiles also showed evidence of an "intermediate" boundary layer, also influenced by combined wave and current stresses, just above the wave-current boundary layer. This paper examines the boundary-layer structure by comparing acoustic data obtained by the authors to a 1-D eddy viscosity model formulation. Specifically, these data are compared to a simple extension of the Grant-Glenn-Madsen model formulation. Also of interest is the appearance of apparently 3-D "advective plume" structures in these data. This is an interesting feature in a site which was initially chosen to be a good example of (temporally averaged) 1-D bottom boundary-layer dynamics. Computer modeling and sector-scanning sonar images are presented to justify the plausibility of observing 3-D structure at the STRESS site. 1997 Elsevier Science Ltd

  2. Resection planning for robotic acoustic neuroma surgery

    NASA Astrophysics Data System (ADS)

    McBrayer, Kepra L.; Wanna, George B.; Dawant, Benoit M.; Balachandran, Ramya; Labadie, Robert F.; Noble, Jack H.

    2016-03-01

    Acoustic neuroma surgery is a procedure in which a benign mass is removed from the Internal Auditory Canal (IAC). Currently this surgical procedure requires manual drilling of the temporal bone followed by exposure and removal of the acoustic neuroma. This procedure is physically and mentally taxing to the surgeon. Our group is working to develop an Acoustic Neuroma Surgery Robot (ANSR) to perform the initial drilling procedure. Planning the ANSR's drilling region using pre-operative CT requires expertise and around 35 minutes' time. We propose an approach for automatically producing a resection plan for the ANSR that would avoid damage to sensitive ear structures and require minimal editing by the surgeon. We first compute an atlas-based segmentation of the mastoid section of the temporal bone, refine it based on the position of anatomical landmarks, and apply a safety margin to the result to produce the automatic resection plan. In experiments with CTs from 9 subjects, our automated process resulted in a resection plan that was verified to be safe in every case. Approximately 2 minutes were required in each case for the surgeon to verify and edit the plan to permit functional access to the IAC. We measured a mean Dice coefficient of 0.99 and surface error of 0.08 mm between the final and automatically proposed plans. These preliminary results indicate that our approach is a viable method for resection planning for the ANSR and drastically reduces the surgeon's planning effort.

  3. Effects of singing training on the speaking voice of voice majors.

    PubMed

    Mendes, Ana P; Brown, W S; Rothman, Howard B; Sapienza, Christine

    2004-03-01

    This longitudinal study gathered data with regard to the question: Does singing training have an effect on the speaking voice? Fourteen voice majors (12 females and two males; age range 17 to 20 years) were recorded once a semester for four consecutive semesters, while sustaining vowels and reading the "Rainbow Passage." Acoustic measures included speaking fundamental frequency (SFF) and sound pressure level (SLP). Perturbation measures included jitter, shimmer, and harmonic-to-noise ratio. Temporal measures included sentence, consonant, and diphthong durations. Results revealed that, as the number of semesters increased, the SFF increased while jitter and shimmer slightly decreased. Repeated measure analysis, however, indicated that none of the acoustic, temporal, or perturbation differences were statistically significant. These results confirm earlier cross-sectional studies that compared singers with nonsingers, in that singing training mostly affects the singing voice and rarely the speaking voice.

  4. Listeners modulate temporally selective attention during natural speech processing

    PubMed Central

    Astheimer, Lori B.; Sanders, Lisa D.

    2009-01-01

    Spatially selective attention allows for the preferential processing of relevant stimuli when more information than can be processed in detail is presented simultaneously at distinct locations. Temporally selective attention may serve a similar function during speech perception by allowing listeners to allocate attentional resources to time windows that contain highly relevant acoustic information. To test this hypothesis, event-related potentials were compared in response to attention probes presented in six conditions during a narrative: concurrently with word onsets, beginning 50 and 100 ms before and after word onsets, and at random control intervals. Times for probe presentation were selected such that the acoustic environments of the narrative were matched for all conditions. Linguistic attention probes presented at and immediately following word onsets elicited larger amplitude N1s than control probes over medial and anterior regions. These results indicate that native speakers selectively process sounds presented at specific times during normal speech perception. PMID:18395316

  5. Accurate Sound Localization in Reverberant Environments is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    PubMed Central

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-01-01

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. PMID:19376072

  6. Depth estimation of laser glass drilling based on optical differential measurements of acoustic response

    NASA Astrophysics Data System (ADS)

    Gorodesky, Niv; Ozana, Nisan; Berg, Yuval; Dolev, Omer; Danan, Yossef; Kotler, Zvi; Zalevsky, Zeev

    2016-09-01

    We present the first steps of a device suitable for characterization of complex 3D micro-structures. This method is based on an optical approach allowing extraction and separation of high frequency ultrasonic sound waves induced to the analyzed samples. Rapid, non-destructive characterization of 3D micro-structures are limited in terms of geometrical features and optical properties of the sample. We suggest a method which is based on temporal tracking of secondary speckle patterns generated when illuminating a sample with a laser probe while applying known periodic vibration using an ultrasound transmitter. In this paper we investigated lasers drilled through glass vias. The large aspect ratios of the vias possess a challenge for traditional microscopy techniques in analyzing depth and taper profiles of the vias. The correlation of the amplitude vibrations to the vias depths is experimentally demonstrated.

  7. Analysis of the inversion monitoring capabilities of a monostatic acoustic radar in complex terrain. [Tennessee River Valley

    NASA Technical Reports Server (NTRS)

    Koepf, D.; Frost, W.

    1981-01-01

    A qualitative interpretation of the records from a monostatic acoustic radar is presented. This is achieved with the aid of airplane, helicopter, and rawinsonde temperature soundings. The diurnal structure of a mountain valley circulation pattern is studied with the use of two acoustic radars, one located in the valley and one on the downwind ridge. The monostatic acoustic radar was found to be sufficiently accurate in locating the heights of the inversions and the mixed layer depth to warrant use by industry even in complex terrain.

  8. Rapid calculation of acoustic fields from arbitrary continuous-wave sources.

    PubMed

    Treeby, Bradley E; Budisky, Jakub; Wise, Elliott S; Jaros, Jiri; Cox, B T

    2018-01-01

    A Green's function solution is derived for calculating the acoustic field generated by phased array transducers of arbitrary shape when driven by a single frequency continuous wave excitation with spatially varying amplitude and phase. The solution is based on the Green's function for the homogeneous wave equation expressed in the spatial frequency domain or k-space. The temporal convolution integral is solved analytically, and the remaining integrals are expressed in the form of the spatial Fourier transform. This allows the acoustic pressure for all spatial positions to be calculated in a single step using two fast Fourier transforms. The model is demonstrated through several numerical examples, including single element rectangular and spherically focused bowl transducers, and multi-element linear and hemispherical arrays.

  9. Measurements of the power spectrum and dispersion relation of self-excited dust acoustic waves

    NASA Astrophysics Data System (ADS)

    Nosenko, V.; Zhdanov, S. K.; Kim, S.-H.; Heinrich, J.; Merlino, R. L.; Morfill, G. E.

    2009-12-01

    The spectrum of spontaneously excited dust acoustic waves was measured. The waves were observed with high temporal resolution using a fast video camera operating at 1000 frames per second. The experimental system was a suspension of micron-size kaolin particles in the anode region of a dc discharge in argon. Wave activity was found at frequencies as high as 450 Hz. At high wave numbers, the wave dispersion relation was acoustic-like (frequency proportional to wave number). At low wave numbers, the wave frequency did not tend to zero, but reached a cutoff frequency instead. The cutoff value declined with distance from the anode. We ascribe the observed cutoff to the particle confinement in this region.

  10. Efficacy of extracting indices from large-scale acoustic recordings to monitor biodiversity.

    PubMed

    Buxton, Rachel; McKenna, Megan F; Clapp, Mary; Meyer, Erik; Stabenau, Erik; Angeloni, Lisa M; Crooks, Kevin; Wittemyer, George

    2018-04-20

    Passive acoustic monitoring has the potential to be a powerful approach for assessing biodiversity across large spatial and temporal scales. However, extracting meaningful information from recordings can be prohibitively time consuming. Acoustic indices offer a relatively rapid method for processing acoustic data and are increasingly used to characterize biological communities. We examine the ability of acoustic indices to predict the diversity and abundance of biological sounds within recordings. First we reviewed the acoustic index literature and found that over 60 indices have been applied to a range of objectives with varying success. We then implemented a subset of the most successful indices on acoustic data collected at 43 sites in temperate terrestrial and tropical marine habitats across the continental U.S., developing a predictive model of the diversity of animal sounds observed in recordings. For terrestrial recordings, random forest models using a suite of acoustic indices as covariates predicted Shannon diversity, richness, and total number of biological sounds with high accuracy (R 2 > = 0.94, mean squared error MSE < = 170.2). Among the indices assessed, roughness, acoustic activity, and acoustic richness contributed most to the predictive ability of models. Performance of index models was negatively impacted by insect, weather, and anthropogenic sounds. For marine recordings, random forest models predicted Shannon diversity, richness, and total number of biological sounds with low accuracy (R 2 < = 0.40, MSE > = 195), indicating that alternative methods are necessary in marine habitats. Our results suggest that using a combination of relevant indices in a flexible model can accurately predict the diversity of biological sounds in temperate terrestrial acoustic recordings. Thus, acoustic approaches could be an important contribution to biodiversity monitoring in some habitats in the face of accelerating human-caused ecological change. This article is protected by copyright. All rights reserved.

  11. Acoustic interference and recognition space within a complex assemblage of dendrobatid frogs

    PubMed Central

    Amézquita, Adolfo; Flechas, Sandra Victoria; Lima, Albertina Pimentel; Gasser, Herbert; Hödl, Walter

    2011-01-01

    In species-rich assemblages of acoustically communicating animals, heterospecific sounds may constrain not only the evolution of signal traits but also the much less-studied signal-processing mechanisms that define the recognition space of a signal. To test the hypothesis that the recognition space is optimally designed, i.e., that it is narrower toward the species that represent the higher potential for acoustic interference, we studied an acoustic assemblage of 10 diurnally active frog species. We characterized their calls, estimated pairwise correlations in calling activity, and, to model the recognition spaces of five species, conducted playback experiments with 577 synthetic signals on 531 males. Acoustic co-occurrence was not related to multivariate distance in call parameters, suggesting a minor role for spectral or temporal segregation among species uttering similar calls. In most cases, the recognition space overlapped but was greater than the signal space, indicating that signal-processing traits do not act as strictly matched filters against sounds other than homospecific calls. Indeed, the range of the recognition space was strongly predicted by the acoustic distance to neighboring species in the signal space. Thus, our data provide compelling evidence of a role of heterospecific calls in evolutionarily shaping the frogs' recognition space within a complex acoustic assemblage without obvious concomitant effects on the signal. PMID:21969562

  12. Taking advantage of acoustic inhomogeneities in photoacoustic measurements

    NASA Astrophysics Data System (ADS)

    Da Silva, Anabela; Handschin, Charles; Riedinger, Christophe; Piasecki, Julien; Mensah, Serge; Litman, Amélie; Akhouayri, Hassan

    2016-03-01

    Photoacoustic offers promising perspectives in probing and imaging subsurface optically absorbing structures in biological tissues. The optical uence absorbed is partly dissipated into heat accompanied with microdilatations that generate acoustic pressure waves, the intensity which is related to the amount of fluuence absorbed. Hence the photoacoustic signal measured offers access, at least potentially, to a local monitoring of the absorption coefficient, in 3D if tomographic measurements are considered. However, due to both the diffusing and absorbing nature of the surrounding tissues, the major part of the uence is deposited locally at the periphery of the tissue, generating an intense acoustic pressure wave that may hide relevant photoacoustic signals. Experimental strategies have been developed in order to measure exclusively the photoacoustic waves generated by the structure of interest (orthogonal illumination and detection). Temporal or more sophisticated filters (wavelets) can also be applied. However, the measurement of this primary acoustic wave carries a lot of information about the acoustically inhomogeneous nature of the medium. We propose a protocol that includes the processing of this primary intense acoustic wave, leading to the quantification of the surrounding medium sound speed, and, if appropriate to an acoustical parametric image of the heterogeneities. This information is then included as prior knowledge in the photoacoustic reconstruction scheme to improve the localization and quantification.

  13. Perceptual weighting of individual and concurrent cues for sentence intelligibility: Frequency, envelope, and fine structure

    PubMed Central

    Fogerty, Daniel

    2011-01-01

    The speech signal may be divided into frequency bands, each containing temporal properties of the envelope and fine structure. For maximal speech understanding, listeners must allocate their perceptual resources to the most informative acoustic properties. Understanding this perceptual weighting is essential for the design of assistive listening devices that need to preserve these important speech cues. This study measured the perceptual weighting of young normal-hearing listeners for the envelope and fine structure in each of three frequency bands for sentence materials. Perceptual weights were obtained under two listening contexts: (1) when each acoustic property was presented individually and (2) when multiple acoustic properties were available concurrently. The processing method was designed to vary the availability of each acoustic property independently by adding noise at different levels. Perceptual weights were determined by correlating a listener’s performance with the availability of each acoustic property on a trial-by-trial basis. Results demonstrated that weights were (1) equal when acoustic properties were presented individually and (2) biased toward envelope and mid-frequency information when multiple properties were available. Results suggest a complex interaction between the available acoustic properties and the listening context in determining how best to allocate perceptual resources when listening to speech in noise. PMID:21361454

  14. Sound Waves Levitate Substrates

    NASA Technical Reports Server (NTRS)

    Lee, M. C.; Wang, T. G.

    1982-01-01

    System recently tested uses acoustic waves to levitate liquid drops, millimeter-sized glass microballoons, and other objects for coating by vapor deposition or capillary attraction. Cylindrical contactless coating/handling facility employs a cylindrical acoustic focusing radiator and a tapered reflector to generate a specially-shaped standing wave pattern. Article to be processed is captured by the acoustic force field under the reflector and moves as reflector is moved to different work stations.

  15. Processing of Natural Echolocation Sequences in the Inferior Colliculus of Seba’s Fruit Eating Bat, Carollia perspicillata

    PubMed Central

    Kordes, Sebastian; Kössl, Manfred

    2017-01-01

    Abstract For the purpose of orientation, echolocating bats emit highly repetitive and spatially directed sonar calls. Echoes arising from call reflections are used to create an acoustic image of the environment. The inferior colliculus (IC) represents an important auditory stage for initial processing of echolocation signals. The present study addresses the following questions: (1) how does the temporal context of an echolocation sequence mimicking an approach flight of an animal affect neuronal processing of distance information to echo delays? (2) how does the IC process complex echolocation sequences containing echo information from multiple objects (multiobject sequence)? Here, we conducted neurophysiological recordings from the IC of ketamine-anaesthetized bats of the species Carollia perspicillata and compared the results from the IC with the ones from the auditory cortex (AC). Neuronal responses to an echolocation sequence was suppressed when compared to the responses to temporally isolated and randomized segments of the sequence. The neuronal suppression was weaker in the IC than in the AC. In contrast to the cortex, the time course of the acoustic events is reflected by IC activity. In the IC, suppression sharpens the neuronal tuning to specific call-echo elements and increases the signal-to-noise ratio in the units’ responses. When presenting multiple-object sequences, despite collicular suppression, the neurons responded to each object-specific echo. The latter allows parallel processing of multiple echolocation streams at the IC level. Altogether, our data suggests that temporally-precise neuronal responses in the IC could allow fast and parallel processing of multiple acoustic streams. PMID:29242823

  16. Processing of Natural Echolocation Sequences in the Inferior Colliculus of Seba's Fruit Eating Bat, Carollia perspicillata.

    PubMed

    Beetz, M Jerome; Kordes, Sebastian; García-Rosales, Francisco; Kössl, Manfred; Hechavarría, Julio C

    2017-01-01

    For the purpose of orientation, echolocating bats emit highly repetitive and spatially directed sonar calls. Echoes arising from call reflections are used to create an acoustic image of the environment. The inferior colliculus (IC) represents an important auditory stage for initial processing of echolocation signals. The present study addresses the following questions: (1) how does the temporal context of an echolocation sequence mimicking an approach flight of an animal affect neuronal processing of distance information to echo delays? (2) how does the IC process complex echolocation sequences containing echo information from multiple objects (multiobject sequence)? Here, we conducted neurophysiological recordings from the IC of ketamine-anaesthetized bats of the species Carollia perspicillata and compared the results from the IC with those from the auditory cortex (AC). Neuronal responses to an echolocation sequence were suppressed when compared to the responses to temporally isolated and randomized segments of the sequence. The neuronal suppression was weaker in the IC than in the AC. In contrast to the cortex, the time course of the acoustic events is reflected by IC activity. In the IC, suppression sharpens the neuronal tuning to specific call-echo elements and increases the signal-to-noise ratio in the units' responses. When presenting multiple-object sequences, despite collicular suppression, the neurons responded to each object-specific echo. The latter allows parallel processing of multiple echolocation streams at the IC level. Altogether, our data suggest that temporally precise neuronal responses in the IC could allow fast and parallel processing of multiple acoustic streams.

  17. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis

    PubMed Central

    Evans, Samuel; Davis, Matthew H.

    2015-01-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. PMID:26157026
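
    A minimal sketch of the searchlight RSA logic: response patterns from one searchlight are converted to a representational dissimilarity matrix and compared with a model matrix that groups stimuli by syllable identity. The inputs (patterns, syllable_ids) are hypothetical, and the actual study used additional models and statistics.

      # Minimal RSA sketch: correlate a neural RDM with a syllable-identity model RDM.
      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      def rsa_score(patterns, syllable_ids):
          neural_rdm = pdist(patterns, metric="correlation")          # 1 - Pearson r per pair
          labels = np.asarray(syllable_ids)
          model_rdm = pdist(labels[:, None], metric=lambda a, b: float(a[0] != b[0]))
          rho, _ = spearmanr(neural_rdm, model_rdm)
          return rho  # higher -> this searchlight encodes syllable identity

      rng = np.random.default_rng(1)
      ids = np.repeat([0, 1, 2], 4)                                    # 3 syllables x 4 tokens
      pats = rng.normal(size=(12, 50)) + ids[:, None]                  # identity-driven patterns
      print(rsa_score(pats, ids))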

  18. Measures of voiced frication for automatic classification

    NASA Astrophysics Data System (ADS)

    Jackson, Philip J. B.; Jesus, Luis M. T.; Shadle, Christine H.; Pincas, Jonathan

    2004-05-01

    As an approach to understanding the characteristics of the acoustic sources in voiced fricatives, it seems apt to draw on knowledge of vowels and voiceless fricatives, which have been relatively well studied. However, the presence of both phonation and frication in these mixed-source sounds offers the possibility of mutual interaction effects, with variations across place of articulation. This paper examines the acoustic and articulatory consequences of these interactions and explores automatic techniques for finding parametric and statistical descriptions of these phenomena. A reliable and consistent set of such acoustic cues could be used for phonetic classification or speech recognition. Following work on devoicing of European Portuguese voiced fricatives [Jesus and Shadle, in Mamede et al. (eds.) (Springer-Verlag, Berlin, 2003), pp. 1-8] and the modulating effect of voicing on frication [Jackson and Shadle, J. Acoust. Soc. Am. 108, 1421-1434 (2000)], the present study focuses on three types of information: (i) sequences and durations of acoustic events in VC transitions, (ii) temporal, spectral and modulation measures from the periodic and aperiodic components of the acoustic signal, and (iii) voicing activity derived from simultaneous EGG data. Analyses of interactions observed in British/American English and European Portuguese speech corpora will be compared, and the principal findings discussed.

  19. Chirped or time modulated excitation compared to short pulses for photoacoustic imaging in acoustic attenuating media

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Motz, C.; Lang, O.; Berer, T.; Huemer, M.

    2018-02-01

    In photoacoustic imaging, optically generated acoustic waves transport the information about embedded structures to the sample surface. Usually, short laser pulses are used for the acoustic excitation. Acoustic attenuation increases for higher frequencies, which reduces the bandwidth and limits the spatial resolution. One could think of more efficient waveforms than single short pulses, such as pseudo noise codes, chirped, or harmonic excitation, which could enable a higher information transfer from the sample's interior to its surface by acoustic waves. We used a linear state space model to discretize the wave equation, such as the Stokes equation, but this method could be used for any other linear wave equation. Linear estimators and a non-linear function inversion were applied to the measured surface data for one-dimensional image reconstruction. The proposed estimation method allows optimizing the temporal modulation of the excitation laser such that the accuracy and spatial resolution of the reconstructed image are maximized. We have restricted ourselves to one-dimensional models, as for higher dimensions the one-dimensional reconstruction, which corresponds to the acoustic wave without attenuation, can be used as input for any ultrasound imaging method, such as back-projection or time-reversal methods.
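
    A minimal sketch of a linear estimator for the one-dimensional reconstruction step, assuming the measured signal is the excitation waveform convolved with the initial pressure profile plus noise. The Tikhonov-regularized least-squares solver and all signals here are illustrative; the authors' state-space formulation is not reproduced.

      # Minimal sketch: recover a 1D absorber profile from a chirped-excitation measurement.
      import numpy as np

      def build_convolution_matrix(excitation, n_samples):
          """Toeplitz-like forward operator mapping pressure profile -> measurement."""
          A = np.zeros((n_samples + len(excitation) - 1, n_samples))
          for j in range(n_samples):
              A[j:j + len(excitation), j] = excitation
          return A

      def reconstruct(measured, excitation, n_samples, lam=1e-2):
          A = build_convolution_matrix(excitation, n_samples)
          # x_hat = argmin ||A x - y||^2 + lam ||x||^2  (Tikhonov-regularized least squares)
          return np.linalg.solve(A.T @ A + lam * np.eye(n_samples), A.T @ measured)

      rng = np.random.default_rng(2)
      chirp = np.sin(2 * np.pi * np.linspace(0, 5, 64) ** 2)    # chirped excitation waveform
      truth = np.zeros(128); truth[[30, 80]] = 1.0               # two embedded absorbers
      y = np.convolve(chirp, truth) + 0.05 * rng.normal(size=128 + 63)
      print(np.argsort(reconstruct(y, chirp, 128))[-2:])         # indices of the two largest peaks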

  20. Spatio-Temporal Analysis of Urban Acoustic Environments with Binaural Psycho-Acoustical Considerations for IoT-Based Applications

    PubMed Central

    Montoya-Belmonte, Jose; Cobos, Maximo; Torres-Aranda, Ana M.

    2018-01-01

    Sound pleasantness or annoyance perceived in urban soundscapes is a major concern in environmental acoustics. Binaural psychoacoustic parameters are helpful to describe generic acoustic environments, as it is stated within the ISO 12913 framework. In this paper, the application of a Wireless Acoustic Sensor Network (WASN) to evaluate the spatial distribution and the evolution of urban acoustic environments is described. Two experiments are presented using an indoor and an outdoor deployment of a WASN with several nodes using an Internet of Things (IoT) environment to collect audio data and calculate meaningful parameters such as the sound pressure level, binaural loudness and binaural sharpness. A chunk of audio is recorded in each node periodically with a microphone array and the binaural rendering is conducted by exploiting the estimated directional characteristics of the incoming sound by means of DOA estimation. Each node computes the parameters in a different location and sends the values to a cloud-based broker structure that allows spatial statistical analysis through Kriging techniques. A cross-validation analysis is also performed to confirm the usefulness of the proposed system. PMID:29495407
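
    A minimal sketch of one of the per-node parameters mentioned above, the equivalent sound pressure level of a recorded audio chunk. The calibration constant is hypothetical, and the binaural loudness and sharpness models are not reproduced here.

      # Minimal sketch of a per-node Leq computation on one audio chunk (hypothetical calibration).
      import numpy as np

      def equivalent_spl(samples, calibration_pa_per_unit=1.0, p_ref=20e-6):
          """Leq in dB SPL for a chunk of raw samples (assumed linear scale)."""
          pressure = samples * calibration_pa_per_unit           # convert raw units to pascals
          p_rms = np.sqrt(np.mean(pressure ** 2))
          return 20 * np.log10(p_rms / p_ref)

      chunk = 0.02 * np.random.default_rng(3).normal(size=48000)  # 1 s at 48 kHz (hypothetical)
      print(f"Leq = {equivalent_spl(chunk):.1f} dB SPL")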

  1. Acoustic detections of summer and winter whales at Arctic gateways in the Atlantic and Pacific Oceans

    NASA Astrophysics Data System (ADS)

    Stafford, K.; Laidre, K. L.; Moore, S. E.

    2016-02-01

    Changes in sea ice phenology have been profound in regions north of arctic gateways, where the seasonal open-water period has increased by 1.5-3 months over the past 30 years. This has resulted in changes to the Arctic ecosystem, including increased primary productivity, changing food web structure, and opening of new habitat. In the "new normal" Arctic, ice-obligate species such as ice seals and polar bears may fare poorly under reduced sea ice, while sub-arctic "summer" whales (fin and humpback) are poised to inhabit new seasonal ice-free habitats in the Arctic. We examined the spatial and seasonal occurrence of summer and "winter" (bowhead) whales from September through December by deploying hydrophones in three Arctic gateways: Bering, Davis and Fram Straits. Acoustic occurrence of the three species was compared with decadal-scale changes in seasonal sea ice. In all three Straits, fin whale acoustic detections extended from summer to late autumn. Humpback whales showed the same pattern in Bering and Davis Straits, singing into November and December, respectively. Bowhead whale detections generally began after the departure of the summer whales and continued through the winter. In all three straits, summer whales occurred in seasons and regions that used to be ice-covered. This is likely due to both increased available habitat from sea ice reductions and post-whaling population recoveries. At present, in the straits examined here, there is spatial, but not temporal, overlap between summer and winter whales. In a future with further seasonal sea ice reductions, however, increased competition for resources between sub-Arctic and Arctic species may arise to the detriment of winter whales.

  2. Novel underwater soundscape: acoustic repertoire of plainfin midshipman fish.

    PubMed

    McIver, Eileen L; Marchaterre, Margaret A; Rice, Aaron N; Bass, Andrew H

    2014-07-01

    Toadfishes are among the best-known groups of sound-producing (vocal) fishes and include species commonly known as toadfish and midshipman. Although midshipman have been the subject of extensive investigation of the neural mechanisms of vocalization, this is the first comprehensive, quantitative analysis of the spectro-temporal characters of their acoustic signals and one of the few for fishes in general. Field recordings of territorial, nest-guarding male midshipman during the breeding season identified a diverse vocal repertoire composed of three basic sound types that varied widely in duration, harmonic structure and degree of amplitude modulation (AM): 'hum', 'grunt' and 'growl'. Hum duration varied nearly 1000-fold, lasting for minutes at a time, with stable harmonic stacks and little envelope modulation throughout the sound. By contrast, grunts were brief, ~30-140 ms, broadband signals produced both in isolation and repetitively as a train of up to 200 at intervals of ~0.5-1.0 s. Growls were also produced alone or repetitively, but at variable intervals of the order of seconds with durations between those of grunts and hums, ranging 60-fold from ~200 ms to 12 s. Growls exhibited prominent harmonics with sudden shifts in pulse repetition rate and highly variable AM patterns, unlike the nearly constant AM of grunt trains and flat envelope of hums. Behavioral and neurophysiological studies support the hypothesis that each sound type's unique acoustic signature contributes to signal recognition mechanisms. Nocturnal production of these sounds against a background chorus dominated constantly for hours by a single sound type, the multi-harmonic hum, reveals a novel underwater soundscape for fish. © 2014. Published by The Company of Biologists Ltd.

  3. The acoustic repertoire of the Atlantic Forest Rocket Frog and its consequences for taxonomy and conservation (Allobates, Aromobatidae)

    PubMed Central

    Forti, Lucas Rodriguez; da Silva, Thaís Renata Ávila; Toledo, Luís Felipe

    2017-01-01

    The use of acoustic signals to mediate intraspecific communication is a common characteristic of most anuran species. Besides many social purposes, one of the main functions of these signals is species recognition. For this reason, this phenotypic trait is normally applied to taxonomy or to construct evolutionary relationship hypotheses. Here the acoustic repertoires of five populations of the genus Allobates from the Brazilian Atlantic Forest, a Neotropical taxon vulnerable to extinction, are presented for the first time. Descriptions of males’ advertisement and aggressive calls and of a female call emitted in a courtship context are presented. In addition, the advertisement calls of individuals from distinct geographical regions were compared. Differences in frequency range and note duration may imply taxonomic rearrangements of these populations, which were once considered distinct species and more recently proposed as a single species, Allobates olfersioides. Calls of the male from the state of Rio de Janeiro do not overlap spectrally with calls of males from northern populations, while the shorter notes emitted by males from Alagoas also distinguish this population from the remaining southern populations. Therefore, it is likely that at least two of the junior synonyms should be revalidated. Similarities among male advertisement and female calls are generally reported in other anuran species; these calls may have evolved from a preexisting vocalization common to both sexes. Male aggressive calls were different from both the male advertisement and female calls, since they consisted of a longer, multi-pulsed note. Aggressive and advertisement calls generally have similar dominant frequencies, but they have temporal distinctions. Such patterns were corroborated in the Atlantic Forest Rocket Frogs. These findings may support future research addressing the taxonomy of the group, behavioral evolution, and amphibian conservation. PMID:29133990

  4. The acoustic repertoire of the Atlantic Forest Rocket Frog and its consequences for taxonomy and conservation (Allobates, Aromobatidae).

    PubMed

    Forti, Lucas Rodriguez; da Silva, Thaís Renata Ávila; Toledo, Luís Felipe

    2017-01-01

    The use of acoustic signals to mediate intraspecific communication is a common characteristic of most anuran species. Besides many social purposes, one of the main functions of these signals is species recognition. For this reason, this phenotypic trait is normally applied to taxonomy or to construct evolutionary relationship hypotheses. Here the acoustic repertoires of five populations of the genus Allobates from the Brazilian Atlantic Forest, a Neotropical taxon vulnerable to extinction, are presented for the first time. Descriptions of males' advertisement and aggressive calls and of a female call emitted in a courtship context are presented. In addition, the advertisement calls of individuals from distinct geographical regions were compared. Differences in frequency range and note duration may imply taxonomic rearrangements of these populations, which were once considered distinct species and more recently proposed as a single species, Allobates olfersioides. Calls of the male from the state of Rio de Janeiro do not overlap spectrally with calls of males from northern populations, while the shorter notes emitted by males from Alagoas also distinguish this population from the remaining southern populations. Therefore, it is likely that at least two of the junior synonyms should be revalidated. Similarities among male advertisement and female calls are generally reported in other anuran species; these calls may have evolved from a preexisting vocalization common to both sexes. Male aggressive calls were different from both the male advertisement and female calls, since they consisted of a longer, multi-pulsed note. Aggressive and advertisement calls generally have similar dominant frequencies, but they have temporal distinctions. Such patterns were corroborated in the Atlantic Forest Rocket Frogs. These findings may support future research addressing the taxonomy of the group, behavioral evolution, and amphibian conservation.

  5. Drake Passage-Antarctic Peninsula Ecosystem Research: Spring and Fall Zooplankton and Seabird Assemblages

    NASA Astrophysics Data System (ADS)

    Loeb, V. J.; Chereskin, T. K.; Santora, J. A.

    2016-02-01

    Acoustic Doppler Current Profiler (ADCP) records from multiple "L.M. Gould" supply transits of Drake Passage from 1999 to present demonstrate spatial and temporal (diel, seasonal, annual and longer term) variability in acoustic backscattering. Acoustic backscattering strength in the upper water column corresponds to zooplankton and nekton biomass that relates to seabird and mammal distribution and abundance. Recent results indicate that interannual variability in backscattering strength is correlated with climate indices. The interpretation of these ecological changes is severely limited because the sound scatterers previously had not been identified and linkages to upper trophic level predators are unknown. Net-tows, depth-referenced underwater videography and seabird/mammal visual surveys during spring 2014 and fall 2015 transits provided information on the taxonomic-size composition, distribution, aggregation and behavioral patterns of dominant ADCP backscattering organisms and related these to higher level predator populations. The distribution and composition of zooplankton species and seabird assemblages conformed to four biogeographic regions. Areas of elevated secondary productivity coincided with increased ADCP target strength, with highest concentrations off Patagonia and the Antarctic Peninsula and secondary peaks around the Polar Front. Small-sized zooplankton taxa dominated north of the Polar Front while larger taxa dominated to the south. Regionally important prey items likely are: copepods, amphipods, small euphausiids and fish (Patagonia); copepods, myctophids, shelled pteropods and squid (Polar Front); large euphausiids (Antarctic Peninsula). This study demonstrates that biological observations during "L.M. Gould" supply transits greatly augment the value of routinely collected ADCP and XBT data and provide basic information relevant to the impacts of climate change in this rapidly warming portion of the Southern Ocean.

  6. Speech perception with combined electric-acoustic stimulation and bilateral cochlear implants in a multisource noise field.

    PubMed

    Rader, Tobias; Fastl, Hugo; Baumann, Uwe

    2013-01-01

    The aim of the study was to measure and compare speech perception in users of electric-acoustic stimulation (EAS) supported by a hearing aid in the unimplanted ear and in bilateral cochlear implant (CI) users under different noise and sound field conditions. Gap listening was assessed by comparing performance in unmodulated and modulated Comité Consultatif International Téléphonique et Télégraphique (CCITT) noise conditions, and binaural interaction was investigated by comparing single source and multisource sound fields. Speech perception in noise was measured using a closed-set sentence test (Oldenburg Sentence Test, OLSA) in a multisource noise field (MSNF) consisting of a four-loudspeaker array with independent noise sources and a single source in frontal position (S0N0). Speech simulating noise (Fastl-noise), CCITT-noise (continuous), and OLSA-noise (pseudo continuous) served as noise sources with different temporal patterns. Speech tests were performed in two groups of subjects who were using either EAS (n = 12) or bilateral CIs (n = 10). All subjects in the EAS group were fitted with a high-power hearing aid in the opposite ear (bimodal EAS). The average group score on monosyllables in quiet was 68.8% (EAS) and 80.5% (bilateral CI). A group of 22 listeners with normal hearing served as controls to compare and evaluate potential gap listening effects in implanted patients. Average speech reception thresholds in the EAS group were significantly lower than those for the bilateral CI group in all test conditions (CCITT 6.1 dB, p = 0.001; Fastl-noise 5.4 dB, p < 0.01; Oldenburg-(OL)-noise 1.6 dB, p < 0.05). Bilateral CI and EAS user groups showed a significant improvement of 4.3 dB (p = 0.004) and 5.4 dB (p = 0.002) between S0N0 and MSNF sound field conditions, respectively, which signifies advantages caused by bilateral interaction in both groups. Performance in the control group showed a significant gap listening effect with a difference of 6.5 dB between modulated and unmodulated noise in S0N0, and a difference of 3.0 dB in MSNF. The ability to "glimpse" into short temporal masker gaps was absent in both groups of implanted subjects. Combined EAS in one ear supported by a hearing aid on the contralateral ear provided significantly improved speech perception compared with bilateral cochlear implantation. Although the scores for monosyllable words in quiet were higher in the bilateral CI group, the EAS group performed better in different noise and sound field conditions. Furthermore, the results indicated that binaural interaction between EAS in one ear and residual acoustic hearing in the opposite ear enhances speech perception in complex noise situations. Neither bilateral CI nor bimodal EAS users benefited from short temporal masker gaps; therefore, the better performance of the EAS group in modulated noise conditions could be explained by the improved transmission of fundamental frequency cues in the lower-frequency region of acoustic hearing, which might foster the grouping of auditory objects.

  7. A simulation of streaming flows associated with acoustic levitators

    NASA Astrophysics Data System (ADS)

    Rednikov, A.; Riley, N.

    2002-04-01

    Steady-state acoustic streaming flow patterns have been observed by Trinh and Robey [Phys. Fluids 6, 3567 (1994)], during the operation of a variety of single axis ultrasonic levitators in a gaseous environment. Microstreaming around levitated samples is superimposed on the streaming flow which is observed in the levitator even in the absence of any particle therein. In this paper, by physical arguments, numerical and analytical simulations we provide entirely satisfactory interpretations of the observed flow patterns in both isothermal and nonisothermal situations.

  8. Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.

    PubMed

    Paraouty, Nihaad; Stasiak, Arkadiusz; Lorenzi, Christian; Varnet, Léo; Winter, Ian M

    2018-04-25

    Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of those encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, for modulation depths within the receptive field, low-BF units (<4 kHz) demonstrate good phase locking to TFS. For modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types. TFS cues are mainly conveyed by low-frequency and primary-like units and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues. SIGNIFICANCE STATEMENT Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications as robust sound source recognition depends crucially on the reception of low-rate FM cues. Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues. We also demonstrate a diversity of neural responses with different coding specializations. These results support the dual-coding scheme proposed by psychophysicists to account for FM sensitivity in humans and provide new insights on how this might be implemented in the early stages of the auditory pathway. Copyright © 2018 the authors 0270-6474/18/384123-15$15.00/0.
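
    Phase locking of the kind described above is commonly quantified with vector strength; the sketch below computes it for hypothetical spike times relative to a modulation frequency. The study's own analysis may differ.

      # Minimal sketch: vector strength as a measure of phase locking (hypothetical spike times).
      import numpy as np

      def vector_strength(spike_times, freq_hz):
          """1.0 = perfect phase locking to freq_hz, ~0 = no locking (Goldberg & Brown, 1969)."""
          phases = 2 * np.pi * freq_hz * np.asarray(spike_times)
          return np.hypot(np.mean(np.cos(phases)), np.mean(np.sin(phases)))

      rng = np.random.default_rng(4)
      fm_rate = 5.0                                   # low-rate modulation frequency (Hz)
      locked = np.arange(0, 2, 1 / fm_rate) + 0.005 * rng.normal(size=10)   # tightly locked spikes
      random = rng.uniform(0, 2, size=10)             # unlocked spikes
      print(vector_strength(locked, fm_rate), vector_strength(random, fm_rate))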

  9. Listening to sound patterns as a dynamic activity

    NASA Astrophysics Data System (ADS)

    Jones, Mari Riess

    2003-04-01

    The act of listening to a series of sounds created by some natural event is described as involving an entrainment-like process that transpires in real time. Some aspects of this dynamic process are suggested. In particular, real-time attending is described in terms of an adaptive synchronization activity that permits a listener to target attending energy to forthcoming elements within an acoustical pattern (e.g., music, speech, etc.). Also described are several experiments that illustrate features of this approach as it applies to attending to music-like patterns. These involve listeners' responses to changes in either the timing or the pitch structure (or both) of various acoustical sequences.

  10. Temporal Patterns of Behavior from the Scheduling of Psychology Quizzes

    ERIC Educational Resources Information Center

    Jarmolowicz, David P.; Hayashi, Yusuke; St. Peter Pipkin, Claire

    2010-01-01

    Temporal patterns of behavior have been observed in real-life performances such as bill passing in the U.S. Congress, in-class studying, and quiz taking. However, the practical utility of understanding these patterns has not been evaluated. The current study demonstrated the presence of temporal patterns of quiz taking in a university-level…

  11. Hyperventilation-induced nystagmus in a large series of vestibular patients.

    PubMed

    Califano, L; Melillo, M G; Vassallo, A; Mazzone, S

    2011-02-01

    The Hyperventilation Test is widely used in the "bed-side examination" of vestibular patients. It can either activate a latent nystagmus in central or peripheral vestibular diseases or it can interact with a spontaneous nystagmus, by reducing it or increasing it. Aims of this study were to determine the incidence, patterns and temporal characteristics of Hyperventilation-induced nystagmus in patients suffering from vestibular diseases, as well as its contribution to the differential diagnosis between vestibular neuritis and neuroma of the 8th cranial nerve, and its behaviour in some central vestibular diseases. The present study includes 1202 patients featuring, at vestibular examination, at least one sign of vestibular system disorders or patients diagnosed with a "Migraine-related vertigo" or "Chronic subjective dizziness". The overall incidence of Hyperventilation-induced nystagmus was 21.9%. It was detected more frequently in retrocochlear vestibular diseases rather than in end-organ vestibular diseases: 5.3% in Paroxysmal Positional Vertigo, 37.1% in Menière's disease, 37.6% in compensated vestibular neuritis, 77.2% in acute vestibular neuritis and 91.7% in neuroma of the 8th cranial nerve. In acute vestibular neuritis, three HVIN patterns were observed: Paretic pattern: temporary enhancement of the spontaneous nystagmus; Excitatory pattern: temporary inhibition of the spontaneous nystagmus; Strong excitatory pattern: temporary inversion of the spontaneous nystagmus. Excitatory patterns proved to be time-dependent in that they disappeared and were replaced by the paretic pattern over a period of maximum 18 days since the beginning of the disorder. In acoustic neuroma, Hyperventilation-induced nystagmus was frequently observed (91.7%), either in the form of an excitatory pattern (fast phases towards the affected site) or in the form of a paretic pattern (fast phases towards the healthy side). The direction of the nystagmus is only partially related to tumour size, whereas other mechanisms, such as demyelination or a break in nerve fibres, might have an important role in triggering the situation. Hyperventilation-induced nystagmus has frequently been detected in cases of demyelinating diseases and in cerebellar diseases: in multiple sclerosis, hyperventilation inhibits a central type of spontaneous nystagmus or evokes nystagmus in 75% of patients; in cerebellar diseases, hyperventilation evokes or enhances a central spontaneous nystagmus in 72.7% of patients. In conclusion the Hyperventilation Test can provide patterns of oculomotor responses that indicate a diagnostic investigation through cerebral magnetic resonance imaging enhanced by gadolinium, upon suspicion of neuroma of the 8th cranial nerve or of a central disease. In our opinion, however, Hyperventilation-induced nystagmus always needs to be viewed within the more general context of a complete examination of the vestibular and acoustic system.

  12. Hyperventilation-induced nystagmus in a large series of vestibular patients

    PubMed Central

    CALIFANO, L.; MELILLO, M.G.; VASSALLO, A.; MAZZONE, S.

    2011-01-01

    SUMMARY The Hyperventilation Test is widely used in the "bed-side examination" of vestibular patients. It can either activate a latent nystagmus in central or peripheral vestibular diseases or it can interact with a spontaneous nystagmus, by reducing it or increasing it. Aims of this study were to determine the incidence, patterns and temporal characteristics of Hyperventilation-induced nystagmus in patients suffering from vestibular diseases, as well as its contribution to the differential diagnosis between vestibular neuritis and neuroma of the 8th cranial nerve, and its behaviour in some central vestibular diseases. The present study includes 1202 patients featuring, at vestibular examination, at least one sign of vestibular system disorders or patients diagnosed with a "Migraine-related vertigo" or "Chronic subjective dizziness". The overall incidence of Hyperventilation-induced nystagmus was 21.9%. It was detected more frequently in retrocochlear vestibular diseases rather than in end-organ vestibular diseases: 5.3% in Paroxysmal Positional Vertigo, 37.1% in Menière's disease, 37.6% in compensated vestibular neuritis, 77.2% in acute vestibular neuritis and 91.7% in neuroma of the 8th cranial nerve. In acute vestibular neuritis, three HVIN patterns were observed: Paretic pattern: temporary enhancement of the spontaneous nystagmus; Excitatory pattern: temporary inhibition of the spontaneous nystagmus; Strong excitatory pattern: temporary inversion of the spontaneous nystagmus. Excitatory patterns proved to be time-dependent in that they disappeared and were replaced by the paretic pattern over a period of maximum 18 days since the beginning of the disorder. In acoustic neuroma, Hyperventilation-induced nystagmus was frequently observed (91.7%), either in the form of an excitatory pattern (fast phases towards the affected site) or in the form of a paretic pattern (fast phases towards the healthy side). The direction of the nystagmus is only partially related to tumour size, whereas other mechanisms, such as demyelination or a break in nerve fibres, might have an important role in triggering the situation. Hyperventilation-induced nystagmus has frequently been detected in cases of demyelinating diseases and in cerebellar diseases: in multiple sclerosis, hyperventilation inhibits a central type of spontaneous nystagmus or evokes nystagmus in 75% of patients; in cerebellar diseases, hyperventilation evokes or enhances a central spontaneous nystagmus in 72.7% of patients. In conclusion the Hyperventilation Test can provide patterns of oculomotor responses that indicate a diagnostic investigation through cerebral magnetic resonance imaging enhanced by gadolinium, upon suspicion of neuroma of the 8th cranial nerve or of a central disease. In our opinion, however, Hyperventilation-induced nystagmus always needs to be viewed within the more general context of a complete examination of the vestibular and acoustic system. PMID:21808459

  13. Mining Temporal Patterns to Improve Agents Behavior: Two Case Studies

    NASA Astrophysics Data System (ADS)

    Fournier-Viger, Philippe; Nkambou, Roger; Faghihi, Usef; Nguifo, Engelbert Mephu

    We propose two mechanisms for agent learning based on the idea of mining temporal patterns from agent behavior. The first one consists of extracting temporal patterns from the perceived behavior of other agents accomplishing a task, to learn the task. The second learning mechanism consists of extracting temporal patterns from an agent's own behavior. In this case, the agent then reuses patterns that brought self-satisfaction. In both cases, no assumption is made on how the observed agents' behavior is internally generated. A case study with a real application is presented to illustrate each learning mechanism.

  14. Modulation rate transfer functions from four species of stranded odontocete (Stenella longirostris, Feresa attenuata, Globicephala melas, and Mesoplodon densirostris).

    PubMed

    Smith, Adam B; Pacini, Aude F; Nachtigall, Paul E

    2018-04-01

    Odontocete marine mammals explore the environment by rapidly producing echolocation signals and receiving the corresponding echoes, which likewise return at very rapid rates. Thus, it is important that the auditory system has a high temporal resolution to effectively process and extract relevant information from click echoes. This study used auditory evoked potential methods to investigate auditory temporal resolution of individuals from four different odontocete species, including a spinner dolphin (Stenella longirostris), pygmy killer whale (Feresa attenuata), long-finned pilot whale (Globicephala melas), and Blainville's beaked whale (Mesoplodon densirostris). Each individual had previously stranded and was undergoing rehabilitation. Auditory Brainstem Responses (ABRs) were elicited via acoustic stimuli consisting of a train of broadband tone pulses presented at rates between 300 and 2000 Hz. Similar to other studied species, modulation rate transfer functions (MRTFs) of the studied individuals followed the shape of a low-pass filter, with the ability to process acoustic stimuli at presentation rates up to and exceeding 1250 Hz. Auditory integration times estimated from the bandwidths of the MRTFs ranged between 250 and 333 µs. The results support the hypothesis that high temporal resolution is conserved throughout the diverse range of odontocete species.
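
    A rough sketch of deriving an integration-time estimate from an MRTF, assuming hypothetical evoked-response amplitudes at each pulse rate: the equivalent rectangular bandwidth (ERB) of the MRTF is computed and its reciprocal reported. The exact estimation procedure used by the authors may differ.

      # Minimal sketch: ERB of a low-pass MRTF and a reciprocal-bandwidth integration time.
      import numpy as np

      rates_hz = np.array([300, 500, 750, 1000, 1250, 1500, 2000])     # pulse presentation rates
      amplitude = np.array([1.0, 0.95, 0.8, 0.6, 0.4, 0.25, 0.1])      # hypothetical ABR amplitudes

      erb_hz = np.trapz(amplitude, rates_hz) / amplitude.max()          # equivalent rectangular bandwidth
      print(f"ERB ~ {erb_hz:.0f} Hz, integration time ~ {1e6 / erb_hz:.0f} microseconds")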

  15. A generalized baleen whale call detection and classification system.

    PubMed

    Baumgartner, Mark F; Mussoline, Sarah E

    2011-05-01

    Passive acoustic monitoring allows the assessment of marine mammal occurrence and distribution at greater temporal and spatial scales than is now possible with traditional visual surveys. However, the large volume of acoustic data and the lengthy and laborious task of manually analyzing these data have hindered broad application of this technique. To overcome these limitations, a generalized automated detection and classification system (DCS) was developed to efficiently and accurately identify low-frequency baleen whale calls. The DCS (1) accounts for persistent narrowband and transient broadband noise, (2) characterizes temporal variation of dominant call frequencies via pitch-tracking, and (3) classifies calls based on attributes of the resulting pitch tracks using quadratic discriminant function analysis (QDFA). Automated detections of sei whale (Balaenoptera borealis) downsweep calls and North Atlantic right whale (Eubalaena glacialis) upcalls were evaluated using recordings collected in the southwestern Gulf of Maine during the spring seasons of 2006 and 2007. The accuracy of the DCS was similar to that of a human analyst: variability in differences between the DCS and an analyst was similar to that between independent analysts, and temporal variability in call rates was similar among the DCS and several analysts.
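
    A minimal sketch of the classification stage, using quadratic discriminant analysis on pitch-track attributes. The features (duration, start and end frequency) and training values are hypothetical stand-ins for the attributes used by the published system.

      # Minimal sketch: QDA on pitch-track attributes (hypothetical features and training data).
      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

      rng = np.random.default_rng(5)
      # columns: duration (s), start frequency (Hz), end frequency (Hz)
      sei_downsweeps = np.column_stack([rng.normal(1.4, 0.2, 40),
                                        rng.normal(80, 8, 40), rng.normal(35, 5, 40)])
      right_upcalls = np.column_stack([rng.normal(1.0, 0.2, 40),
                                       rng.normal(100, 10, 40), rng.normal(200, 20, 40)])
      X = np.vstack([sei_downsweeps, right_upcalls])
      y = np.array(["sei"] * 40 + ["right"] * 40)

      qda = QuadraticDiscriminantAnalysis().fit(X, y)
      new_track = [[1.1, 95.0, 190.0]]                 # duration, start, end of a new pitch track
      print(qda.predict(new_track), qda.predict_proba(new_track))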

  16. Drag Measurements of Porous Plate Acoustic Liners

    NASA Technical Reports Server (NTRS)

    Wolter, John D.

    2005-01-01

    This paper presents the results of direct drag measurements on a variety of porous plate acoustic liners. The existing literature describes numerous studies of drag on porous walls with injection or suction, but relatively few of drag on porous plates with neither injection nor suction. Furthermore, the porosity of the porous plate in existing studies is much lower than typically used in acoustic liners. In the present work, the acoustic liners consisted of a perforated face sheet covering a bulk acoustic absorber material. Factors that were varied in the experiment were hole diameter, hole pattern, face sheet thickness, bulk material type, and size of the gap (if any) between the face sheet and the absorber material.

  17. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    PubMed

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
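
    A minimal sketch of the single-voxel encoding idea: a voxel's responses across sounds are predicted from per-sound pitch height and salience with cross-validated ridge regression, and accuracy is the correlation between predicted and observed responses. All data and the choice of ridge regression are illustrative assumptions rather than the authors' exact model.

      # Minimal sketch: cross-validated encoding of one voxel from pitch height and salience.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(7)
      n_sounds = 120
      pitch_height = rng.uniform(80, 800, n_sounds)          # estimated f0 per sound (Hz)
      pitch_salience = rng.uniform(0, 1, n_sounds)           # harmonic-structure salience
      X = np.column_stack([pitch_height, pitch_salience])
      voxel = 0.004 * pitch_height + 1.5 * pitch_salience + rng.normal(0, 0.5, n_sounds)

      pred = cross_val_predict(Ridge(alpha=1.0), X, voxel, cv=10)
      print("encoding accuracy r =", round(np.corrcoef(pred, voxel)[0, 1], 2))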

  18. Tactile objects based on an amplitude disturbed diffraction pattern method

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Nikolovski, Jean-Pierre; Mechbal, Nazih; Hafez, Moustapha; Vergé, Michel

    2009-12-01

    Tactile sensing is becoming widely used in human-computer interfaces. Recent advances in acoustic approaches have demonstrated the possibility of transforming ordinary solid objects into interactive interfaces. This letter proposes a static finger contact localization process using an amplitude disturbed diffraction pattern method. The localization method is based on the following physical phenomenon: a finger contact modifies the energy distribution of an acoustic wave in a solid; these variations depend on the wave frequency and the contact position. The presented method first consists of exciting the object with an acoustic signal with multiple frequency components. In a second step, a measured acoustic signal is compared with prerecorded values to deduce the contact position. This position is then used for human-machine interaction (e.g., finger tracking on a computer screen). The selection of excitation signals is discussed and a frequency choice criterion based on contrast value is proposed. Tests on a sandwich plate (liquid crystal display screen) demonstrate the simplicity and ease of applying the process to various solids.
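
    A minimal sketch of the comparison step: a measured multi-frequency amplitude signature is matched against prerecorded signatures for known contact positions, and the nearest one is returned. The signature values and the Euclidean distance metric are hypothetical illustrations of the idea.

      # Minimal sketch: nearest-template matching of amplitude signatures to contact positions.
      import numpy as np

      def localize(measured_signature, templates):
          """templates: dict position -> amplitude signature (one value per excitation frequency)."""
          distances = {pos: np.linalg.norm(measured_signature - sig)
                       for pos, sig in templates.items()}
          return min(distances, key=distances.get)

      templates = {
          (10, 20): np.array([0.9, 0.4, 0.7, 0.2]),     # prerecorded signatures at 4 frequencies
          (40, 15): np.array([0.3, 0.8, 0.5, 0.6]),
          (25, 50): np.array([0.6, 0.6, 0.2, 0.9]),
      }
      measured = np.array([0.32, 0.78, 0.52, 0.61])      # finger currently near position (40, 15)
      print(localize(measured, templates))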

  19. The development of motor synergies in children: Ultrasound and acoustic measurements

    PubMed Central

    Noiray, Aude; Ménard, Lucie; Iskarous, Khalil

    2013-01-01

    The present study focuses on differences in lingual coarticulation between French children and adults. The specific question pursued is whether 4–5-year-old children have already acquired a synergy observed in adults in which the tongue back helps the tip in the formation of alveolar consonants. Locus equations, estimated from acoustic and ultrasound imaging data, were used to compare coarticulation degree between adults and children and to further investigate differences in motor synergy between the front and back parts of the tongue. Results show similar slope and intercept patterns for adults and children in both the acoustic and articulatory domains, with an effect of place of articulation in both groups between alveolar and non-alveolar consonants. These results suggest that 4–5-year-old children (1) have learned the motor synergy investigated and (2) have developed a pattern of coarticulatory resistance depending on a consonant's place of articulation. Also, results show that acoustic locus equations can be used to gauge the presence of motor synergies in children. PMID:23297916
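
    A minimal sketch of a locus-equation fit: second-formant (F2) frequency at the consonant release is regressed on F2 at the vowel midpoint, and the slope is read as coarticulation degree. The formant values below are hypothetical.

      # Minimal sketch: locus equation as a linear regression of F2 onset on F2 vowel midpoint.
      import numpy as np

      f2_vowel = np.array([800, 1200, 1700, 2100, 2400])     # F2 at vowel midpoint (Hz)
      f2_onset = np.array([1450, 1620, 1840, 2010, 2150])    # F2 at consonant release (Hz)

      slope, intercept = np.polyfit(f2_vowel, f2_onset, 1)
      print(f"locus equation: F2_onset = {slope:.2f} * F2_vowel + {intercept:.0f} Hz")
      # slope near 1 -> strong CV coarticulation; slope near 0 -> high coarticulatory resistance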

  20. Baseline acoustic levels of the NASA Active Noise Control Fan rig

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.; Heidelberg, Laurence J.; Elliott, David M.; Nallasamy, M.

    1996-01-01

    Extensive measurements of the spinning acoustic mode structure in the NASA 48 inch Active Noise Control Fan (ANCF) test rig have been taken. A continuously rotating microphone rake system with a least-squares data reduction technique was employed to measure these modes in the inlet and exhaust. Farfield directivity patterns in an anechoic environment were also measured at matched corrected rotor speeds. Several vane counts and spacings were tested over a range of rotor speeds. The Eversman finite element radiation code was run with the measured in-duct modes as input and the computed farfield results were compared to the experimentally measured directivity pattern. The experimental data show that inlet spinning mode measurements can be made very accurately. Exhaust mode measurements may have wake interference, but the least-squares reduction does a good job of rejecting the non-acoustic pressure. The Eversman radiation code accurately extrapolates the farfield levels and directivity pattern when all in-duct modes are included.
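
    A minimal sketch of a least-squares spinning-mode decomposition: complex pressures at known circumferential angles are fit with a sum of exp(i*m*theta) modes and the mode amplitudes recovered. The measurement values and mode range are hypothetical, and the rotating-rake processing details are not reproduced.

      # Minimal sketch: recover spinning-mode amplitudes from circumferential pressure samples.
      import numpy as np

      def fit_spinning_modes(theta, pressure, m_orders):
          """Solve pressure(theta) = sum_m A_m exp(i m theta) in the least-squares sense."""
          E = np.exp(1j * np.outer(theta, m_orders))            # design matrix of mode shapes
          coeffs, *_ = np.linalg.lstsq(E, pressure, rcond=None)
          return dict(zip(m_orders, coeffs))

      theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)      # microphone positions (rad)
      m_orders = np.arange(-4, 5)
      truth = {2: 1.0 + 0.5j, -3: 0.3}                            # dominant m = 2 mode, weak m = -3
      p = sum(a * np.exp(1j * m * theta) for m, a in truth.items())
      p = p + 0.01 * (np.random.default_rng(6).normal(size=32) * (1 + 1j))
      amps = fit_spinning_modes(theta, p, m_orders)
      print({m: round(abs(a), 2) for m, a in amps.items() if abs(a) > 0.1})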

  1. Biodiversity Sampling Using a Global Acoustic Approach: Contrasting Sites with Microendemics in New Caledonia

    PubMed Central

    Gasc, Amandine; Sueur, Jérôme; Pavoine, Sandrine; Pellens, Roseli; Grandcolas, Philippe

    2013-01-01

    New Caledonia is a Pacific island with a unique biodiversity showing an extreme microendemism. Many species distributions observed on this island are extremely restricted, localized to mountains or rivers making biodiversity evaluation and conservation a difficult task. A rapid biodiversity assessment method based on acoustics was recently proposed. This method could help to document the unique spatial structure observed in New Caledonia. Here, this method was applied in an attempt to reveal differences among three mountain sites (Mandjélia, Koghis and Aoupinié) with similar ecological features and species richness level, but with high beta diversity according to different microendemic assemblages. In each site, several local acoustic communities were sampled with audio recorders. An automatic acoustic sampling was run on these three sites for a period of 82 successive days. Acoustic properties of animal communities were analysed without any species identification. A frequency spectral complexity index (NP) was used as an estimate of the level of acoustic activity and a frequency spectral dissimilarity index (Df) assessed acoustic differences between pairs of recordings. As expected, the index NP did not reveal significant differences in the acoustic activity level between the three sites. However, the acoustic variability estimated by the index Df could first be explained by changes in the acoustic communities along the 24-hour cycle and second by acoustic dissimilarities between the three sites. The results support the hypothesis that global acoustic analyses can detect acoustic differences between sites with similar species richness and similar ecological context, but with different species assemblages. This study also demonstrates that global acoustic methods applied at broad spatial and temporal scales could help to assess local biodiversity in the challenging context of microendemism. The method could be deployed over large areas, and could help to compare different sites and determine conservation priorities. PMID:23734245
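
    A minimal sketch of simplified versions of the two indices: NP counted here as the number of prominent peaks in the normalized mean spectrum, and Df as half the summed absolute difference between two normalized mean spectra (0 identical, 1 disjoint). The published definitions may differ in detail.

      # Minimal sketch: simplified spectral complexity (NP) and dissimilarity (Df) indices.
      import numpy as np
      from scipy.signal import spectrogram, find_peaks

      def mean_spectrum(x, fs):
          f, _, S = spectrogram(x, fs, nperseg=512)
          spec = S.mean(axis=1)
          return f, spec / spec.sum()                       # normalized mean spectrum

      def np_index(x, fs, prominence=0.005):
          _, spec = mean_spectrum(x, fs)
          peaks, _ = find_peaks(spec, prominence=prominence)
          return len(peaks)                                  # spectral complexity as a peak count

      def df_index(x1, x2, fs):
          _, s1 = mean_spectrum(x1, fs)
          _, s2 = mean_spectrum(x2, fs)
          return 0.5 * np.abs(s1 - s2).sum()                 # spectral dissimilarity in [0, 1]

      fs = 22050
      t = np.arange(fs) / fs
      dawn = np.sin(2 * np.pi * 2000 * t) + 0.5 * np.sin(2 * np.pi * 4500 * t)
      noon = np.sin(2 * np.pi * 3000 * t)
      print(np_index(dawn, fs), df_index(dawn, noon, fs))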

  2. Biodiversity sampling using a global acoustic approach: contrasting sites with microendemics in New Caledonia.

    PubMed

    Gasc, Amandine; Sueur, Jérôme; Pavoine, Sandrine; Pellens, Roseli; Grandcolas, Philippe

    2013-01-01

    New Caledonia is a Pacific island with a unique biodiversity showing an extreme microendemism. Many species distributions observed on this island are extremely restricted, localized to mountains or rivers making biodiversity evaluation and conservation a difficult task. A rapid biodiversity assessment method based on acoustics was recently proposed. This method could help to document the unique spatial structure observed in New Caledonia. Here, this method was applied in an attempt to reveal differences among three mountain sites (Mandjélia, Koghis and Aoupinié) with similar ecological features and species richness level, but with high beta diversity according to different microendemic assemblages. In each site, several local acoustic communities were sampled with audio recorders. An automatic acoustic sampling was run on these three sites for a period of 82 successive days. Acoustic properties of animal communities were analysed without any species identification. A frequency spectral complexity index (NP) was used as an estimate of the level of acoustic activity and a frequency spectral dissimilarity index (Df) assessed acoustic differences between pairs of recordings. As expected, the index NP did not reveal significant differences in the acoustic activity level between the three sites. However, the acoustic variability estimated by the index Df could first be explained by changes in the acoustic communities along the 24-hour cycle and second by acoustic dissimilarities between the three sites. The results support the hypothesis that global acoustic analyses can detect acoustic differences between sites with similar species richness and similar ecological context, but with different species assemblages. This study also demonstrates that global acoustic methods applied at broad spatial and temporal scales could help to assess local biodiversity in the challenging context of microendemism. The method could be deployed over large areas, and could help to compare different sites and determine conservation priorities.

  3. An experimental device for characterizing degassing processes and related elastic fingerprints: Analog volcano seismo-acoustic observations.

    PubMed

    Spina, Laura; Morgavi, Daniele; Cannata, Andrea; Campeggi, Carlo; Perugini, Diego

    2018-05-01

    A challenging objective of modern volcanology is to quantitatively characterize eruptive/degassing regimes from geophysical signals (in particular seismic and infrasonic), for both research and monitoring purposes. However, the outcomes of the attempts made so far are still considered very uncertain because volcanoes remain inaccessible when deriving quantitative information on crucial parameters such as plumbing system geometry and magma viscosity. In order to improve our knowledge of volcanic systems, a novel experimental device, which is capable of mimicking volcanic degassing processes with different regimes and gas flow rates, and allowing for the investigation of the related seismo-acoustic emissions, was designed and developed. The benefits of integrating observations on real volcanoes with seismo-acoustic signals generated in laboratory are many and include (i) the possibility to fix the controlling parameters such as the geometry of the structure where the gas flows, the gas flow rate, and the fluid viscosity; (ii) the possibility of performing acoustic measurements at different azimuthal and zenithal angles around the opening of the analog conduit, hence constraining the radiation pattern of different acoustic sources; (iii) the possibility to measure micro-seismic signals in distinct points of the analog conduit; (iv) finally, thanks to the transparent structure, it is possible to directly observe the degassing pattern through the optically clear analog magma and define the degassing regime producing the seismo-acoustic radiations. The above-described device represents a step forward in the analog volcano seismo-acoustic measurements.

  4. An experimental device for characterizing degassing processes and related elastic fingerprints: Analog volcano seismo-acoustic observations

    NASA Astrophysics Data System (ADS)

    Spina, Laura; Morgavi, Daniele; Cannata, Andrea; Campeggi, Carlo; Perugini, Diego

    2018-05-01

    A challenging objective of modern volcanology is to quantitatively characterize eruptive/degassing regimes from geophysical signals (in particular seismic and infrasonic), for both research and monitoring purposes. However, the outcomes of the attempts made so far are still considered very uncertain because volcanoes remain inaccessible when deriving quantitative information on crucial parameters such as plumbing system geometry and magma viscosity. In order to improve our knowledge of volcanic systems, a novel experimental device, which is capable of mimicking volcanic degassing processes with different regimes and gas flow rates, and allowing for the investigation of the related seismo-acoustic emissions, was designed and developed. The benefits of integrating observations on real volcanoes with seismo-acoustic signals generated in laboratory are many and include (i) the possibility to fix the controlling parameters such as the geometry of the structure where the gas flows, the gas flow rate, and the fluid viscosity; (ii) the possibility of performing acoustic measurements at different azimuthal and zenithal angles around the opening of the analog conduit, hence constraining the radiation pattern of different acoustic sources; (iii) the possibility to measure micro-seismic signals in distinct points of the analog conduit; (iv) finally, thanks to the transparent structure, it is possible to directly observe the degassing pattern through the optically clear analog magma and define the degassing regime producing the seismo-acoustic radiations. The above-described device represents a step forward in the analog volcano seismo-acoustic measurements.

  5. Spatio-temporal Analysis for New York State SPARCS Data

    PubMed Central

    Chen, Xin; Wang, Yu; Schoenfeld, Elinor; Saltz, Mary; Saltz, Joel; Wang, Fusheng

    2017-01-01

    Increased accessibility of health data provides unique opportunities to discover spatio-temporal patterns of diseases. For example, New York State SPARCS (Statewide Planning and Research Cooperative System) data collects patient level detail on patient demographics, diagnoses, services, and charges for each hospital inpatient stay and outpatient visit. Such data also provides home addresses for each patient. This paper presents our preliminary work on spatial, temporal, and spatial-temporal analysis of disease patterns for New York State using SPARCS data. We analyzed spatial distribution patterns of typical diseases at ZIP code level. We performed temporal analysis of common diseases based on 12 years’ historical data. We then compared the spatial variations for diseases with different levels of clustering tendency, and studied the evolution history of such spatial patterns. Case studies based on asthma demonstrated that the discovered spatial clusters are consistent with prior studies. We visualized our spatial-temporal patterns as animations through videos. PMID:28815148

  6. Tree invasion of a montane meadow complex: temporal trends, spatial patterns, and biotic interactions

    Treesearch

    Charles B. Halpern; Joseph A. Antos; Janine M. Rice; Ryan D. Haugo; Nicole L. Lang

    2010-01-01

    We combined spatial point pattern analysis, population age structures, and a time-series of stem maps to quantify spatial and temporal patterns of conifer invasion over a 200-yr period in three plots totaling 4 ha. In combination, spatial and temporal patterns of establishment suggest an invasion process shaped by biotic interactions, with facilitation promoting...

  7. Buoyancy characteristics of the bloater (Coregonus hoyi) in relation to patterns of vertical migration and acoustic backscattering

    USGS Publications Warehouse

    Fleischer, Guy W.; TeWinkel, Leslie M.

    1998-01-01

    Acoustic studies in Lake Michigan found that bloaters (Coregonus hoyi) were less reflective for their size than the other major pelagic species. This difference in in situ acoustic backscattering could indicate that the deep-water bloaters have compressed swimbladders for much of their vertical range, with related implications for buoyancy. To test this hypothesis, the buoyancy characteristics of bloaters were determined with fish placed in a cage that was lowered to the bottom and monitored with an underwater camera. We found bloaters were positively buoyant near the surface, neutrally buoyant at intermediate strata, and negatively buoyant near the bottom. This pattern was consistent over the range of depths at which bloaters occur. The depth of neutral buoyancy (near the 50-m stratum) corresponds with the maximum extent of vertical migration for bloaters observed in acoustic surveys. Fish below this depth would be negatively buoyant, which supports our contention that bloaters deeper in the water column have compressed swimbladders. Understanding the buoyancy characteristics of pelagic fishes will help to predict the effects of vertical migration on target strength measurement and confirms the use of acoustics as a tool to identify and quantify the ecological phenomenon of vertical migration.

  8. Acoustical study of the development of stop consonants in children

    NASA Astrophysics Data System (ADS)

    Imbrie, Annika K.

    2003-10-01

    This study focuses on the acoustic patterns of stop consonants and adjacent vowels as they develop in young children (ages 2.6-3.3) over a six-month period. The acoustic properties that are being measured for stop consonants include spectra of bursts, frication noise and aspiration noise, and formant movements. Additionally, acoustic landmarks are labeled for measurements of durations of events determined by these landmarks. These acoustic measurements are being interpreted in terms of the supraglottal, laryngeal, and respiratory actions that give rise to them. Preliminary data show that some details of the child's gestures are still far from achieving the adult pattern. The burst of frication noise at the release tends to be shorter than adult values, and often consists of multiple bursts. From the burst spectrum, the place of articulation appears to be normal. Finally, coordination of closure of the glottis and release of the primary articulator is still quite variable, as is apparent from a large standard deviation in VOT. Analysis of longitudinal data on young children will result in better models of the development of the coordination of articulation, phonation, and respiration for motor speech production. [Work supported by NIH Grants Nos. DC00038 and DC00075.]

  9. Acoustical study of the development of stop consonants in children

    NASA Astrophysics Data System (ADS)

    Imbrie, Annika K.

    2004-05-01

    This study focuses on the acoustic patterns of stop consonants and adjacent vowels as they develop in young children (ages 2.6-3.3) over a 6-month period. The acoustic properties that are being measured for stop consonants include spectra of bursts, frication noise and aspiration noise, and formant movements. Additionally, acoustic landmarks are labeled for measurements of durations of events determined by these landmarks. These acoustic measurements are being interpreted in terms of the supraglottal, laryngeal, and respiratory actions that give rise to them. Preliminary data show that some details of the child's gestures are still far from achieving the adult pattern. The burst of frication noise at the release tends to be shorter than adult values, and often consists of multiple bursts, possibly due to greater compliance of the active articulator. From the burst spectrum, the place of articulation appears to be normal. Finally, coordination of closure of the glottis and release of the primary articulator is still quite variable, as is apparent from a large standard deviation in VOT. Analysis of longitudinal data on young children will result in better models of the development of motor speech production. [Work supported by NIH Grant Nos. DC00038 and DC00075.]

  10. Shared developmental and evolutionary origins for neural basis of vocal–acoustic and pectoral–gestural signaling

    PubMed Central

    Bass, Andrew H.; Chagnaud, Boris P.

    2012-01-01

    Acoustic signaling behaviors are widespread among bony vertebrates, which include the majority of living fishes and tetrapods. Developmental studies in sound-producing fishes and tetrapods indicate that central pattern generating networks dedicated to vocalization originate from the same caudal hindbrain rhombomere (rh) 8-spinal compartment. Together, the evidence suggests that vocalization and its morphophysiological basis, including mechanisms of vocal–respiratory coupling that are widespread among tetrapods, are ancestral characters for bony vertebrates. Premotor-motor circuitry for pectoral appendages that function in locomotion and acoustic signaling develops in the same rh8-spinal compartment. Hence, vocal and pectoral phenotypes in fishes share both developmental origins and roles in acoustic communication. These findings lead to the proposal that the coupling of more highly derived vocal and pectoral mechanisms among tetrapods, including those adapted for nonvocal acoustic and gestural signaling, originated in fishes. Comparative studies further show that rh8 premotor populations have distinct neurophysiological properties coding for equally distinct behavioral attributes such as call duration. We conclude that neural network innovations in the spatiotemporal patterning of vocal and pectoral mechanisms of social communication, including forelimb gestural signaling, have their evolutionary origins in the caudal hindbrain of fishes. PMID:22723366

  11. Acoustic features of objects matched by an echolocating bottlenose dolphin.

    PubMed

    Delong, Caroline M; Au, Whitlow W L; Lemonds, David W; Harley, Heidi E; Roitblat, Herbert L

    2006-03-01

    The focus of this study was to investigate how dolphins use acoustic features in returning echolocation signals to discriminate among objects. An echolocating dolphin performed a match-to-sample task with objects that varied in size, shape, material, and texture. After the task was completed, the features of the object echoes were measured (e.g., target strength, peak frequency). The dolphin's error patterns were examined in conjunction with the between-object variation in acoustic features to identify the acoustic features that the dolphin used to discriminate among the objects. The present study explored two hypotheses regarding the way dolphins use acoustic information in echoes: (1) use of a single feature, or (2) use of a linear combination of multiple features. The results suggested that dolphins do not use a single feature across all object sets or a linear combination of six echo features. Five features appeared to be important to the dolphin on four or more sets: the echo spectrum shape, the pattern of changes in target strength and number of highlights as a function of object orientation, and peak and center frequency. These data suggest that dolphins use multiple features and integrate information across echoes from a range of object orientations.

  12. Model helicopter rotor high-speed impulsive noise: Measured acoustics and blade pressures

    NASA Technical Reports Server (NTRS)

    Boxwell, D. A.; Schmitz, F. H.; Splettstoesser, W. R.; Schultz, K. J.

    1983-01-01

    A 1/17-scale research model of the AH-1 series helicopter main rotor was tested. Model-rotor acoustic and simultaneous blade pressure data were recorded at high speeds where full-scale helicopter high-speed impulsive noise levels are known to be dominant. Model-rotor measurements of the peak acoustic pressure levels, waveform shapes, and directivity patterns are directly compared with full-scale investigations, using an equivalent in-flight technique. Model acoustic data are shown to scale remarkably well in shape and in amplitude with full-scale results. Model rotor-blade pressures are presented for rotor operating conditions both with and without shock-like discontinuities in the radiated acoustic waveform. Acoustically, both model and full-scale measurements support current evidence that above certain high subsonic advancing-tip Mach numbers, local shock waves that exist on the rotor blades "delocalize" and radiate to the acoustic far-field.

  13. High speed imaging of bubble clouds generated in pulsed ultrasound cavitational therapy--histotripsy.

    PubMed

    Xu, Zhen; Raghavan, Mekhala; Hall, Timothy L; Chang, Ching-Wei; Mycek, Mary-Ann; Fowlkes, J Brian; Cain, Charles A

    2007-10-01

    Our recent studies have demonstrated that mechanical fractionation of tissue structure with sharply demarcated boundaries can be achieved using short (<20 μs), high intensity ultrasound pulses delivered at low duty cycles. We have called this technique histotripsy. Histotripsy has potential clinical applications where noninvasive tissue fractionation and/or tissue removal are desired. The primary mechanism of histotripsy is thought to be acoustic cavitation, which is supported by a temporally changing acoustic backscatter observed during the histotripsy process. In this paper, a fast-gated digital camera was used to image the hypothesized cavitating bubble cloud generated by histotripsy pulses. The bubble cloud was produced at a tissue-water interface and inside an optically transparent gelatin phantom which mimics bulk tissue. The imaging shows the following: (1) Initiation of a temporally changing acoustic backscatter was due to the formation of a bubble cloud; (2) The pressure threshold to generate a bubble cloud was lower at a tissue-fluid interface than inside bulk tissue; and (3) at higher pulse pressure, the bubble cloud lasted longer and grew larger. The results add further support to the hypothesis that the histotripsy process is due to a cavitating bubble cloud and may provide insight into the sharp boundaries of histotripsy lesions.

  14. Assessment of temporal state-dependent interactions between auditory fMRI responses to desired and undesired acoustic sources.

    PubMed

    Olulade, O; Hu, S; Gonzalez-Castillo, J; Tamer, G G; Luh, W-M; Ulmer, J L; Talavage, T M

    2011-07-01

    A confounding factor in auditory functional magnetic resonance imaging (fMRI) experiments is the presence of the acoustic noise inherently associated with the echo planar imaging acquisition technique. Previous studies have demonstrated that this noise can induce unwanted neuronal responses that can mask stimulus-induced responses. Similarly, activation accumulated over multiple stimuli has been demonstrated to elevate the baseline, thus reducing the dynamic range available for subsequent responses. To best evaluate responses to auditory stimuli, it is necessary to account for the presence of all recent acoustic stimulation, beginning with an understanding of the attenuating effects brought about by interactions between and among induced unwanted neuronal responses and responses to desired auditory stimuli. This study focuses on the characterization of the duration of this temporal memory and qualitative assessment of the associated response attenuation. Two experimental parameters, inter-stimulus interval (ISI) and repetition time (TR), were varied during an fMRI experiment in which participants were asked to passively attend to an auditory stimulus. Results present evidence of a state-dependent interaction between induced responses. As expected, the attenuating effects of these interactions become less significant as TR and ISI increase and, in contrast to previous work, persist up to 18 s after a stimulus presentation. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. ‘Inner voices’: the cerebral representation of emotional voice cues described in literary texts

    PubMed Central

    Kreifelts, Benjamin; Gößling-Arnold, Christina; Wertheimer, Jürgen; Wildgruber, Dirk

    2014-01-01

    While non-verbal affective voice cues are generally recognized as a crucial behavioral guide in any day-to-day conversation, their role as a powerful source of information may extend well beyond close-up personal interactions to other modes of communication such as written discourse or literature. Building on the assumption that similarities between the different ‘modes’ of voice cues may not only be limited to their functional role but may also include the cerebral mechanisms engaged in the decoding process, the present functional magnetic resonance imaging study aimed at exploring brain responses associated with processing emotional voice signals described in literary texts. Emphasis was placed on evaluating ‘voice’-sensitive as well as task- and emotion-related modulations of brain activation frequently associated with the decoding of acoustic vocal cues. The findings suggest that several similarities emerge with respect to the perception of acoustic voice signals: results identify the superior temporal, lateral and medial frontal cortex as well as the posterior cingulate cortex and cerebellum as contributing to the decoding process, with similarities to acoustic voice perception reflected in a ‘voice’-cue preference of temporal voice areas as well as an emotion-related modulation of the medial frontal cortex and a task-modulated response of the lateral frontal cortex. PMID:24396008

  16. Using a numerical model to understand the connection between the ocean and acoustic travel-time measurements.

    PubMed

    Powell, Brian S; Kerry, Colette G; Cornuelle, Bruce D

    2013-10-01

    Measurements of acoustic ray travel-times in the ocean provide synoptic integrals of the ocean state between source and receiver. It is known that the ray travel-time is sensitive to variations in the ocean at the transmission time, but the sensitivity of the travel-time to spatial variations in the ocean prior to the acoustic transmission has not been quantified. This study examines the sensitivity of ray travel-time to the temporally and spatially evolving ocean state in the Philippine Sea using the adjoint of a numerical model. A one-year series of five-day backward integrations of the adjoint model quantifies the sensitivity of travel-times to varying dynamics that can alter the travel-time of a 611 km ray by 200 ms. The early evolution of the sensitivities reveals high-mode internal waves that dissipate quickly, leaving the lowest three modes and providing a connection to variations in internal tide generation prior to the sample time. Travel-times are also strongly sensitive to advective effects that alter density along the ray path. These sensitivities reveal how travel-time measurements are affected by both nearby and distant waters. Temporal nonlinearity of the sensitivities suggests that prior knowledge of the ocean state is necessary to exploit the travel-time observations.
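
    For orientation, the first-order ray-theoretic relation behind such travel-time sensitivities can be written as

        \delta T \;\approx\; -\int_{\Gamma} \frac{\delta c(\mathbf{x})}{c_{0}^{2}(\mathbf{x})}\,\mathrm{d}s ,

    where \Gamma is the unperturbed ray path, c_0 the reference sound-speed field, and \delta c the sound-speed perturbation. This textbook approximation is given here only as a reference point; the study itself derives its sensitivities from the adjoint of a full numerical ocean model, which also captures the advective and internal-wave effects described above.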

  17. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
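
    As a rough illustration of the spectral-derivative step described above, the Python sketch below differentiates a periodic 1-D field with the FFT (multiplication by ik in the wavenumber domain); it is only one ingredient of the full 3-D staggered-grid k-space scheme, and the grid and test function are chosen purely for illustration.

        import numpy as np

        def spectral_derivative(f, dx, axis=0):
            # Differentiate a periodic field along one axis by multiplying its
            # Fourier transform by i*k (the spectral alternative to finite differences).
            n = f.shape[axis]
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
            shape = [1] * f.ndim
            shape[axis] = n
            F = np.fft.fft(f, axis=axis)
            return np.real(np.fft.ifft(1j * k.reshape(shape) * F, axis=axis))

        # Check: d/dx sin(x) on a periodic grid should recover cos(x) to spectral accuracy.
        x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        err = np.max(np.abs(spectral_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)))
        print(f"max error: {err:.2e}")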

  18. High Speed Imaging of Bubble Clouds Generated in Pulsed Ultrasound Cavitational Therapy—Histotripsy

    PubMed Central

    Xu, Zhen; Raghavan, Mekhala; Hall, Timothy L.; Chang, Ching-Wei; Mycek, Mary-Ann; Fowlkes, J. Brian; Cain, Charles A.

    2009-01-01

    Our recent studies have demonstrated that mechanical fractionation of tissue structure with sharply demarcated boundaries can be achieved using short (<20 μs), high intensity ultrasound pulses delivered at low duty cycles. We have called this technique histotripsy. Histotripsy has potential clinical applications where noninvasive tissue fractionation and/or tissue removal are desired. The primary mechanism of histotripsy is thought to be acoustic cavitation, which is supported by a temporally changing acoustic backscatter observed during the histotripsy process. In this paper, a fast-gated digital camera was used to image the hypothesized cavitating bubble cloud generated by histotripsy pulses. The bubble cloud was produced at a tissue-water interface and inside an optically transparent gelatin phantom which mimics bulk tissue. The imaging shows the following: 1) Initiation of a temporally changing acoustic backscatter was due to the formation of a bubble cloud; 2) The pressure threshold to generate a bubble cloud was lower at a tissue-fluid interface than inside bulk tissue; and 3) at higher pulse pressure, the bubble cloud lasted longer and grew larger. The results add further support to the hypothesis that the histotripsy process is due to a cavitating bubble cloud and may provide insight into the sharp boundaries of histotripsy lesions. PMID:18019247

  19. Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues

    PubMed

    Liu, Andrew S K; Tsunada, Joji; Gold, Joshua I; Cohen, Yale E

    2015-01-01

    Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects the perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency of how auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects' speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence.

  20. An improved genetic algorithm for designing optimal temporal patterns of neural stimulation

    NASA Astrophysics Data System (ADS)

    Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.

    2017-12-01

    Objective. Electrical neuromodulation therapies typically apply constant-frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended to design optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
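
    As a point of comparison, a minimal generational GA over binary pulse patterns is sketched below in Python. The fitness function is a hypothetical surrogate (it rewards roughly 12 pulses with no adjacent pulses), and none of the authors' specific modifications, which are the subject of the paper, are included.

        import random

        def genetic_search(fitness, n_bits=60, pop_size=40, generations=100,
                           p_cross=0.8, p_mut=0.02, seed=0):
            # Generic baseline GA: tournament selection, one-point crossover,
            # bitwise mutation, and two-individual elitism.
            rng = random.Random(seed)
            pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                new_pop = sorted(pop, key=fitness, reverse=True)[:2]   # keep the two best
                while len(new_pop) < pop_size:
                    p1 = max(rng.sample(pop, 3), key=fitness)          # tournament parents
                    p2 = max(rng.sample(pop, 3), key=fitness)
                    if rng.random() < p_cross:                         # one-point crossover
                        cut = rng.randrange(1, n_bits)
                        child = p1[:cut] + p2[cut:]
                    else:
                        child = p1[:]
                    child = [b ^ 1 if rng.random() < p_mut else b for b in child]
                    new_pop.append(child)
                pop = new_pop
            return max(pop, key=fitness)

        # Hypothetical surrogate fitness: each bit is a time bin, 1 = deliver a pulse.
        best = genetic_search(lambda bits: -abs(sum(bits) - 12)
                              - sum(a & b for a, b in zip(bits, bits[1:])))
        print(best)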

  1. Study of abrasive wear process of lining of grinding chamber of vortex-acoustic disperser

    NASA Astrophysics Data System (ADS)

    Perelygin, D. N.

    2018-03-01

    Theoretical and experimental studies of the gas-abrasive wear of the lining of the grinding chamber of a vortex-acoustic disperser made it possible to establish the conditions and patterns under which such wear occurs and to develop proposals for reducing it.

  2. Improved acoustic levitation apparatus

    NASA Technical Reports Server (NTRS)

    Berge, L. H.; Johnson, J. L.; Oran, W. A.; Reiss, D. A.

    1980-01-01

    A concave driver and reflector enhance and shape the levitation forces in an acoustic resonance system. A single-mode standing-wave pattern is focused by a ring element situated between the driver and reflector. The concave surfaces increase levitating forces by up to a factor of 6 compared with conventional flat surfaces, making it possible to suspend heavier objects.

  3. Automated pattern analysis: A new silent partner in insect acoustic detection studies

    USDA-ARS?s Scientific Manuscript database

    This seminar reviews methods that have been developed for automated analysis of field-collected sounds used to estimate pest populations and guide insect pest management decisions. Several examples are presented of successful usage of acoustic technology to map insect distributions in field environ...

  4. Intensity invariance properties of auditory neurons compared to the statistics of relevant natural signals in grasshoppers.

    PubMed

    Clemens, Jan; Weschke, Gerroth; Vogel, Astrid; Ronacher, Bernhard

    2010-04-01

    The temporal pattern of amplitude modulations (AM) is often used to recognize acoustic objects. To identify objects reliably, intensity invariant representations have to be formed. We approached this problem within the auditory pathway of grasshoppers. We presented AM patterns modulated at different time scales and intensities. Metric space analysis of neuronal responses allowed us to determine how well, how invariantly, and at which time scales AM frequency is encoded. We find that in some neurons spike-count cues contribute substantially (20-60%) to the decoding of AM frequency at a single intensity. However, such cues are not robust when intensity varies. The general intensity invariance of the system is poor. However, there exists a range of AM frequencies around 83 Hz where intensity invariance of local interneurons is relatively high. In this range, natural communication signals exhibit much variation between species, suggesting an important behavioral role for this frequency band. We hypothesize, just as has been proposed for human speech, that the communication signals might have evolved to match the processing properties of the receivers. This contrasts with optimal coding theory, which postulates that neuronal systems are adapted to the statistics of the relevant signals.
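
    The metric space analysis mentioned above is commonly carried out with a spike-time distance such as the Victor-Purpura metric; assuming that family of methods (the abstract does not name the exact metric), a minimal Python version is:

        import numpy as np

        def victor_purpura(t1, t2, q):
            # Victor-Purpura spike-train distance: cost 1 to insert/delete a spike,
            # q * |dt| to shift one; q (in 1/s) sets the temporal precision probed.
            n, m = len(t1), len(t2)
            D = np.zeros((n + 1, m + 1))
            D[:, 0] = np.arange(n + 1)
            D[0, :] = np.arange(m + 1)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    D[i, j] = min(D[i - 1, j] + 1.0,
                                  D[i, j - 1] + 1.0,
                                  D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
            return D[n, m]

        # Toy spike times (s) from two responses to the same AM stimulus.
        a = [0.012, 0.024, 0.036, 0.048]
        b = [0.013, 0.025, 0.035, 0.050]
        print(victor_purpura(a, b, q=200.0))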

  5. A new evaluation of heat distribution on facial skin surface by infrared thermography.

    PubMed

    Haddad, Denise S; Brioschi, Marcos L; Baladi, Marina G; Arita, Emiko S

    2016-01-01

    The aim of this study was to identify the facial areas defined by thermal gradients in individuals compatible with the pattern of normality, and to quantify and describe them anatomically. The sample consisted of 161 volunteers of both genders, aged between 26 and 84 years (63 ± 15 years). The results demonstrated that the suggested thermal gradient areas were present in at least 95% of the thermograms evaluated and that there were significant differences in temperature between genders and racial groups and for the variables "odontalgia", "dental prosthesis" and "history of migraine" (p < 0.05). Moreover, there was no statistically significant difference in absolute temperatures across ages or between the right and left sides of the face in individuals compatible with the pattern of normality (ΔT = 0.11°C). The authors concluded that the suggested thermal gradient areas were present in at least 95% of the thermograms evaluated, that the areas of high intensity on the face were the medial palpebral commissure, labial commissure, temporal region, supratrochlear region and external acoustic meatus, and that the points of low intensity were the inferior labial, lateral palpebral commissure and nasolabial regions.

  6. The gap-startle paradigm to assess auditory temporal processing: Bridging animal and human research.

    PubMed

    Fournier, Philippe; Hébert, Sylvie

    2016-05-01

    The gap-prepulse inhibition of the acoustic startle (GPIAS) paradigm is the primary test used in animal research to identify gap detection thresholds and impairment. When a silent gap is presented shortly before a loud startling stimulus, the startle reflex is inhibited, and the extent of inhibition is assumed to reflect detection. Here, we applied the same paradigm in humans. One hundred and fifty-seven normal-hearing participants were tested using one of five gap durations (5, 25, 50, 100, 200 ms) in one of two paradigms: with the gap embedded in, or following, the continuous background noise. The duration-inhibition relationship was observable for both conditions but followed different patterns. In the gap-embedded paradigm, GPIAS increased significantly with gap duration up to 50 ms and then more slowly up to 200 ms (trend only). In contrast, in the gap-following paradigm, significant inhibition (different from 0) was observable only at gap durations from 50 to 200 ms. The finding that different patterns emerge depending on gap position within the background noise is compatible with distinct mechanisms underlying the two paradigms. © 2016 Society for Psychophysiological Research.
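
    GPIAS is usually quantified as the percentage by which the startle amplitude on gap trials falls below the amplitude on no-gap trials; a minimal computation is sketched below (the amplitude values are illustrative, not data from this study).

        import numpy as np

        def gpias_percent_inhibition(startle_gap, startle_nogap):
            # 100 * (1 - mean gap-trial amplitude / mean no-gap amplitude):
            # 0 means no inhibition, larger values mean stronger gap detection.
            return 100.0 * (1.0 - np.mean(startle_gap) / np.mean(startle_nogap))

        # Toy startle amplitudes (arbitrary units) for 50-ms gap vs. no-gap trials.
        print(gpias_percent_inhibition([2.1, 1.8, 2.4, 2.0], [3.2, 3.5, 2.9, 3.3]))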

  7. Band-limited Green's Functions for Quantitative Evaluation of Acoustic Emission Using the Finite Element Method

    NASA Technical Reports Server (NTRS)

    Leser, William P.; Yuan, Fuh-Gwo; Leser, William P.

    2013-01-01

    A method of numerically estimating dynamic Green's functions using the finite element method is proposed. These Green's functions are accurate in a limited frequency range dependent on the mesh size used to generate them. This range can often match or exceed the frequency sensitivity of the traditional acoustic emission sensors. An algorithm is also developed to characterize an acoustic emission source by obtaining information about its strength and temporal dependence. This information can then be used to reproduce the source in a finite element model for further analysis. Numerical examples are presented that demonstrate the ability of the band-limited Green's functions approach to determine the moment tensor coefficients of several reference signals to within seven percent, as well as accurately reproduce the source-time function.
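
    Once band-limited Green's functions are in hand, recovering moment-tensor coefficients reduces, under a linear-source assumption with a known source-time function, to a least-squares fit. The Python sketch below shows that generic inversion step with synthetic data; it is not the paper's specific algorithm, and all array shapes and values are assumptions made for illustration.

        import numpy as np

        def estimate_moment_tensor(green_responses, recorded):
            # green_responses: (n_samples, 6) sensor response to a unit source in each
            # of the six independent moment-tensor components (already convolved with
            # an assumed source-time function); recorded: (n_samples,) measured waveform.
            m, *_ = np.linalg.lstsq(green_responses, recorded, rcond=None)
            return m

        # Synthetic check: recover a known moment tensor from noisy data.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((2000, 6))
        m_true = np.array([1.0, 1.0, 1.0, 0.0, 0.2, 0.0])
        u = A @ m_true + 0.01 * rng.standard_normal(2000)
        print(np.round(estimate_moment_tensor(A, u), 3))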

  8. A compact time reversal emitter-receiver based on a leaky random cavity

    PubMed Central

    Luong, Trung-Dung; Hies, Thomas; Ohl, Claus-Dieter

    2016-01-01

    Time reversal acoustics (TRA) has found widespread application in communication and measurement. In general, a scattering medium in combination with multiple transducers is needed to achieve a sufficiently large acoustical aperture. In this paper, we report an implementation of a cost-effective and compact time reversal emitter-receiver driven by a single piezoelectric element. It is based on a leaky cavity with random 3D-printed surfaces. The random surfaces greatly increase the spatio-temporal focusing quality compared with flat surfaces and allow the focus of an acoustic beam to be steered over an angle of 41°. We also demonstrate its potential use as a scanner by embedding a receiver to detect an object from its backscatter without moving the TRA emitter. PMID:27811957
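
    The time-reversal principle behind such a device can be illustrated in a few lines of Python: emitting the time-reversed impulse response of a reverberant (cavity-like) channel produces, back at the focal point, the autocorrelation of that response, i.e., a sharp temporal focus. The impulse response below is synthetic and stands in for a leaky-cavity response only as an illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        fs = 1_000_000                                  # assumed 1 MHz sampling rate
        h = np.zeros(4000)                              # synthetic multipath impulse response
        h[rng.integers(200, 3800, size=60)] = rng.standard_normal(60)
        emitted = h[::-1]                               # time-reversed recording is re-emitted
        at_focus = np.convolve(emitted, h)              # field re-received at the focal point
        print(np.argmax(np.abs(at_focus)), len(h) - 1)  # the peak lands at the focal instant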

  9. Behavioral and Single-Neuron Sensitivity to Millisecond Variations in Temporally Patterned Communication Signals

    PubMed Central

    Baker, Christa A.; Ma, Lisa; Casareale, Chelsea R.

    2016-01-01

    In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8–12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision. SIGNIFICANCE STATEMENT The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals. PMID:27559179

  10. Behavioral and Single-Neuron Sensitivity to Millisecond Variations in Temporally Patterned Communication Signals.

    PubMed

    Baker, Christa A; Ma, Lisa; Casareale, Chelsea R; Carlson, Bruce A

    2016-08-24

    In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8-12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision. The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals. Copyright © 2016 the authors 0270-6474/16/368985-16$15.00/0.

  11. Comparison between psycho-acoustics and physio-acoustic measurement to determine optimum reverberation time of pentatonic angklung music concert hall

    NASA Astrophysics Data System (ADS)

    Sudarsono, Anugrah S.; Merthayasa, I. G. N.; Suprijanto

    2015-09-01

    This research compared psycho-acoustic and physio-acoustic measurements to find the optimum reverberation time of the sound field for angklung music. The psycho-acoustic measurement was conducted using a paired-comparison method, and the physio-acoustic measurement was conducted with EEG at the T3, T4, FP1, and FP2 measurement points. EEG was recorded from 5 participants. Pentatonic angklung music was used as the stimulus, with reverberation times varied from 0.8 s to 1.6 s in 0.2 s steps. The EEG signal was analysed using a power spectral density method on the alpha, high-alpha, and theta bands. Psycho-acoustic measurements on 50 participants showed that the preferred reverberation time for pentatonic angklung music was 1.2 s. This result was consistent with the theta-wave measurement at the FP2 point. The high-alpha wave at the T4 point gave different results but showed patterns similar to the psycho-acoustic measurement.
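
    A generic version of the power spectral density analysis described above can be sketched with Welch's method in Python; the sampling rate, band edges, and test signal are assumptions for illustration and are not taken from the paper.

        import numpy as np
        from scipy.signal import welch

        def band_power(eeg, fs, f_lo, f_hi):
            # Welch PSD of one EEG channel, integrated over a frequency band.
            f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
            mask = (f >= f_lo) & (f <= f_hi)
            return np.trapz(pxx[mask], f[mask])

        # Toy signal: a 10 Hz alpha rhythm plus noise, 60 s at 256 Hz.
        fs = 256.0
        t = np.arange(0, 60, 1 / fs)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        print(band_power(eeg, fs, 8, 13))   # alpha band (8-13 Hz, assumed edges)
        print(band_power(eeg, fs, 4, 8))    # theta band (4-8 Hz, assumed edges)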

  12. First images of thunder: Acoustic imaging of triggered lightning

    NASA Astrophysics Data System (ADS)

    Dayeh, M. A.; Evans, N. D.; Fuselier, S. A.; Trevino, J.; Ramaekers, J.; Dwyer, J. R.; Lucia, R.; Rassoul, H. K.; Kotovsky, D. A.; Jordan, D. M.; Uman, M. A.

    2015-07-01

    An acoustic camera comprising a linear microphone array is used to image the thunder signature of triggered lightning. Measurements were taken at the International Center for Lightning Research and Testing in Camp Blanding, FL, during the summer of 2014. The array was positioned in an end-fire orientation, thus enabling the peak acoustic reception pattern to be steered vertically with a frequency-dependent spatial resolution. On 14 July 2014, a lightning event with nine return strokes was successfully triggered. We present the first acoustic images of individual return strokes at high frequencies (>1 kHz) and compare the acoustically inferred profile with optical images. We find (i) a strong correlation between the return stroke peak current and the radiated acoustic pressure and (ii) an acoustic signature from an M-component current pulse with an unusually fast rise time. These results show that acoustic imaging enables clear identification and quantification of thunder sources as a function of lightning channel altitude.
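
    A linear array steers its reception pattern by delaying and summing the microphone channels; the Python sketch below shows that delay-and-sum idea with integer-sample delays. It illustrates the beamforming principle only; the actual acoustic-camera geometry, calibration, and processing chain are not reproduced here. Scanning look_direction over a set of steering angles and comparing the output power of each beam is what produces the directional image.

        import numpy as np

        def delay_and_sum(signals, mic_positions, look_direction, fs, c=343.0):
            # signals: (n_mics, n_samples); mic_positions: (n_mics, 3) in meters;
            # look_direction: unit vector from the array toward the source.
            # A mic at position p hears a plane wavefront (p . d)/c seconds early,
            # so delay each channel by the complementary amount to align them.
            advance = mic_positions @ look_direction / c
            shifts = np.round((advance.max() - advance) * fs).astype(int)
            n = signals.shape[1] - shifts.max()
            out = np.zeros(n)
            for sig, s in zip(signals, shifts):
                out += sig[s:s + n]
            return out / len(signals)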

  13. Shelf-Scale Mapping of Fish Distribution Using Active and Passive Acoustics

    NASA Astrophysics Data System (ADS)

    Wall, Carrie C.

    Fish sound production has been associated with courtship and spawning behavior. Acoustic recordings of fish sounds can be used to identify distribution and behavior. Passive acoustic monitoring (PAM) can record large amounts of acoustic data in a specific area for days to years. These data can be collected in remote locations under potentially unsafe seas throughout a 24-hour period, providing datasets unattainable using observer-based methods. However, the instruments must withstand the caustic ocean environment and be retrieved to obtain the recorded data. This can prove difficult due to the risk of PAMs being lost, stolen or damaged, especially in highly active areas. In addition, point-source sound recordings are only one aspect of fish biogeography. Passive acoustic platforms that produce low self-generated noise, have high retrieval rates, and are equipped with a suite of environmental sensors are needed to relate patterns in fish sound production to concurrently collected oceanographic conditions on large, synoptic scales. The association of sound with reproduction further invokes the need for such non-invasive, near-real-time datasets that can be used to enhance current management methods limited by survey bias, inaccurate fisher reports, and extensive delays between fisheries data collection and population assessment. Red grouper (Epinephelus morio) exhibit the distinctive behavior of digging holes and producing a unique sound during courtship. These behaviors can be used to identify red grouper distribution and potential spawning habitat over large spatial scales. The goal of this research was to provide a greater understanding of the temporal and spatial distribution of red grouper sound production and holes on the central West Florida Shelf (WFS) using active sonar and passive acoustic recorders. The technology demonstrated here establishes the necessary methods to map shelf-scale fish sound production. The results of this work could aid resource managers in determining critical spawning times and areas. Over 403,000 acoustic recordings were made across an approximately 39,000 km2 area on the WFS during periods throughout 2008 to 2011 using stationary passive acoustic recorders and hydrophone-integrated gliders. A custom MySQL database with a portal to MATLAB was developed to catalogue and process the large acoustic dataset stored on a server. Analyses of these data determined the daily, seasonal and spatial patterns of red grouper as well as toadfish and several unconfirmed fish species termed 100 Hz Pulsing, 6 kHz Sound, 300 Hz FM Harmonic, and 365 Hz Harmonic. Red grouper sound production was correlated with sunrise and sunset, and was primarily recorded in water 15 to 93 m deep, with increased calling within known hard bottom areas and in Steamboat Lumps Marine Reserve. Analyses of high-resolution multibeam bathymetry collected in a portion of the reserve in 2006 and 2009 allowed detailed documentation and characterization of holes excavated by red grouper. Comparisons of the spatially overlapping datasets suggested holes are constructed and maintained over time, and provided evidence of an increase in spawning habitat usage. High rates of sound production recorded from stationary recorders and a glider deployment were correlated with high hole density in Steamboat Lumps.
This research demonstrates the utility of coupling passive acoustic data with high-resolution bathymetric data to verify the occupation of suspected male territory (holes) and to provide a more complete understanding of effective spawning habitat. Annual peaks in calling (July and August, and November and December) did not correspond to spawning peaks (March to May); however, passive acoustic monitoring was established as an effective tool to identify areas of potential spawning activity by recording the presence of red grouper. Sounds produced by other species of fish were recorded in the passive acoustic dataset. The distribution of toadfish calls suggests two species (Opsanus beta and O. pardus) were recorded; the latter had not been previously described. The call characteristics and spatial distribution of the four unknown fish-related sounds can be used to help confirm the sources. Long-term PAM studies that provide systematic monitoring can be a valuable assessment tool for all soniferous species. Glider technology, due to a high rate of successful retrieval and low self-generated noise, was proven to be a reliable and relatively inexpensive method to collect fisheries acoustic data in the field. The implementation of regular deployments of hydrophone-integrated gliders and fixed location passive acoustic monitoring stations is suggested to enhance fisheries management.

  14. Spectro-temporal modulation masking patterns reveal frequency selectivity.

    PubMed

    Oetjen, Arne; Verhey, Jesko L

    2015-02-01

    The present study investigated the possibility that the human auditory system demonstrates frequency selectivity to spectro-temporal amplitude modulations. Threshold modulation depth for detecting sinusoidal spectro-temporal modulations was measured using a generalized masked threshold pattern paradigm with narrowband masker modulations. Four target spectro-temporal modulations were examined, differing in their temporal and spectral modulation frequencies: a temporal modulation of -8, 8, or 16 Hz combined with a spectral modulation of 1 cycle/octave and a temporal modulation of 4 Hz combined with a spectral modulation of 0.5 cycles/octave. The temporal center frequencies of the masker modulation ranged from 0.25 to 4 times the target temporal modulation. The spectral masker-modulation center-frequencies were 0, 0.5, 1, 1.5, and 2 times the target spectral modulation. For all target modulations, the pattern of average thresholds for the eight normal-hearing listeners was consistent with the hypothesis of a spectro-temporal modulation filter. Such a pattern of modulation-frequency sensitivity was predicted on the basis of psychoacoustical data for purely temporal amplitude modulations and purely spectral amplitude modulations. An analysis of separability indicates that, for the present data set, selectivity in the spectro-temporal modulation domain can be described by a combination of a purely spectral and a purely temporal modulation filter function.

  15. Prosodic structure shapes the temporal realization of intonation and manual gesture movements.

    PubMed

    Esteve-Gibert, Núria; Prieto, Pilar

    2013-06-01

    Previous work on the temporal coordination between gesture and speech found that the prominence in gesture coordinates with speech prominence. In this study, the authors investigated the anchoring regions in speech and pointing gesture that align with each other. The authors hypothesized that (a) in contrastive focus conditions, the gesture apex is anchored in the intonation peak and (b) the upcoming prosodic boundary influences the timing of gesture and intonation movements. Fifteen Catalan speakers pointed at a screen while pronouncing a target word with different metrical patterns in a contrastive focus condition and followed by a phrase boundary. A total of 702 co-speech deictic gestures were acoustically and gesturally analyzed. Intonation peaks and gesture apexes showed parallel behavior with respect to their position within the accented syllable: They occurred at the end of the accented syllable in non-phrase-final position, whereas they occurred well before the end of the accented syllable in phrase-final position. Crucially, the position of intonation peaks and gesture apexes was correlated and was bound by prosodic structure. The results refine the phonological synchronization rule (McNeill, 1992), showing that gesture apexes are anchored in intonation peaks and that gesture and prosodic movements are bound by prosodic phrasing.

  16. Studies in automatic speech recognition and its application in aerospace

    NASA Astrophysics Data System (ADS)

    Taylor, Michael Robinson

    Human communication is characterized in terms of the spectral and temporal dimensions of speech waveforms. Electronic speech recognition strategies based on Dynamic Time Warping and Markov Model algorithms are described and typical digit recognition error rates are tabulated. The application of Direct Voice Input (DVI) as an interface between man and machine is explored within the context of civil and military aerospace programmes. Sources of physical and emotional stress affecting speech production within military high performance aircraft are identified. Experimental results are reported which quantify fundamental frequency and coarse temporal dimensions of male speech as a function of the vibration, linear acceleration and noise levels typical of aerospace environments; preliminary indications of acoustic phonetic variability reported by other researchers are summarized. Connected whole-word pattern recognition error rates are presented for digits spoken under controlled Gz sinusoidal whole-body vibration. Correlations are made between significant increases in recognition error rate and resonance of the abdomen-thorax and head subsystems of the body. The phenomenon of vibrato style speech produced under low frequency whole-body Gz vibration is also examined. Interactive DVI system architectures and avionic data bus integration concepts are outlined together with design procedures for the efficient development of pilot-vehicle command and control protocols.
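
    Dynamic Time Warping, one of the two recognition strategies mentioned, scores a test utterance against stored templates by a nonlinearly aligned distance; a minimal Python sketch with synthetic feature sequences (the 12-dimensional frames stand in for spectral features and are purely illustrative) is:

        import numpy as np

        def dtw_distance(a, b):
            # Classic O(n*m) DTW recurrence over two feature sequences
            # a: (n_frames_a, n_features), b: (n_frames_b, n_features).
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Toy isolated-digit recognition: label the test utterance with the template
        # whose warped distance is smallest.
        rng = np.random.default_rng(1)
        templates = {d: rng.standard_normal((30 + 5 * d, 12)) for d in range(3)}
        test = templates[1] + 0.1 * rng.standard_normal(templates[1].shape)
        print(min(templates, key=lambda d: dtw_distance(test, templates[d])))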

  17. Short- and long-term monitoring of underwater sound levels in the Hudson River (New York, USA).

    PubMed

    Martin, S Bruce; Popper, Arthur N

    2016-04-01

    There is a growing body of research on natural and man-made sounds that create aquatic soundscapes. Less is known about the soundscapes of shallow waters, such as in harbors, rivers, and lakes. Knowledge of soundscapes is needed as a baseline against which to determine the changes in noise levels resulting from human activities. To provide baseline data for the Hudson River at the site of the Tappan Zee Bridge, 12 acoustic data loggers were deployed for a 24-h period at ranges of 0-3000 m from the bridge, and four of the data loggers were re-deployed for three months of continuous recording. Results demonstrate that this region of the river is relatively quiet compared to open ocean conditions and other large river systems. Moreover, the soundscape had temporal and spatial diversity. The temporal patterns of underwater noise from the bridge change with the cadence of human activity. Bridge noise (e.g., road traffic) was only detected within 300 m; farther from the bridge, boating activity increased sound levels during the day, and especially on the weekend. Results also suggest that recording near the river bottom produced lower pseudo-noise levels than previous studies that recorded in the river water column.

  18. Supramodal processing optimizes visual perceptual learning and plasticity.

    PubMed

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex, and it has been suggested that sensory cortices may be supramodal, i.e., capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV), or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics (i.e., coherence) of the visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn, although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First, and common to all three groups, vlPFC showed selectivity to the learned coherence levels, whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second, and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+, possibly mediated by temporal cortices, in the AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004), in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory-invariant representations (here, global coherence levels across sensory modalities). Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Variation analysis of transcriptome changes reveals cochlear genes and their associated functions in cochlear susceptibility to acoustic overstimulation.

    PubMed

    Yang, Shuzhi; Cai, Qunfeng; Bard, Jonathan; Jamison, Jennifer; Wang, Jianmin; Yang, Weiping; Hu, Bo Hua

    2015-12-01

    Individual variation in the susceptibility of the auditory system to acoustic overstimulation has been well-documented at both the functional and structural levels. However, the molecular mechanism responsible for this variation is unclear. The current investigation was designed to examine the variation patterns of cochlear gene expression using RNA-seq data and to identify the genes with expression variation that increased following acoustic trauma. This study revealed that the constitutive expressions of cochlear genes displayed diverse levels of gene-specific variation. These variation patterns were altered by acoustic trauma; approximately one-third of the examined genes displayed marked increases in their expression variation. Bioinformatics analyses revealed that the genes that exhibited increased variation were functionally related to cell death, biomolecule metabolism, and membrane function. In contrast, the stable genes were primarily related to basic cellular processes, including protein and macromolecular syntheses and transport. There was no functional overlap between the stable and variable genes. Importantly, we demonstrated that glutamate metabolism is related to the variation in the functional response of the cochlea to acoustic overstimulation. Taken together, the results indicate that our analyses of the individual variations in transcriptome changes of cochlear genes provide important information for the identification of genes that potentially contribute to the generation of individual variation in cochlear responses to acoustic overstimulation. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Dynamical Properties of Transient Spatio-Temporal Patterns in Bacterial Colony of Proteus mirabilis

    NASA Astrophysics Data System (ADS)

    Watanabe, Kazuhiko; Wakita, Jun-ichi; Itoh, Hiroto; Shimada, Hirotoshi; Kurosu, Sayuri; Ikeda, Takemasa; Yamazaki, Yoshihiro; Matsuyama, Tohey; Matsushita, Mitsugu

    2002-02-01

    Spatio-temporal patterns that emerge inside colonies of the bacterial species Proteus mirabilis on the surface of a nutrient-rich semisolid agar medium have been investigated. We observed various patterns composed of the following basic types: propagating stripe, propagating stripe with fixed dislocation, expanding and shrinking target, and rotating spiral. The remarkable point is that the pattern changes immediately when we alter the position for observation, but it returns to the original if we restore the observing position within a few minutes. We further investigated mesoscopic and microscopic properties of the spatio-temporal patterns. It turned out that whenever the spatio-temporal patterns are observed in a colony, the areas are composed of two superimposed monolayers of elongated bacterial cells. In each area the cells are aligned almost parallel with each other, like a two-dimensional nematic liquid crystal, and move collectively and independently of the other layer. The observed spatio-temporal patterns can be explained as a moiré effect.
