TABLE D - WMO AND LOCAL (NCEP) DESCRIPTORS AS WELL AS THOSE AWAITING
F X   Category                                                     Notes
3 04  Meteorological sequences common to satellite observations   None
3 05  Meteorological or hydrological sequences                    None
3 09  Vertical sounding sequences (conventional data)             None
3 10  Vertical sounding sequences (satellite data)                None
3 12  Single level report sequences (satellite data)              None
3 13  Sequences common to image data                              None
3 14  Reserved                                                    None
3 15  Oceanographic report sequences                              None
What is that mysterious booming sound?
Hill, David P.
2011-01-01
The residents of coastal North Carolina are occasionally treated to sequences of booming sounds of unknown origin. The sounds are often energetic enough to rattle windows and doors. A recent sequence occurred in early January 2011 during clear weather with no evidence of local thunderstorms. Queries by a local reporter (Colin Hackman of the NBC affiliate WECT in Wilmington, North Carolina, personal communication 2011) seemed to eliminate common anthropogenic sources such as sonic booms or quarry blasts. So the commonly asked question, “What's making these booming sounds?” remained (and remains) unanswered.
Video-assisted segmentation of speech and audio track
NASA Astrophysics Data System (ADS)
Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.
1999-08-01
Video database research is commonly concerned with the storage and retrieval of visual information, involving sequence segmentation, shot representation, and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing, and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to partition the multimedia material into semantically significant segments.
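A minimal sketch of the kind of audio-based boundary detection this abstract describes, assuming a mono PCM signal and a simple spectral-change statistic; the feature choice, window lengths, and threshold are illustrative, not the authors' algorithm.

```python
import numpy as np

def audio_boundaries(x, sr, frame=2048, hop=512, win_s=1.0, thresh=2.0):
    """Candidate segment boundaries (seconds) from spectral change in mono x."""
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i*hop:i*hop+frame] * np.hanning(frame)
                       for i in range(n)])
    spec = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))  # log spectra
    w = max(1, int(win_s * sr / hop))          # comparison window, in frames
    dist = np.array([np.linalg.norm(spec[t-w:t].mean(0) - spec[t:t+w].mean(0))
                     for t in range(w, n - w)])
    z = (dist - dist.mean()) / dist.std()      # standardize the novelty curve
    return [(i + w) * hop / sr                 # local peaks above threshold
            for i in range(1, len(z) - 1)
            if z[i] > thresh and z[i] >= z[i-1] and z[i] >= z[i+1]]

print(audio_boundaries(np.random.randn(10 * 16000), sr=16000)[:5])
```

In a combined system, boundaries of this kind would be fused with video shot-change candidates rather than used alone.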
Reprint of: Initial uncertainty impacts statistical learning in sound sequence processing.
Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel
2018-05-18
This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and a longer-term pattern. The local pattern was defined by a regular repeating pure tone occasionally interrupted by a rare deviating tone (p = 0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focusing attention on a movie with subtitles. Auditory-evoked potentials revealed long-lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work revealing that probabilistic information is not faithfully represented in these early evoked potentials, and instead expose that predictability (or, conversely, uncertainty) may trigger value-based learning modulations even in task-irrelevant incidental learning.
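A small sketch of the two-timescale stimulus structure described above, assuming two interchangeable tones and an illustrative block length; the roles of common and rare tone swap between blocks.

```python
import random

def make_sequence(n_blocks=4, block=480, p_rare=0.125, seed=1):
    random.seed(seed)
    tones = ("A", "B")   # e.g., 30 ms vs 60 ms tones, or 1000 Hz vs 1500 Hz
    seq = []
    for b in range(n_blocks):
        common, rare = tones if b % 2 == 0 else tones[::-1]  # swap roles
        seq += [rare if random.random() < p_rare else common
                for _ in range(block)]
    return seq

seq = make_sequence()
print(seq[:16], round(seq.count("B") / len(seq), 2))  # rare locally, ~0.5 overall
```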
Brain activation during anticipation of sound sequences.
Leaver, Amber M; Van Lare, Jennifer; Zielinski, Brandon; Halpern, Andrea R; Rauschecker, Josef P
2009-02-25
Music consists of sound sequences that require integration over time. As we become familiar with music, associations between notes, melodies, and entire symphonic movements become stronger and more complex. These associations can become so tight that, for example, hearing the end of one album track can elicit a robust image of the upcoming track while anticipating it in total silence. Here, we study this predictive "anticipatory imagery" at various stages throughout learning and investigate activity changes in corresponding neural structures using functional magnetic resonance imaging. Anticipatory imagery (in silence) for highly familiar naturalistic music was accompanied by pronounced activity in rostral prefrontal cortex (PFC) and premotor areas. Examining changes in the neural bases of anticipatory imagery during two stages of learning conditional associations between simple melodies, however, demonstrates the importance of fronto-striatal connections, consistent with a role of the basal ganglia in "training" frontal cortex (Pasupathy and Miller, 2005). Another striking change in neural resources during learning was a shift from caudal PFC earlier in learning to rostral PFC later in learning. Our findings regarding musical anticipation and sound sequence learning are highly compatible with studies of motor sequence learning, suggesting common predictive mechanisms in both domains.
What is a melody? On the relationship between pitch and brightness of timbre.
Cousineau, Marion; Carcagno, Samuele; Demany, Laurent; Pressnitzer, Daniel
2013-01-01
Previous studies showed that the perceptual processing of sound sequences is more efficient when the sounds vary in pitch than when they vary in loudness. We show here that sequences of sounds varying in brightness of timbre are processed with the same efficiency as pitch sequences. The sounds used consisted of two simultaneous pure tones one octave apart, and the listeners' task was to make same/different judgments on pairs of sequences varying in length (one, two, or four sounds). In one condition, brightness of timbre was varied within the sequences by changing the relative level of the two pure tones. In other conditions, pitch was varied by changing fundamental frequency, or loudness was varied by changing the overall level. In all conditions, only two possible sounds could be used in a given sequence, and these two sounds were equally discriminable. When sequence length increased from one to four, discrimination performance decreased substantially for loudness sequences, but to a smaller extent for brightness sequences and pitch sequences. In the latter two conditions, sequence length had a similar effect on performance. These results suggest that the processes dedicated to pitch and brightness analysis, when probed with a sequence-discrimination task, share unexpected similarities.
Derrick, Donald; Stavness, Ian; Gick, Bryan
2015-03-01
The assumption that units of speech production bear a one-to-one relationship to speech motor actions pervades otherwise widely varying theories of speech motor behavior. This speech production and simulation study demonstrates that commonly occurring flap sequences may violate this assumption. In the word "Saturday," a sequence of three sounds may be produced using a single, cyclic motor action. Under this view, the initial upward tongue tip motion, starting with the first vowel and moving to contact the hard palate on the way to a retroflex position, is under active muscular control, while the downward movement of the tongue tip, including the second contact with the hard palate, results from gravity and elasticity during tongue muscle relaxation. This sequence is reproduced using a three-dimensional computer simulation of human vocal tract biomechanics and differs greatly from other observed sequences for the same word, which employ multiple targeted speech motor actions. This outcome suggests that a goal of a speaker is to produce an entire sequence in a biomechanically efficient way at the expense of maintaining parity within the individual parts of the sequence.
ERIC Educational Resources Information Center
Van Strien, Jan W.
2004-01-01
To investigate whether concurrent nonverbal sound sequences would affect visual-hemifield lexical processing, lexical-decision performance of 24 strongly right-handed students (12 men, 12 women) was measured in three conditions: baseline, concurrent neutral sound sequence, and concurrent emotional sound sequence. With the neutral sequence,…
Kuroda, Tsuyoshi; Tomimatsu, Erika; Grondin, Simon; Miyazaki, Makoto
2016-11-01
We investigated how perceived duration of empty time intervals would be modulated by the length of sounds marking those intervals. Three sounds were successively presented in Experiment 1. Each sound was short (S) or long (L), and the temporal position of the middle sound's onset was varied. The lengthening of each sound resulted in delayed perception of the onset; thus, the middle sound's onset had to be presented earlier in the SLS than in the LSL sequence so that participants perceived the three sounds as presented at equal interonset intervals. In Experiment 2, a short sound and a long sound were alternated repeatedly, and the relative duration of the SL interval to the LS interval was varied. This repeated sequence was perceived as consisting of equal interonset intervals when the onsets of all sounds were aligned at physically equal intervals. If the same onset delay as in the preceding experiment had occurred, participants should have perceived equality between the interonset intervals in the repeated sequence when the SL interval was physically shortened relative to the LS interval. The effects of sound length seemed to be canceled out when the presentation of intervals was repeated. Finally, the perceived duration of the interonset intervals in the repeated sequence was not influenced by whether the participant's native language was French or Japanese, or by how the repeated sequence was perceptually segmented into rhythmic groups.
Song copying by humpback whales: themes and variations.
Mercado, Eduardo; Herman, Louis M; Pack, Adam A
2005-04-01
Male humpback whales (Megaptera novaeangliae) produce long, structured sequences of sound underwater, commonly called "songs." Humpbacks progressively modify their songs over time in ways that suggest that individuals are copying song elements that they hear being used by other singers. Little is known about the factors that determine how whales learn from their auditory experiences. Song learning in birds is better understood and appears to be constrained by stable core attributes such as species-specific sound repertoires and song syntax. To clarify whether similar constraints exist for song learning by humpbacks, we analyzed changes over 14 years in the sounds used by humpback whales singing in Hawaiian waters. We found that although the properties of individual sounds within songs are quite variable over time, the overall distribution of certain acoustic features within the repertoire appears to be stable. In particular, our findings suggest that species-specific constraints on temporal features of song sounds determine song form, whereas spectral variability allows whales to flexibly adapt song elements.
Ultrasound biofeedback treatment for persisting childhood apraxia of speech.
Preston, Jonathan L; Brick, Nickole; Landi, Nicole
2013-11-01
The purpose of this study was to evaluate the efficacy of a treatment program that includes ultrasound biofeedback for children with persisting speech sound errors associated with childhood apraxia of speech (CAS). Six children ages 9-15 years participated in a multiple baseline experiment for 18 treatment sessions during which treatment focused on producing sequences involving lingual sounds. Children were cued to modify their tongue movements using visual feedback from real-time ultrasound images. Probe data were collected before, during, and after treatment to assess word-level accuracy for treated and untreated sound sequences. As participants reached preestablished performance criteria, new sequences were introduced into treatment. All participants met the performance criterion (80% accuracy for 2 consecutive sessions) on at least 2 treated sound sequences. Across the 6 participants, performance criterion was met for 23 of 31 treated sequences in an average of 5 sessions. Some participants showed no improvement in untreated sequences, whereas others showed generalization to untreated sequences that were phonetically similar to the treated sequences. Most gains were maintained 2 months after the end of treatment. The percentage of phonemes correct increased significantly from pretreatment to the 2-month follow-up. A treatment program including ultrasound biofeedback is a viable option for improving speech sound accuracy in children with persisting speech sound errors associated with CAS.
Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo
2010-01-01
The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement was not significantly different between different band-eliminated noise conditions. Thus the present study confirms that by constant sound signal sequencing under nonattentive listening the neural activity in human auditory cortex can be enhanced, but not sharpened. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.
English Orthography--Its Graphical Structure and Its Relation to Sound.
ERIC Educational Resources Information Center
Venezky, Richard L.
Sets of orthographic patterns based on an analysis of the spellings and pronunciations of the 20,000 most common English words are organized and presented. Two basic sets of patterns are discussed. The first pertains to the internal structure of the orthography--the classes of letters (graphemes) and the allowable sequences of these letters…
Tracing the neural basis of auditory entrainment.
Lehmann, Alexandre; Arias, Diana Jimena; Schönwiesner, Marc
2016-11-19
Neurons in the auditory cortex synchronize their responses to temporal regularities in sound input. This coupling or "entrainment" is thought to facilitate beat extraction and rhythm perception in temporally structured sounds, such as music. As a consequence of such entrainment, the auditory cortex responds to an omitted (silent) sound in a regular sequence. Although previous studies suggest that the auditory brainstem frequency-following response (FFR) exhibits some of the beat-related effects found in the cortex, it is unknown whether omissions of sounds evoke a brainstem response. We simultaneously recorded cortical and brainstem responses to isochronous and irregular sequences of the consonant-vowel syllable /da/ that contained sporadic omissions. The auditory cortex responded strongly to omissions, but we found no evidence of evoked responses to omitted stimuli from the auditory brainstem. However, auditory brainstem responses in the isochronous sound sequence were more consistent across trials than in the irregular sequence. These results indicate that the auditory brainstem faithfully encodes short-term acoustic properties of a stimulus and is sensitive to sequence regularity, but does not entrain to isochronous sequences sufficiently to generate overt omission responses, even for sequences that evoke such responses in the cortex. These findings add to our understanding of the processing of sound regularities, which is an important aspect of human cognitive abilities like rhythm, music and speech perception.
Improving the Reliability of Tinnitus Screening in Laboratory Animals.
Jones, Aikeen; May, Bradford J
2017-02-01
Behavioral screening remains a contentious issue for animal studies of tinnitus. Most paradigms base a positive tinnitus test on an animal's natural tendency to respond to the "sound" of tinnitus as if it were an actual sound. As a result, animals with tinnitus are expected to display sound-conditioned behaviors when no sound is present or to miss gaps in background sounds because tinnitus "fills in the gap." Reliable confirmation of the behavioral indications of tinnitus can be problematic because the reinforcement contingencies of conventional discrimination tasks break down an animal's tendency to group tinnitus with sound. When responses in silence are rewarded, animals respond in silence regardless of their tinnitus status. When responses in silence are punished, animals stop responding. This study introduces stimulus classification as an alternative approach to tinnitus screening. Classification procedures train animals to respond to the common perceptual features that define a group of sounds (e.g., high pitch or narrow bandwidth). Our procedure trains animals to drink when they hear tinnitus and to suppress drinking when they hear other sounds. Animals with tinnitus are revealed by their tendency to drink in the presence of unreinforced probe sounds that share the perceptual features of the tinnitus classification. The advantages of this approach are illustrated by taking laboratory rats through a testing sequence that includes classification training, the experimental induction of tinnitus, and postinduction screening. Behavioral indications of tinnitus are interpreted and then verified by simulating a known tinnitus percept with objective sounds.
Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C
2017-09-01
In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.
Songbirds and humans apply different strategies in a sound sequence discrimination task.
Seki, Yoshimasa; Suzuki, Kenta; Osawa, Ayumi M; Okanoya, Kazuo
2013-01-01
The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies, possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an "AAB" or "ABB" rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans have a capacity to extract abstract rules from sound sequences that is distinct from that of non-human animals.
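A tiny sketch of the stimulus logic described above, assuming the rule's A and B positions are filled with male (M) and female (F) call elements for training, and with the roles reversed for the probe stimuli.

```python
def rule_sequences(rule="AAB"):
    train = rule.replace("A", "M").replace("B", "F")  # training: M fills A
    probe = rule.replace("A", "F").replace("B", "M")  # probe: roles reversed
    return train, probe

print(rule_sequences("AAB"), rule_sequences("ABB"))  # ('MMF', 'FFM') ('MFF', 'FMM')
```

A rule-learner should transfer from MMF to FFM; a pattern-memorizer should not, which is the contrast the study exploits.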
The description of cough sounds by healthcare professionals
Smith, Jaclyn A; Ashurst, H Louise; Jack, Sandy; Woodcock, Ashley A; Earis, John E
2006-01-01
Background: Little is known of the language healthcare professionals use to describe cough sounds. We aimed to examine how they describe cough sounds and to assess whether these descriptions suggested they appreciate the basic sound qualities (as assessed by acoustic analysis) and the underlying diagnosis of the patient coughing. Methods: 53 health professionals from two large respiratory tertiary referral centres were recruited; 22 doctors and 31 staff from professions allied to medicine. Participants listened to 9 sequences of spontaneous cough sounds from common respiratory diseases. For each cough they selected patient gender, the most appropriate descriptors and a diagnosis. Cluster analysis was performed to assess which cough sounds attracted similar descriptions. Results: Gender was correctly identified in 93% of cases. The presence or absence of mucus was correctly identified in 76.1% of cases and wheeze in 39.3%. However, identification of the clinical diagnosis from cough was poor, at 34.0%. Cluster analysis showed that coughs with the same acoustic properties, rather than the same diagnoses, attracted the same descriptions. Conclusion: These results suggest that healthcare professionals can recognise some of the qualities of cough sounds but are poor at making diagnoses from them. It remains to be seen whether in the future cough sound acoustics will provide useful clinical information and whether their study will lead to the development of useful new outcome measures in cough monitoring. PMID:16436200
Speech processing using maximum likelihood continuity mapping
Hogden, John E.
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. This speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
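A toy sketch of the continuity-mapping idea, not the patented algorithm: assume each sound type has a Gaussian likelihood over a one-dimensional pseudo-articulator position, and solve for the position sequence that maximizes likelihood under a quadratic smoothness penalty, which reduces to a linear system. The sound-to-mean mapping here is hypothetical.

```python
import numpy as np

def smooth_path(sounds, mu, sig=1.0, lam=5.0):
    """Most likely smooth 1-D pseudo-articulator path through the Gaussians."""
    m = np.array([mu[s] for s in sounds], dtype=float)   # per-sound means
    T = len(m)
    L = np.zeros((T, T))                                 # path-graph Laplacian
    for t in range(T - 1):
        L[t, t] += 1.0; L[t + 1, t + 1] += 1.0
        L[t, t + 1] -= 1.0; L[t + 1, t] -= 1.0
    # Minimize sum (x_t - m_t)^2 / (2 sig^2) + lam * sum (x_{t+1} - x_t)^2:
    A = np.eye(T) / sig**2 + 2.0 * lam * L
    return np.linalg.solve(A, m / sig**2)

mu = {"b": -1.0, "a": 0.0, "d": 1.0}   # hypothetical sound -> position means
print(np.round(smooth_path("bad", mu), 2))
```

Larger `lam` trades likelihood for smoothness, pulling the path toward a straight trajectory.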
NASA Astrophysics Data System (ADS)
Leek, Marjorie R.; Neff, Donna L.
2004-05-01
Charles Watson's studies of informational masking and the effects of stimulus uncertainty on auditory perception have had a profound impact on auditory research. His series of seminal studies in the mid-1970s on the detection and discrimination of target sounds in sequences of brief tones with uncertain properties addresses the fundamental problem of extracting target signals from background sounds. As conceptualized by Chuck and others, informational masking results from more central (even "cognitive") processes as a consequence of stimulus uncertainty, and can be distinguished from "energetic" masking, which primarily arises from the auditory periphery. Informational masking techniques are now in common use to study the detection, discrimination, and recognition of complex sounds, the capacity of auditory memory and aspects of auditory selective attention, the often large effects of training to reduce detrimental effects of uncertainty, and the perceptual segregation of target sounds from irrelevant context sounds. This paper will present an overview of past and current research on informational masking, and show how Chuck's work has been expanded in several directions by other scientists to include the effects of informational masking on speech perception and on perception by listeners with hearing impairment. [Work supported by NIDCD.]
Synchronized tapping facilitates learning sound sequences as indexed by the P300.
Kamiyama, Keiko S; Okanoya, Kazuo
2014-01-01
The purpose of the present study was to determine whether and how single finger tapping in synchrony with sound sequences contributed to their auditory processing. The participants learned two unfamiliar sound sequences via different methods. In the tapping condition, they learned an auditory sequence while they tapped in synchrony with each sound onset. In the no-tapping condition, they learned another sequence while they kept pressing a key until the sequence ended. After these learning sessions, we presented the two melodies again and recorded event-related potentials (ERPs). During the ERP recordings, 10% of the tones within each melody deviated from the original tones. An analysis of the grand average ERPs showed that deviant stimuli elicited a significant P300 in the tapping but not in the no-tapping condition. In addition, the significance of the P300 effect in the tapping condition increased as the participants showed highly synchronized tapping behavior during the learning sessions. These results indicated that single finger tapping promoted the conscious detection and evaluation of deviants within the learned sequences. The effect was related to individuals' musical ability to coordinate their finger movements along with external auditory events.
CNTNAP2 Is Significantly Associated With Speech Sound Disorder in the Chinese Han Population.
Zhao, Yun-Jing; Wang, Yue-Ping; Yang, Wen-Zhu; Sun, Hong-Wei; Ma, Hong-Wei; Zhao, Ya-Ru
2015-11-01
Speech sound disorder is the most common communication disorder. Some investigations support the possibility that the CNTNAP2 gene might be involved in the pathogenesis of speech-related diseases. To investigate single-nucleotide polymorphisms in the CNTNAP2 gene, 300 unrelated speech sound disorder patients and 200 normal controls were included in the study. Five single-nucleotide polymorphisms were amplified and directly sequenced. Significant differences were found in the genotype (P = .0003) and allele (P = .0056) frequencies of rs2538976 between patients and controls. The excess frequency of the A allele in the patient group remained significant after Bonferroni correction (P = .0280). A significant haplotype association with rs2710102T/rs17236239A/rs2538976A/rs2710117A (P = 4.10e-006) was identified. A neighboring single-nucleotide polymorphism, rs10608123, was found in complete linkage disequilibrium with rs2538976, and the genotypes exactly corresponded to each other. The authors propose that these CNTNAP2 variants increase the susceptibility to speech sound disorder. The single-nucleotide polymorphisms rs10608123 and rs2538976 may merge into one single-nucleotide polymorphism.
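A sketch of the core association test reported above: a 2x2 chi-square on allele counts with Bonferroni correction across the five SNPs tested. The allele counts below are invented for illustration and are not the study's data.

```python
from scipy.stats import chi2_contingency

def allele_test(case_a, case_b, ctrl_a, ctrl_b, n_tests=5):
    """2x2 chi-square on allele counts; Bonferroni over n_tests SNPs."""
    chi2, p, dof, expected = chi2_contingency([[case_a, case_b],
                                               [ctrl_a, ctrl_b]])
    return p, min(1.0, p * n_tests)

p, p_bonf = allele_test(case_a=260, case_b=340, ctrl_a=120, ctrl_b=280)
print(f"raw p = {p:.4f}, Bonferroni-corrected p = {p_bonf:.4f}")
```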
Loudon, R G
1987-06-01
Accurate diagnosis is essential for effective treatment. After history-taking, the physical examination is second in importance in assessing a pulmonary patient. The time-honored sequence of inspection, palpation, percussion, and auscultation is appropriate. Diagnostic tests are becoming more complex, more expensive, and more inclined to separate the patient and physician. The stethoscope is still the most commonly used diagnostic medical instrument, but it is not always used to best advantage. It is familiar, harmless, portable, and inexpensive. Its appropriate use improves medical practice and reduces costs. Improvements in sound recording and analysis techniques have spurred a renewed interest in lung sounds and their meaning. This is likely to lead to better understanding of what we hear, and perhaps to the development of new noninvasive diagnostic and monitoring techniques.
Not all carp are created equal: Impacts of broadband sound on common carp swimming behavior
Murchy, Kelsie; Vetter, Brooke J.; Brey, Marybeth; Amberg, Jon J.; Gaikowski, Mark; Mensinger, Allen F.
2016-01-01
Bighead carp (Hypophthalmichthys nobilis), silver carp (H. molitrix) (hereafter: bigheaded carps), and common carp (Cyprinus carpio) are invasive fish causing negative impacts throughout their North American range. To control their movements, non-physical barriers are being developed. Broadband sound (0.06 to 10 kHz) has shown potential as an acoustic deterrent for bigheaded carps, but the response of common carp to broadband sound has not been evaluated. Since common carp are ostariophysians, possessing Weberian ossicles similar to bigheaded carps, it is possible that sound can be used as an acoustical deterrent for all three species. Behavioral responses to a broadband sound were evaluated for common carp in an outdoor concrete pond. Common carp responded to the broadband sound a median of 3.0 (1st Q: 1.0, 3rd Q: 6.0) consecutive times, fewer than silver carp and bighead carp responding to the same stimulus. The current study shows that common carp demonstrate an inconsistent negative phonotaxis response to a broadband sound, and seem to habituate to the sound quickly.
How Modality Specific Is the Iambic-Trochaic Law? Evidence from Vision
ERIC Educational Resources Information Center
Pena, Marcela; Bion, Ricardo A. H.; Nespor, Marina
2011-01-01
The iambic-trochaic law has been proposed to account for the grouping of auditory stimuli: Sequences of sounds that differ only in duration are grouped as iambs (i.e., the most prominent element marks the end of a sequence of sounds), and sequences that differ only in pitch or intensity are grouped as trochees (i.e., the most prominent element…
Snyder, Joel S; Weintraub, David M
2013-07-01
An important question is the extent to which declines in memory over time are due to passive loss or active interference from other stimuli. The purpose of the present study was to determine the extent to which implicit memory effects in the perceptual organization of sound sequences are subject to loss and interference. Toward this aim, we took advantage of two recently discovered context effects in the perceptual judgments of sound patterns, one that depends on stimulus features of previous sounds and one that depends on the previous perceptual organization of these sounds. The experiments measured how listeners' perceptual organization of a tone sequence (test) was influenced by the frequency separation, or the perceptual organization, of the two preceding sequences (context1 and context2). The results demonstrated clear evidence for loss of context effects over time but little evidence for interference. However, they also revealed that context effects can be surprisingly persistent. The robust effects of loss, followed by persistence, were similar for the two types of context effects. We discuss whether the same auditory memories might contain information about basic stimulus features of sounds (i.e., frequency separation), as well as the perceptual organization of these sounds.
Listen up! Processing of intensity change differs for vocal and nonvocal sounds.
Schirmer, Annett; Simpson, Elizabeth; Escoffier, Nicolas
2007-10-24
Changes in the intensity of both vocal and nonvocal sounds can be emotionally relevant. However, as only vocal sounds directly reflect communicative intent, intensity change of vocal but not nonvocal sounds is socially relevant. Here we investigated whether a change in sound intensity is processed differently depending on its social relevance. To this end, participants listened passively to a sequence of vocal or nonvocal sounds that contained rare deviants which differed from standards in sound intensity. Concurrently recorded event-related potentials (ERPs) revealed a mismatch negativity (MMN) and P300 effect for intensity change. Direction of intensity change was of little importance for vocal stimulus sequences, which recruited enhanced sensory and attentional resources for both loud and soft deviants. In contrast, intensity change in nonvocal sequences recruited more sensory and attentional resources for loud as compared to soft deviants. This was reflected in markedly larger MMN/P300 amplitudes and shorter P300 latencies for the loud as compared to soft nonvocal deviants. Furthermore, while the processing pattern observed for nonvocal sounds was largely comparable between men and women, sex differences for vocal sounds suggest that women were more sensitive to their social relevance. These findings extend previous evidence of sex differences in vocal processing and add to reports of voice specific processing mechanisms by demonstrating that simple acoustic change recruits more processing resources if it is socially relevant.
Earthquake forecasting during the complex Amatrice-Norcia seismic sequence
Marzocchi, Warner; Taroni, Matteo; Falcone, Giuseppe
2017-01-01
Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016–2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences. PMID:28924610
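For reference, the Omori power law mentioned above is usually applied in its modified (Omori-Utsu) form; a small sketch follows, with illustrative parameter values for K, c, and p that are not taken from this study.

```python
def omori_rate(t, K=100.0, c=0.05, p=1.1):
    """Aftershock rate (events/day) t days after the mainshock."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=100.0, c=0.05, p=1.1):
    """Expected aftershocks between days t1 and t2 (valid for p != 1)."""
    F = lambda t: K * (t + c) ** (1.0 - p) / (1.0 - p)   # antiderivative
    return F(t2) - F(t1)

print(round(omori_rate(1.0), 1), round(expected_count(0.0, 7.0), 1))
```

Sequences like Amatrice-Norcia are challenging precisely because their seismicity rate departs from this single smooth decay.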
Exploring the role of auditory analysis in atypical compared to typical language development.
Grube, Manon; Cooper, Freya E; Kumar, Sukhbinder; Kelly, Tom; Griffiths, Timothy D
2014-02-01
The relationship between auditory processing and language skills has been debated for decades. Previous findings have been inconsistent, both in typically developing and impaired subjects, including those with dyslexia or specific language impairment. Whether correlations between auditory and language skills are consistent between different populations has hardly been addressed at all. The present work presents an exploratory approach of testing for patterns of correlations in a range of measures of auditory processing. In a recent study, we reported findings from a large cohort of eleven-year-olds on a range of auditory measures, and the data supported a specific role for the processing of short sequences in pitch and time in typical language development. Here we tested whether a group of individuals with dyslexic traits (DT group; n = 28) from the same year group would show the same pattern of correlations between auditory and language skills as the typically developing group (TD group; n = 173). Regarding the raw scores, the DT group showed significantly poorer performance on the language but not the auditory measures, including measures of pitch, time and rhythm, and timbre (modulation). In terms of correlations, there was a tendency toward decreased correlations between short-sequence processing and language skills, contrasted by a significant increase in correlation for basic, single-sound processing, in particular in the domain of modulation. The data support the notion that the fundamental relationship between auditory and language skills might differ in atypical compared to typical language development, with the implication that merging data or drawing inferences between populations might be problematic. Further examination of the relationship between both basic sound feature analysis and music-like sound analysis and language skills in impaired populations might allow the development of appropriate training strategies. These might include types of musical training to augment language skills via their common bases in sound sequence analysis.
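A sketch of the kind of between-group comparison this analysis implies: testing whether an auditory-language correlation differs between the TD and DT groups via Fisher's r-to-z transform. The group sizes follow the abstract, but the correlation values are invented.

```python
import math
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    """Two-sided test for a difference between independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)            # Fisher r-to-z
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2.0 * norm.sf(abs(z))

z, p = compare_correlations(r1=0.35, n1=173, r2=0.05, n2=28)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note the asymmetric group sizes: with n = 28 in the DT group, only fairly large correlation differences reach significance, which is relevant when interpreting "tendencies" in such data.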
Dynamics of snoring sounds and its connection with obstructive sleep apnea
NASA Astrophysics Data System (ADS)
Alencar, Adriano M.; da Silva, Diego Greatti Vaz; Oliveira, Carolina Beatriz; Vieira, André P.; Moriya, Henrique T.; Lorenzi-Filho, Geraldo
2013-01-01
Snoring is extremely common in the general population and when irregular may indicate the presence of obstructive sleep apnea. We analyze the overnight sequence of wave packets - the snore sound - recorded during full polysomnography in patients referred to the Sleep Laboratory due to suspected obstructive sleep apnea. We hypothesize that irregular snore, with duration in the range between 10 and 100 s, correlates with respiratory obstructive events. We find that the number of irregular snores - easily accessible, and quantified by what we call the snore time interval index (STII) - is in good agreement with the well-known apnea-hypopnea index, which expresses the severity of obstructive sleep apnea and is extracted only from polysomnography. In addition, the Hurst analysis of the snore sound itself, which calculates the fluctuations in the signal as a function of time interval, is used to build a classifier that is able to distinguish between patients with no or mild apnea and patients with moderate or severe apnea.
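A minimal sketch of a rescaled-range (R/S) Hurst-exponent estimate of the kind the abstract invokes, assuming a sampled snore-sound signal; the window sizes and white-noise test input are illustrative, and this is not the authors' exact pipeline.

```python
import numpy as np

def hurst_rs(x, min_win=16):
    """Rescaled-range (R/S) estimate of the Hurst exponent of series x."""
    x = np.asarray(x, dtype=float)
    sizes = [int(n) for n in 2 ** np.arange(int(np.log2(min_win)),
                                            int(np.log2(len(x))))]
    rs = []
    for n in sizes:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        d = chunks - chunks.mean(axis=1, keepdims=True)
        z = np.cumsum(d, axis=1)                 # cumulative deviations
        R = z.max(axis=1) - z.min(axis=1)        # range per chunk
        S = chunks.std(axis=1)                   # std per chunk
        rs.append(np.mean(R[S > 0] / S[S > 0]))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope                                 # ~0.5 for white noise

print(round(hurst_rs(np.random.randn(4096)), 2))
```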
Cognitive Control of Involuntary Distraction by Deviant Sounds
ERIC Educational Resources Information Center
Parmentier, Fabrice B. R.; Hebrero, Maria
2013-01-01
It is well established that a task-irrelevant sound (deviant sound) departing from an otherwise repetitive sequence of sounds (standard sounds) elicits an involuntary capture of attention and orienting response toward the deviant stimulus, resulting in the lengthening of response times in an ongoing task. Some have argued that this type of…
Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian
2016-01-01
Objective: Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method: The Familiar Environmental Sound Test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results: FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions: Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791
Implicit sequence learning in deaf children with cochlear implants.
Conway, Christopher M; Pisoni, David B; Anaya, Esperanza M; Karpicke, Jennifer; Henning, Shirley C
2011-01-01
Deaf children with cochlear implants (CIs) represent an intriguing opportunity to study neurocognitive plasticity and reorganization when sound is introduced following a period of auditory deprivation early in development. Although it is common to consider deafness as affecting hearing alone, it may be the case that auditory deprivation leads to more global changes in neurocognitive function. In this paper, we investigate implicit sequence learning abilities in deaf children with CIs using a novel task that measured learning through improvement to immediate serial recall for statistically consistent visual sequences. The results demonstrated two key findings. First, the deaf children with CIs showed disturbances in their visual sequence learning abilities relative to the typically developing normal-hearing children. Second, sequence learning was significantly correlated with a standardized measure of language outcome in the CI children. These findings suggest that a period of auditory deprivation has secondary effects related to general sequencing deficits, and that disturbances in sequence learning may at least partially explain why some deaf children still struggle with language following cochlear implantation.
The Effects of Phonetic Similarity and List Length on Children's Sound Categorization Performance.
ERIC Educational Resources Information Center
Snowling, Margaret J.; And Others
1994-01-01
Examined the phonological analysis and verbal working memory components of the sound categorization task and their relationships to reading skill differences. Children were tested on sound categorization by having them identify odd words in sequences. Sound categorization performance was sensitive to individual differences in speech perception…
Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.
Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi
2017-07-01
We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data.
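For orientation, a minimal sketch of the underlying self-organizing map update on sound-event feature vectors; the paper's method additionally kernelizes the distance and incorporates sequence information, which this plain-SOM sketch omits. The map size, learning schedule, and 13-dimensional features are illustrative.

```python
import numpy as np

def train_som(X, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Online SOM: returns a (rows*cols, dim) array of unit prototypes."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(rows * cols, X.shape[1]))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(X)
    for step in range(n_steps):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
        frac = step / n_steps
        lr = lr0 * (1.0 - frac)                        # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5            # shrinking neighborhood
        g = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2.0 * sigma ** 2))
        W += lr * g[:, None] * (x - W)                 # neighborhood update
    return W

W = train_som(np.random.randn(200, 13))   # e.g., 13-dim spectral features
```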
Auditory sequence analysis and phonological skill
Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.
2012-01-01
This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739
Soma, Masayo; Mori, Chihiro
2015-01-01
Music and dance are two remarkable human characteristics that are closely related. Communication through integrated vocal and motional signals is also common in the courtship displays of birds. The contribution of songbird studies to our understanding of vocal learning has already shed some light on the cognitive underpinnings of musical ability. Moreover, recent pioneering research has begun to show how animals can synchronize their behaviors with external stimuli, like metronome beats. However, few studies have applied such perspectives to unraveling how animals can integrate multimodal communicative signals that have natural functions. Additionally, studies have rarely asked how well these behaviors are learned. With this in mind, here we cast a spotlight on an unusual animal behavior: non-vocal sound production associated with singing in the Java sparrow (Lonchura oryzivora), a songbird. We show that male Java sparrows coordinate their bill-click sounds with the syntax of their song-note sequences, similar to percussionists. Analysis showed that they produced clicks frequently toward the beginning of songs and before/after specific song notes. We also show that bill-clicking patterns are similar between social fathers and their sons, suggesting that these behaviors might be learned from models or linked to learning-based vocalizations. Individuals untutored by conspecifics also exhibited stereotypical bill-clicking patterns in relation to song-note sequence, indicating that while the production of bill clicking itself is intrinsic, its syncopation appears to develop with songs. This paints an intriguing picture in which non-vocal sounds are integrated with vocal courtship signals in a songbird, a model that we expect will contribute to the further understanding of multimodal communication. PMID:25992841
Musical training increases functional connectivity, but does not enhance mu suppression.
Wu, C Carolyn; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J
2017-09-01
Musical training provides an ideal platform for investigating action representation for sound. Learning to play an instrument requires integration of sensory and motor perception-action processes. Functional neuroimaging studies have indicated that listening to trained music can result in the activity in premotor areas, even after a short period of training. These studies suggest that action representation systems are heavily dependent on specific sensorimotor experience. However, others suggest that because humans naturally move to music, sensorimotor training is not necessary and there is a more general action representation for music. We previously demonstrated that EEG mu suppression, commonly implemented to demonstrate mirror-neuron-like action representation while observing movements, can also index action representations for sounds in pianists. The current study extends these findings to a group of non-musicians who learned to play randomised sequences on a piano, in order to acquire specific sound-action mappings for the five fingers of their right hand. We investigated training-related changes in neural dynamics as indexed by mu suppression and task-related coherence measures. To test the specificity of training effects, we included sounds similar to those encountered in the training and additionally rhythm sequences. We found no effect of training on mu suppression between pre- and post-training EEG recordings. However, task-related coherence indexing functional connectivity between electrodes over audiomotor areas increased after training. These results suggest that long-term training in musicians and short-term training in novices may be associated with different stages of audiomotor integration that can be reflected in different EEG measures. Furthermore, the changes in functional connectivity were specifically found for piano tones, and were not apparent when participants listened to rhythms, indicating some degree of specificity related to training.
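A hedged sketch of the two EEG measures contrasted above, mu-band (8-13 Hz) power suppression relative to baseline and between-electrode coherence, assuming continuous single-channel arrays and an illustrative sampling rate; this is not the authors' processing chain.

```python
import numpy as np
from scipy.signal import welch, coherence

FS = 250.0   # illustrative sampling rate (Hz)

def band_mean(f, v, lo=8.0, hi=13.0):
    m = (f >= lo) & (f <= hi)
    return v[m].mean()

def mu_suppression(baseline, task):
    """Log ratio of mu-band power, task vs baseline (< 0 = suppression)."""
    f_b, p_b = welch(baseline, fs=FS, nperseg=512)
    f_t, p_t = welch(task, fs=FS, nperseg=512)
    return np.log(band_mean(f_t, p_t) / band_mean(f_b, p_b))

def mu_coherence(ch1, ch2):
    """Mean magnitude-squared coherence between two channels, 8-13 Hz."""
    f, cxy = coherence(ch1, ch2, fs=FS, nperseg=512)
    return band_mean(f, cxy)

x = np.random.randn(4, 5000)   # stand-ins for baseline/task recordings
print(mu_suppression(x[0], x[1]), mu_coherence(x[2], x[3]))
```

The study's dissociation amounts to the first measure staying flat over training while the second increases between audiomotor electrode pairs.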
Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.
Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion
2017-07-01
The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components of the response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach, allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded in every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or its harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts, which showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency tagging with an oddball design provides a valuable complement to the classic, transient, evoked-potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication. Here, we present a novel electrophysiological approach to capture, in humans, neural markers of contrasts in fast continuous tone sequences. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight into the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
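The tagging arithmetic is easy to verify numerically. The toy simulation below (an illustrative stand-in, not the authors' EEG analysis; all amplitudes are invented) shows why a contrast added to every fifth sound of an 8 Hz train produces spectral energy at 8/5 = 1.6 Hz and its harmonics, separate from the response common to all sounds:

```python
# Toy spectrum for an AAAAB frequency-tagging sequence: sounds at 8 Hz,
# with an extra contrast-related response on every 5th sound (the B).
import numpy as np

fs, dur = 1000, 40.0                      # sampling rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs
onsets = np.arange(0, dur, 1 / 8.0)       # one sound every 125 ms

resp = np.zeros_like(t)
for i, on in enumerate(onsets):
    idx = int(on * fs)
    resp[idx] += 1.0                      # response common to every sound
    if i % 5 == 4:                        # every 5th sound is the deviant B
        resp[idx] += 0.5                  # extra contrast-related response

spectrum = np.abs(np.fft.rfft(resp)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Contrast processing is tagged at 1.6 Hz and its harmonics; the response
# common to all sounds contributes only at 8 Hz and its multiples.
for f in (1.6, 3.2, 8.0):
    print(f, spectrum[np.argmin(np.abs(freqs - f))])
```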
ERIC Educational Resources Information Center
Peter, Beate; Raskind, Wendy H.
2011-01-01
Purpose: To evaluate phenotypic expressions of speech sound disorder (SSD) in multigenerational families with evidence of familial forms of SSD. Method: Members of five multigenerational families (N = 36) produced rapid sequences of monosyllables and disyllables and tapped computer keys with repetitive and alternating movements. Results: Measures…
Implications of diadochokinesia in children with speech sound disorder.
Wertzner, Haydée Fiszbein; Pagan-Neves, Luciana de Oliveira; Alves, Renata Ramos; Barrozo, Tatiane Faria
2013-01-01
To verify the performance of children with and without speech sound disorder in oral motor skills measured by oral diadochokinesia, according to age and gender, and to compare the results obtained by two different methods of analysis. Participants were 72 subjects aged from 5 years to 7 years and 11 months, divided into four subgroups according to the presence of speech sound disorder (Study Group and Control Group) and age (<6 years and 5 months and >6 years and 5 months). Diadochokinesia skills were assessed by the repetition of the sequences 'pa', 'ta', 'ka' and 'pataka', measured both manually and by the software Motor Speech Profile®. Gender distribution differed statistically between groups, but gender did not influence the number of sequences produced per second. A correlation between the number of sequences per second and age was observed for all sequences (except for 'ka') only for the control-group children. Comparison between groups did not indicate differences in the number of sequences per second across ages. Results showed strong agreement between the values of oral diadochokinesia measured manually and by MSP. This research demonstrated the importance of using different methods of analysis in the functional evaluation of oro-motor processing in children with speech sound disorder, and evidenced oro-motor difficulties in children under eight years of age.
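For readers unfamiliar with the measure, an oral diadochokinesia rate is simply the number of repeated sequences divided by the elapsed time; a minimal sketch with hypothetical onset times:

```python
# Toy diadochokinesia rate: 'pataka' sequences per second, computed from
# hypothetical detected onset times (seconds); real systems such as the
# Motor Speech Profile derive onsets from the acoustic signal itself.
onsets = [0.00, 0.42, 0.85, 1.27, 1.70, 2.14, 2.55]
rate = (len(onsets) - 1) / (onsets[-1] - onsets[0])
print(round(rate, 2), "sequences per second")  # ~2.35
```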
NASA Technical Reports Server (NTRS)
Platt, R.
1999-01-01
This is the Performance Verification Report, Initial Comprehensive Performance Test Report, P/N 1331200-2-IT, S/N 105/A2, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). The specification establishes the requirements for the Comprehensive Performance Test (CPT) and Limited Performance Test (LPT) of the Advanced Microwave Sounding Unit-A2 (AMSU-A2), referred to herein as the unit. The unit is defined on Drawing 1331200. Section 1.2, Test procedure sequence: the sequence in which the several phases of this test procedure take place is shown in Figure 1, although the phases may be performed in any order.
NASA Astrophysics Data System (ADS)
West, Eva; Wallin, Anita
2013-04-01
Learning abstract concepts such as sound often involves an ontological shift, because conceptualizing sound transmission as a process of motion demands abandoning the notion of sound transmission as a transfer of matter. Grasping and using a generalized model of sound transmission therefore poses great challenges for students. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories found in their answers. The results indicated a shift in students' understandings from a theory of matter before the intervention to a theory of process afterwards. This pattern was found in all groups of students irrespective of age; thus, teaching about sound and sound transmission is fruitful as early as ages 10-11. However, the older the students, the more advanced their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also point to some crucial aspects of teaching and learning about the scientific content of sound.
Analysis of swallowing sounds using hidden Markov models.
Aboofazeli, Mohammad; Moussavi, Zahra
2008-04-01
In recent years, acoustical analysis of the swallowing mechanism has received considerable attention due to its diagnostic potential. This paper presents a hidden Markov model (HMM) based method for swallowing sound segmentation and classification. Swallowing sound signals of 15 healthy and 11 dysphagic subjects were studied. The signals were divided into sequences of 25 ms segments, each of which was represented by seven features. The sequences of features were modeled by HMMs. Trained HMMs were used for segmentation of the swallowing sounds into three distinct phases, i.e., initial quiet period, initial discrete sounds (IDS) and bolus transit sounds (BTS). Among the seven features, the accuracy of segmentation by the HMM based on the multi-scale product of wavelet coefficients was higher than that of the other HMMs, and the linear prediction coefficient (LPC)-based HMM showed the weakest performance. In addition, HMMs were used for classification of the swallowing sounds of healthy subjects and dysphagic patients. Classification accuracy of different HMM configurations was investigated. When we increased the number of states of the HMMs from 4 to 8, the classification error gradually decreased; in most cases, the classification error for N=9 was higher than that for N=8. Among the seven features used, root mean square (RMS) and waveform fractal dimension (WFD) showed the best performance in the HMM-based classification of swallowing sounds. When the sequences of features of the IDS segment were modeled separately, the accuracy reached 85.5%. As a second-stage classification, a screening algorithm was used, which correctly classified all the subjects but one healthy subject when RMS was used as the characteristic feature of the swallowing sounds and the number of states was set to N=8.
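A minimal sketch of the modeling idea, assuming the hmmlearn library and substituting two toy features (RMS and zero-crossing rate) and synthetic signals for the paper's seven features and real recordings:

```python
# Sketch of HMM modelling of feature sequences from 25 ms sound segments,
# loosely following the pipeline described above; all data are synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def features(signal, fs=8000, seg_ms=25):
    """Split a signal into 25 ms segments and compute toy features
    (RMS and zero-crossing rate) for each segment."""
    n = int(fs * seg_ms / 1000)
    segs = signal[: len(signal) // n * n].reshape(-1, n)
    rms = np.sqrt((segs ** 2).mean(axis=1))
    zcr = (np.diff(np.sign(segs), axis=1) != 0).mean(axis=1)
    return np.column_stack([rms, zcr])

# Synthetic stand-ins for "healthy" and "dysphagic" recordings.
healthy = [features(rng.normal(0, 1.0, 16000)) for _ in range(10)]
dysphagic = [features(rng.normal(0, 2.5, 16000)) for _ in range(10)]

def fit(seqs, n_states=8):
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    return GaussianHMM(n_components=n_states, covariance_type="diag",
                       n_iter=50, random_state=0).fit(X, lengths)

hmm_h, hmm_d = fit(healthy), fit(dysphagic)

# Classify a new sequence by comparing log-likelihoods under each model.
test = features(rng.normal(0, 2.5, 16000))
print("dysphagic" if hmm_d.score(test) > hmm_h.score(test) else "healthy")
```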
ERIC Educational Resources Information Center
Kudoh, Masaharu; Shibuki, Katsuei
2006-01-01
We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…
Andreeva, I G; Vartanian, I A
2012-01-01
The ability to evaluate the direction of amplitude changes in sound stimuli was studied in adults and in 11-12- and 15-16-year-old adolescents. Sequences of 1-kHz tone fragments with time-varying amplitude were used to model approaching and withdrawing sound sources. When estimating the direction of amplitude changes, the 11-12-year-olds made significantly more errors than the other two groups, including in repeated experiments. The structure of errors, i.e., the ratio of errors made on stimuli increasing versus decreasing in amplitude, also differed between adolescents and adults. The discussion addresses how nonspecific activation of the cerebral cortex in adolescents may affect decision-making about complex sound stimuli, including the estimation of approach and withdrawal of a sound source.
Accuracy of abdominal auscultation for bowel obstruction.
Breum, Birger Michael; Rud, Bo; Kirkegaard, Thomas; Nordentoft, Tyge
2015-09-14
To investigate the accuracy and inter-observer variation of bowel sound assessment in patients with clinically suspected bowel obstruction. Bowel sounds were recorded in patients with suspected bowel obstruction using a Littmann(®) Electronic Stethoscope. The recordings were processed to yield 25-s sound sequences in random order on PCs. Observers, recruited from doctors within the department, classified the sound sequences as either normal or pathological. The reference tests for bowel obstruction were intraoperative and endoscopic findings and clinical follow up. Sensitivity and specificity were calculated for each observer and compared between junior and senior doctors. Interobserver variation was measured using the Kappa statistic. Bowel sound sequences from 98 patients were assessed by 53 (33 junior and 20 senior) doctors. Laparotomy was performed in 47 patients, 35 of whom had bowel obstruction. Two patients underwent colorectal stenting due to large bowel obstruction. The median sensitivity and specificity was 0.42 (range: 0.19-0.64) and 0.78 (range: 0.35-0.98), respectively. There was no significant difference in accuracy between junior and senior doctors. The median frequency with which doctors classified bowel sounds as abnormal did not differ significantly between patients with and without bowel obstruction (26% vs 23%, P = 0.08). The 53 doctors made up 1378 unique pairs and the median Kappa value was 0.29 (range: -0.15-0.66). Accuracy and inter-observer agreement was generally low. Clinical decisions in patients with possible bowel obstruction should not be based on auscultatory assessment of bowel sounds.
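The Kappa statistic quantifies agreement beyond chance, kappa = (p_o - p_e) / (1 - p_e). A minimal sketch, assuming scikit-learn and two made-up observers rating ten sequences:

```python
# Chance-corrected inter-observer agreement on bowel sound classifications
# (1 = pathological, 0 = normal); the rating vectors are invented.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
rater_b = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
print(cohen_kappa_score(rater_a, rater_b))  # 0.4 here; 0 = chance, 1 = perfect
```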
Kim, Yoon Jae; Kim, Yoon Young
2010-10-01
This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can represent air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization are solved to check the efficiency of the developed method.
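The transfer-matrix idea can be illustrated at normal incidence: each layer is a 2x2 matrix relating acoustic pressure and velocity on its two faces, and layers chain by matrix multiplication. The sketch below is a simplified stand-in (limp-mass panels only, no perforated-panel model, invented dimensions), not the paper's unified interpolated matrix:

```python
# Transfer-matrix sketch of layered sound barriers at normal incidence.
import numpy as np

rho0, c0 = 1.21, 343.0            # air density (kg/m^3), sound speed (m/s)
Z0 = rho0 * c0                    # characteristic impedance of air

def air_gap(d, f):                # fluid layer of thickness d (m)
    k = 2 * np.pi * f / c0
    return np.array([[np.cos(k * d), 1j * Z0 * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z0, np.cos(k * d)]])

def limp_panel(m, f):             # m: surface mass density (kg/m^2)
    return np.array([[1.0, 1j * 2 * np.pi * f * m],
                     [0.0, 1.0]])

def transmission_loss(layers, f):
    T = np.eye(2, dtype=complex)
    for layer in layers:
        T = T @ layer(f)          # chain the layers
    t = 2.0 / (T[0, 0] + T[0, 1] / Z0 + T[1, 0] * Z0 + T[1, 1])
    return -20 * np.log10(abs(t)) # TL in dB

f = 500.0                          # evaluation frequency, Hz
single = [lambda f: limp_panel(20.0, f)]
double = [lambda f: limp_panel(10.0, f),
          lambda f: air_gap(0.05, f),
          lambda f: limp_panel(10.0, f)]
print(transmission_loss(single, f), transmission_loss(double, f))
```

Changing the sequence and thickness of the layers changes the computed loss, which is the quantity the paper's optimization manipulates.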
Human sensorimotor tracking of continuous subliminal deviations from isochrony.
Madison, Guy; Merker, Björn
2004-11-03
We show that people continuously react to time perturbations in the range 3-96 ms in otherwise isochronous sound sequences. Musically trained and untrained participants were asked to synchronize with a sequence of sounds, and these two groups performed almost equally below the threshold for conscious detection of the perturbations. Above this threshold the motor reactions accounted for a larger proportion of the stimulus deviations in musically trained participants.
Dufour, Valérie; Pasquaretta, Cristian; Gayet, Pierre; Sterck, Elisabeth H M
2017-01-01
In a previous study (Dufour et al., 2015) we reported the unusual characteristics of the drumming performance of a chimpanzee named Barney. His sound production, several sequences of repeated drumming on an up-turned plastic barrel, shared features typical for human musical drumming: it was rhythmical, decontextualized, and well controlled by the chimpanzee. This type of performance raises questions about the origins of our musicality. Here we recorded spontaneously occurring events of sound production with objects in Barney's colony. First we collected data on the duration of sound making. Here we examined whether (i) the context in which objects were used for sound production, (ii) the sex of the producer, (iii) the medium, and (iv) the technique used for sound production had any effect on the duration of sound making. Interestingly, duration of drumming differed across contexts, sex, and techniques. Then we filmed as many events as possible to increase our chances of recording sequences that would be musically similar to Barney's performance in the original study. We filmed several long productions that were rhythmically interesting. However, none fully met the criteria of musical sound production, as previously reported for Barney.
Temporal plasticity in auditory cortex improves neural discrimination of speech sounds
Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.
2017-01-01
Background: Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis: We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods: VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results: Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion: This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520
Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review
Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi
2015-01-01
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in the percept of the external world and that common perceptual and neural underlying mechanisms would exist for spatiotemporal processing. PMID:26733827
Thompson, P O; Findley, L T; Vidal, O
1992-12-01
Low-frequency vocalizations were recorded from fin whales, Balaenoptera physalus, in the Gulf of California, Mexico, during three cruises. In March 1985, recorded 20-Hz pulses were in sequences of regular 9-s interpulse intervals. In August 1987, nearly all were in sequences of doublets with alternating 5- and 18-s interpulse intervals. No 20-Hz pulse sequences of any kind were detected in February 1987. The typical pulse modulated from 42 to 20 Hz and its median duration was 0.7 s (1985 data). Most other fin whale sounds were also short tonal pulses averaging 82, 56, and 68 Hz, respectively, for the three cruises; 89% were modulated in frequency, mostly downward. Compared to Atlantic and Pacific Ocean regions, Gulf of California 20-Hz pulses were unique in terms of frequency modulation, interpulse sound levels, and temporal patterns. Fin whales in the Gulf may represent a regional stock revealed by their sound characteristics, a phenomenon previously shown for humpback whales, birds, and fish. Regional differences in fin whale sounds were found in comparisons of Atlantic and Pacific locations.
Acoustic analysis in Mudejar-Gothic churches: Experimental results
NASA Astrophysics Data System (ADS)
Galindo, Miguel; Zamarreño, Teófilo; Girón, Sara
2005-05-01
This paper describes the preliminary results of research work in acoustics, conducted in a set of 12 Mudejar-Gothic churches in the city of Seville in the south of Spain. Despite a common architectural style, the churches feature individual characteristics and have volumes ranging from 3947 to 10 708 m3. Acoustic parameters were measured in unoccupied churches according to the ISO-3382 standard. An extensive experimental study was carried out using impulse response analysis through a maximum length sequence measurement system in each church. It covered aspects such as reverberation (reverberation times, early decay times), distribution of sound levels (sound strength), early-to-late sound energy parameters derived from the impulse responses (center time, clarity for speech, clarity, definition, lateral energy fraction), and speech intelligibility (rapid speech transmission index), which all take both spectral and spatial distribution into account. Background noise was also measured to obtain the NR indices. The study describes the acoustic field inside each temple and establishes a discussion of each of the acoustic descriptors mentioned, using the available theoretical models and the principles of architectural acoustics. Analysis of the quality of the spaces for music and speech is carried out according to the most widespread criteria for auditoria.
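The maximum length sequence (MLS) technique mentioned above recovers an impulse response by exploiting the fact that an MLS has a nearly ideal (delta-like) circular autocorrelation. A minimal sketch with a toy two-tap "room" standing in for a real measurement:

```python
# Impulse-response measurement with a maximum length sequence (MLS).
import numpy as np
from scipy.signal import max_len_seq

seq, _ = max_len_seq(14)             # 2**14 - 1 samples of 0/1
x = 2.0 * seq - 1.0                  # map to +/-1 excitation signal
N = x.size

# Toy "room": direct sound plus one attenuated, delayed reflection.
h_true = np.zeros(256)
h_true[0], h_true[100] = 1.0, 0.4
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true, N)))

# Circular cross-correlation of output with input recovers h, because
# the MLS autocorrelation is (nearly) a delta function.
h_est = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x)))) / N
print(np.allclose(h_est[:256], h_true, atol=1e-2))  # True
```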
The influence of musical experience on lateralisation of auditory processing.
Spajdel, Marián; Jariabková, Katarína; Riecanský, Igor
2007-11-01
The influence of musical experience on free-recall dichotic listening to environmental sounds, two-tone sequences, and consonant-vowel (CV) syllables was investigated. A total of 60 healthy right-handed participants were divided into two groups according to their active musical competence ("musicians" and "non-musicians"). In both groups, we found a left ear advantage (LEA) for nonverbal stimuli (environmental sounds and two-tone sequences) and a right ear advantage (REA) for CV syllables. Dichotic listening to environmental sounds was uninfluenced by musical experience. The total accuracy of recall for two-tone sequences was higher in musicians than in non-musicians but the lateralisation was similar in both groups. For CV syllables a lower REA was found in male but not female musicians in comparison to non-musicians. The results indicate a specific sex-dependent effect of musical experience on lateralisation of phonological auditory processing.
Monkeys Match and Tally Quantities across Senses
ERIC Educational Resources Information Center
Jordan, Kerry E.; MacLean, Evan L.; Brannon, Elizabeth M.
2008-01-01
We report here that monkeys can actively match the number of sounds they hear to the number of shapes they see and present the first evidence that monkeys sum over sounds and sights. In Experiment 1, two monkeys were trained to choose a simultaneous array of 1-9 squares that numerically matched a sample sequence of shapes or sounds. Monkeys…
Gordon, Shira D; Ter Hofstede, Hannah M
2018-03-22
Animals co-occur with multiple predators, making sensory systems that can encode information about diverse predators advantageous. Moths in the families Noctuidae and Erebidae have ears with two auditory receptor cells (A1 and A2) used to detect the echolocation calls of predatory bats. Bat communities contain species that vary in echolocation call duration, and the dynamic range of A1 is limited by the duration of sound, suggesting that A1 provides less information about bats with shorter echolocation calls. To test this hypothesis, we obtained intensity-response functions for both receptor cells across many moth species for sound pulse durations representing the range of echolocation call durations produced by bat species in northeastern North America. We found that the threshold and dynamic range of both cells varied with sound pulse duration. The number of A1 action potentials per sound pulse increases linearly with increasing amplitude for long-duration pulses, saturating near the A2 threshold. For short sound pulses, however, A1 saturates with only a few action potentials per pulse at amplitudes far lower than the A2 threshold for both single sound pulses and pulse sequences typical of searching or approaching bats. Neural adaptation was only evident in response to approaching bat sequences at high amplitudes, not search-phase sequences. These results show that, for short echolocation calls, a large range of sound levels cannot be coded by moth auditory receptor activity, resulting in no information about the distance of a bat, although differences in activity between ears might provide information about direction. © 2018. Published by The Company of Biologists Ltd.
An acoustic survey of beaked whales at Cross Seamount near Hawaii.
McDonald, Mark A; Hildebrand, John A; Wiggins, Sean M; Johnston, David W; Polovina, Jeffrey J
2009-02-01
An acoustic record from Cross Seamount, southwest of Hawaii, revealed sounds characteristic of beaked whale echolocation at the same relative abundance year-round (270 of 356 days), occurring almost entirely at night. The most common sound had a linear frequency upsweep from 35 to 100 kHz (the bandwidth of recording), an interpulse interval of 0.11 s, and a duration of at least 932 μs. A less common upsweep sound with a shorter interpulse interval and slower sweep rate was also present. Sounds matching Cuvier's beaked whale were not detected, and Blainville's beaked whale sounds were detected on only one occasion.
Wang, Qingcui; Bao, Ming; Chen, Lihan
2014-01-01
Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) are also applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer that question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound, forming two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the differentiation of frequencies between the two auditory frames, in addition to the spatiotemporal cues of Experiment 1. Listeners were required to make a two-alternative forced choice (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by perceptual decisions about the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout plays a lesser role in perceptual organization. These results can be accounted for by the 'peripheral channeling' theory.
Correlation between Identification Accuracy and Response Confidence for Common Environmental Sounds
set of environmental sounds with stimulus control and precision. The present study is one in a series of efforts to provide a baseline evaluation of a...sounds from six broad categories: household items, alarms, animals, human generated, mechanical, and vehicle sounds. Each sound was presented five times
Congenital amusia: a short-term memory deficit for non-verbal, but not verbal sounds.
Tillmann, Barbara; Schulze, Katrin; Foxton, Jessica M
2009-12-01
Congenital amusia refers to a lifelong disorder of music processing and is linked to pitch-processing deficits. The present study investigated congenital amusics' short-term memory for tones, musical timbres and words. Sequences of five events (tones, timbres or words) were presented in pairs and participants had to indicate whether the sequences were the same or different. The performance of congenital amusics confirmed a memory deficit for tone sequences, but showed normal performance for word sequences. For timbre sequences, amusics' memory performance was impaired in comparison to matched controls. Overall timbre performance was found to be correlated with melodic contour processing (as assessed by the Montreal Battery of Evaluation of Amusia). The present findings show that amusics' deficits extend to non-verbal sound material other than pitch, in this case timbre, while not affecting memory for verbal material. This is in line with previous suggestions about the domain-specificity of congenital amusia.
Do humans and nonhuman animals share the grouping principles of the Iambic-Trochaic Law?
de la Mora, Daniela M.; Nespor, Marina; Toro, Juan M.
2014-01-01
The Iambic-Trochaic Law describes humans’ tendency to form trochaic groups over sequences varying in pitch or intensity (i.e., the loudest or highest sound marks group beginnings), and iambic groups over sequences varying in duration (i.e., the longest sound marks group endings). The extent to which these perceptual biases are shared by humans and nonhuman animals is yet unclear. In Experiment 1, we trained rats to discriminate pitch-alternating sequences of tones from sequences randomly varying in pitch. In Experiment 2, rats were trained to discriminate duration-alternating sequences of tones from sequences randomly varying in duration. We found that nonhuman animals group as trochees sequences based on pitch variations, but they do not group as iambs sequences varying in duration. Importantly, humans grouped the same stimuli following the principles of the Iambic-Trochaic Law (Experiment 3). These results suggest an early emergence of the trochaic rhythmic grouping bias based on pitch, possibly relying on perceptual abilities shared by humans and other mammals as well, whereas the iambic rhythmic grouping bias based on duration might depend on language experience. PMID:22956287
Wheeler, Alyssa R.; Fulton, Kara A.; Gaudette, Jason E.; Simmons, Ryan A.; Matsuo, Ikuo; Simmons, James A.
2016-01-01
Big brown bats (Eptesicus fuscus) emit trains of brief, wideband frequency-modulated (FM) echolocation sounds and use echoes of these sounds to orient, find insects, and guide flight through vegetation. They are observed to emit sounds that alternate between short and long inter-pulse intervals (IPIs), forming sonar sound groups. The occurrence of these strobe groups has been linked to flight in cluttered acoustic environments, but how exactly bats use sonar sound groups to orient and navigate is still a mystery. Here, the production of sound groups during clutter navigation was examined. Controlled flight experiments were conducted where the proximity of the nearest obstacles was systematically decreased while the extended scene was kept constant. Four bats flew along a corridor of varying widths (100, 70, and 40 cm) bounded by rows of vertically hanging plastic chains while in-flight echolocation calls were recorded. Bats shortened their IPIs for more rapid spatial sampling and also grouped their sounds more tightly when flying in narrower corridors. Bats emitted echolocation calls with progressively shorter IPIs over the course of a flight, and began their flights by emitting shorter starting IPI calls when clutter was denser. The percentage of sound groups containing 3 or more calls increased with increasing clutter proximity. Moreover, IPI sequences having internal structure become more pronounced when corridor width narrows. A novel metric for analyzing the temporal organization of sound sequences was developed, and the results indicate that the time interval between echolocation calls depends heavily on the preceding time interval. The occurrence of specific IPI patterns were dependent upon clutter, which suggests that sonar sound grouping may be an adaptive strategy for coping with pulse-echo ambiguity in cluttered surroundings. PMID:27445723
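The reported dependence of each inter-pulse interval (IPI) on its predecessor can be pictured with a lag-1 return map. The sketch below, with invented IPI data mimicking alternating short/long sonar sound groups, only illustrates that kind of analysis and is not the paper's novel metric:

```python
# Toy test of whether each inter-pulse interval depends on the previous
# one; real data would come from detected echolocation call onsets.
import numpy as np

rng = np.random.default_rng(1)
# Alternating short (20 ms) / long (80 ms) IPIs with jitter -> sound groups
ipis = np.ravel(np.column_stack([rng.normal(0.020, 0.002, 200),
                                 rng.normal(0.080, 0.008, 200)]))

# Lag-1 return map: correlate each interval with its predecessor.
r = np.corrcoef(ipis[:-1], ipis[1:])[0, 1]
print(r)  # strongly negative: a short interval predicts a long one
```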
Characteristics of gunshot sound displays by North Atlantic right whales in the Bay of Fundy.
Parks, Susan E; Hotchkin, Cara F; Cortopassi, Kathryn A; Clark, Christopher W
2012-04-01
North Atlantic right whales (Eubalaena glacialis) produce a loud, broadband signal referred to as the gunshot sound. These distinctive sounds may be suitable for passive acoustic monitoring and detection of right whales; however, little is known about the prevalence of these sounds in important right whale habitats, such as the Bay of Fundy. This study investigates the timing and distribution of gunshot sound production on the summer feeding grounds using an array of five marine acoustic recording units deployed in the Bay of Fundy, Canada in mid-summer 2004 and 2005. Gunshot sounds were common, detected on 37 of 38 recording days. Stereotyped gunshot bouts averaged 1.5 h, with some bouts exceeding 7 h in duration with up to seven individuals producing gunshots at any one time. Bouts were more commonly detected in the late afternoon and evening than during the morning hours. Locations of gunshots in bouts indicated that whales producing the sounds were either stationary or showed directional travel, suggesting gunshots have different communication functions depending on behavioral context. These results indicate that gunshots are a common right whale sound produced during the summer months and are an important component in the acoustic communication system of this endangered species.
First-impression bias effects on mismatch negativity to auditory spatial deviants.
Fitzgerald, Kaitlin; Provost, Alexander; Todd, Juanita
2018-04-01
Internal models of regularities in the world serve to facilitate perception as redundant input can be predicted and neural resources conserved for that which is new or unexpected. In the auditory system, this is reflected in an evoked potential component known as mismatch negativity (MMN). MMN is elicited by the violation of an established regularity to signal the inaccuracy of the current model and direct resources to the unexpected event. Prevailing accounts suggest that MMN amplitude will increase with stability in regularity; however, observations of first-impression bias contradict stability effects. If tones rotate probabilities as a rare deviant (p = .125) and common standard (p = .875), MMN elicited to the initial deviant tone reaches maximal amplitude faster than MMN to the first standard when later encountered as deviant-a differential pattern that persists throughout rotations. Sensory inference is therefore biased by longer-term contextual information beyond local probability statistics. Using the same multicontext sequence structure, we examined whether this bias generalizes to MMN elicited by spatial sound cues using monaural sounds (n = 19, right first deviant and n = 22, left first deviant) and binaural sounds (n = 19, right first deviant). The characteristic differential modulation of MMN to the two tones was observed in two of three groups, providing partial support for the generalization of first-impression bias to spatially deviant sounds. We discuss possible explanations for its absence when the initial deviant was delivered monaurally to the right ear. © 2017 Society for Psychophysiological Research.
Logics for Coalgebras of Finitary Set Functors
NASA Astrophysics Data System (ADS)
Sprunger, David
In this thesis, we present a collection of results about coalgebras of finitary Set functors. Our chief contribution is a logic for behavioral equivalence of states in these coalgebras. This proof system is intended to formalize a common pattern of reasoning in the study of coalgebra, commonly called proof by bisimulation or bisimulation up-to. The approach in this thesis combines these up-to techniques with a concept very close to bisimulation to show that the proof system is sound and complete with respect to behavioral equivalence. Our second category of contributions revolves around applications of coalgebra to the study of sequences and power series. The culmination of this work is a new approach to Christol's theorem, a classic result characterizing the algebraic power series in finite-characteristic rings as those whose coefficients can be produced by finite automata.
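As a concrete, elementary instance of proof by bisimulation up-to (deterministic automata are coalgebras of X ↦ 2 × X^A, and behavioral equivalence is language equivalence), the following sketch implements the classic union-find equivalence check; the example automata are invented:

```python
# Union-find check of behavioral equivalence for states of deterministic
# automata: assume pairs equivalent, merge their classes, and verify the
# consequences (the "up to equivalence" step keeps the search finite).
def equivalent(out, delta, s, t):
    """out[x]: acceptance bit; delta[x][a]: successor of x on letter a."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    todo = [(s, t)]
    while todo:
        x, y = todo.pop()
        rx, ry = find(x), find(y)
        if rx == ry:
            continue                        # already assumed equivalent
        if out[x] != out[y]:
            return False                    # observable difference found
        parent[rx] = ry                     # merge the two classes
        for a in delta[x]:
            todo.append((delta[x][a], delta[y][a]))
    return True

# States 0 and 2, in different automata, both accept even-length strings.
out = {0: True, 1: False, 2: True, 3: False}
delta = {0: {'a': 1}, 1: {'a': 0}, 2: {'a': 3}, 3: {'a': 2}}
print(equivalent(out, delta, 0, 2))         # True
```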
Delogu, Franco; Lilla, Christopher C
2017-11-01
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object-location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object-location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding; and (c) visual supremacy in spatial memory does not depend on the automaticity of object-location binding.
NASA Astrophysics Data System (ADS)
Cowan, James
This chapter summarizes and explains key concepts of building acoustics. These issues include the behavior of sound waves in rooms, the most commonly used rating systems for sound and sound control in buildings, the most common noise sources found in buildings, practical noise control methods for these sources, and the specific topic of office acoustics. Common noise issues for multi-dwelling units can be derived from most of the sections of this chapter. Books can be and have been written on each of these topics, so the purpose of this chapter is to summarize this information and provide appropriate resources for further exploration of each topic.
Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.
Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J
2016-03-01
Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.
Edwards, Jan; Beckman, Mary E.
2009-01-01
While broad-focus comparisons of consonant inventories across children acquiring different languages can suggest that phonological development follows a universal sequence, finer-grained statistical comparisons can reveal systematic differences. This cross-linguistic study of word-initial lingual obstruents examined some effects of language-specific frequencies on consonant mastery. Repetitions of real words were elicited from 2- and 3-year-old children who were monolingual speakers of English, Cantonese, Greek, or Japanese. The repetitions were recorded and transcribed by an adult native speaker of each language. The results support both language-universal effects in phonological acquisition and language-specific influences related to phoneme and phoneme-sequence frequency. These results suggest that acquisition patterns that are common across languages arise in two ways. One influence is direct, via the universal constraints imposed by the physiology and physics of speech production and perception, and how these predict which contrasts will be easy and which will be difficult for the child to learn to control. The other influence is indirect, via the way universal principles of ease of perception and production tend to influence the lexicons of many languages through commonly attested sound changes. PMID:19890438
NASA Astrophysics Data System (ADS)
Gelikonov, V. M.; Romashov, V. N.; Shabanov, D. V.; Ksenofontov, S. Yu.; Terpelov, D. A.; Shilyagin, P. A.; Gelikonov, G. V.; Vitkin, I. A.
2018-05-01
We consider a cross-polarization optical coherence tomography system with a common path for the sounding and reference waves and active maintenance of the circular polarization of the sounding wave. The system is based on forming birefringent characteristics of the total optical path that are equivalent to a quarter-wave plate with a 45° orientation of its optical axes with respect to the linearly polarized reference wave. Conditions under which any light-polarization state can be obtained using a two-element phase controller are derived. The dependence of the local cross-scattering coefficient of light in a model medium and biological tissue on the sounding-wave polarization state is demonstrated. The necessity of active maintenance of the circular polarization of the sounding wave in this common-path system (including a flexible probe) is shown, in order to realize uniform optimal conditions for cross-polarization studies of biological tissue.
Soundtrack contents and depicted sexual violence.
Pfaus, J G; Myronuk, L D; Jacobs, W J
1986-06-01
Male undergraduates were exposed to a videotaped depiction of heterosexual rape accompanied by one of three soundtracks: the original soundtrack (featuring dialogue and background rock music), relaxing music, or no sound. Subjective reports of sexual arousal, general enjoyment, perceived erotic content, and perceived pornographic content of the sequence were then provided by each subject. Results indicated that males exposed to the videotape accompanied by the original soundtrack found the sequence significantly more pornographic than males exposed to the sequence accompanied by either relaxing background music or no sound. Ratings of sexual arousal, general enjoyment, and the perceived erotic content, however, did not differ significantly across soundtrack conditions. These results are compatible with the assertion that the content of a video soundtrack may influence the impact of depicted sexual violence.
Auditory Attentional Capture: Effects of Singleton Distractor Sounds
ERIC Educational Resources Information Center
Dalton, Polly; Lavie, Nilli
2004-01-01
The phenomenon of attentional capture by a unique yet irrelevant singleton distractor has typically been studied in visual search. In this article, the authors examine whether a similar phenomenon occurs in the auditory domain. Participants searched sequences of sounds for targets defined by frequency, intensity, or duration. The presence of a…
Phonological Treatment Efficacy and Developmental Norms.
ERIC Educational Resources Information Center
Gierut, Judith A.; And Others
1996-01-01
Two studies, one within subjects and the other across subjects, evaluated the efficacy of teaching sounds in developmental sequence to nine young children (ages three to five). Treatment of later-acquired phonemes led to systemwide changes in untreated sound classes, whereas treatment of early-acquired phonemes did not. Findings suggest…
Murray Gibson
2017-12-09
Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain: a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
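A quick numerical check of the common-harmonic claim (the tuning of middle C at 261.63 Hz is an assumption for illustration):

```python
# Shared harmonics of notes whose fundamentals are in simple integer ratios.
import numpy as np

C = 261.63            # middle C, Hz (assumed tuning)
G = 1.5 * C           # perfect fifth, ratio 3/2
E = 1.25 * C          # major third, ratio 5/4

def shared(f1, f2, n=8, tol=0.1):
    """Frequencies (Hz) where the first n harmonics of f1 and f2 coincide."""
    h1 = np.arange(1, n + 1) * f1
    h2 = np.arange(1, n + 1) * f2
    return [x for x in h1 for y in h2 if abs(x - y) < tol]

print(shared(C, G))   # 3rd harmonic of C = 2nd of G (~784.9 Hz), 6th = 4th
print(shared(C, E))   # 5th harmonic of C = 4th of E (~1308.2 Hz)
```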
Music and language perception: expectations, structural integration, and cognitive sequencing.
Tillmann, Barbara
2012-10-01
Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central in music perception (i.e., which sounds are most likely to come next and at what moment should they occur?). This paper focuses on similarities in music and language cognition research, showing that music cognition research provides insight into the understanding of not only music processing but also language processing and the processing of other structured stimuli. The hypothesis of shared resources between music and language processing and of domain-general dynamic attention has motivated the development of research to test music as a means to stimulate sensory, cognitive, and motor processes. Copyright © 2012 Cognitive Science Society, Inc.
Sensory illusions: Common mistakes in physics regarding sound, light and radio waves
NASA Astrophysics Data System (ADS)
Briles, T. M.; Tabor-Morris, A. E.
2013-03-01
Optical illusions are well known as effects that we see that are not representative of reality. Sensory illusions are similar but can involve senses other than sight, such as hearing or touch. One mistake commonly noted by instructors is that students often misidentify radio signals as sound waves rather than as part of the electromagnetic spectrum. A survey of physics students from multiple high schools highlights the frequency of this common misconception, as well as other nuances of this misunderstanding. Many students appear to conclude that, since they experience radio broadcasts as sound, sound waves are the actual transmission of radio signals and not, as is actually true, a representation of those waves produced by the translator box, the radio. Steps to help students identify and correct sensory-illusion misconceptions are discussed.
Halos, D.; Hart, S.A.; Hershberger, P.; Kocan, R.
2005-01-01
In vitro explant cultures identified Ichthyophonus in 10.9% of 302 Puget Sound rockfish Sebastes emphaeus sampled from five sites in the San Juan Islands archipelago and Puget Sound, Washington, in 2003. None of the infected fish exhibited visible lesions and only a single fish was histologically positive. Significantly more females were infected (12.4%) than males (6.8%), and while infected males were only detected at two of the five sites, infected females were identified at all sites, with no significant differences in infection prevalence. Genomic sequences of Ichthyophonus isolates obtained from Puget Sound rockfish, Pacific herring Clupea pallasii, and Yukon River Chinook salmon Oncorhynchus tshawytscha were identical in both the A and B regions of the small subunit 18S ribosomal DNA but were different from Ichthyophonus sequences previously isolated from four different species of rockfish from the northeastern Pacific Ocean. Ichthyophonus in Puget Sound rockfish may not have been previously detected because the infection is subclinical in this species and earlier investigators did not utilize in vitro techniques for diagnosis of ichthyophoniasis. However, since clinical ichthyophoniasis has recently been identified in several other species of northeast Pacific rockfishes, it is hypothesized that this either is an emerging disease resulting from changing marine conditions or the result of introduction by infected southern species that appear during periodic El Niño events. © Copyright by the American Fisheries Society 2005.
Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams.
Centanni, Tracy Michelle; Booker, Anne B; Chen, Fuyi; Sloan, Andrew M; Carraway, Ryan S; Rennaker, Robert L; LoTurco, Joseph J; Kilgard, Michael P
2016-04-27
Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. In the current study, rats were subjected in utero to RNA interference targeting of the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC-) before any behavioral training. A separate group of 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds from a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate-dyslexia gene causes deficits on tasks of rapid stimulus processing. These animals also exhibited abnormal neural plasticity after training, which may be a mechanism for why some children with dyslexia do not respond to intervention. These results are in stark contrast to our previous work with a different candidate gene, which caused a different set of deficits. Our results shed some light on possible neural and genetic mechanisms causing heterogeneity in the dyslexic population. Copyright © 2016 the authors 0270-6474/16/364895-12$15.00/0.
Elfving, Lars; Helkimo, Martti; Magnusson, Tomas
2002-01-01
Temporomandibular joint (TMJ) sounds are very common among patients with temporomandibular disorders (TMD), but also in non-patient populations. A variety of different causes of TMJ sounds have been suggested, e.g., arthrotic changes in the TMJs, anatomical variations, muscular incoordination and disc displacement. In the present investigation, the prevalence and type of different joint sounds were registered in 125 consecutive patients with suspected TMD and in 125 matched controls. Some kind of joint sound was recorded in 56% of the TMD patients and in 36% of the controls. The awareness of joint sounds was higher among TMD patients as compared to controls (88% and 60%, respectively). The most common sound recorded in both groups was reciprocal clicking indicative of disc displacement, while not a single case fulfilling the criteria for clicking due to muscular incoordination was found. In the TMD group, women with disc displacement reported sleeping on the stomach significantly more often than women without disc displacement did. An increased general joint laxity was found in 39% of the TMD patients with disc displacement, while this was found in only 9% of the patients with disc displacement in the control group. To conclude, disc displacement is probably the most common cause of TMJ sounds, while the existence of TMJ sounds due to muscular incoordination can be questioned. Furthermore, sleeping on the stomach might be associated with disc displacement, while general joint laxity is probably not a causative factor but a care-seeking factor in patients with disc displacement.
Silence and the Notion of the Commons.
ERIC Educational Resources Information Center
Franklin, Ursula
1994-01-01
Stresses the value of silence, the right to have silence, and how technology has manipulated the sound environment and therefore taken silence out of common availability. Discusses noise pollution and the manipulative use of sound for private gain. Suggests taking action to restore the right to silence. (LP)
Noise-robust speech recognition through auditory feature detection and spike sequence decoding.
Schafer, Phillip B; Jin, Dezhe Z
2014-03-01
Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences: one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common subsequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
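A minimal sketch of the longest-common-subsequence (LCS) scoring idea named above; the spike sequences, neuron labels, and length normalization are toy assumptions, not the authors' implementation:

```python
# Template matching via LCS: a test spike sequence is assigned to the word
# whose stored template sequence it overlaps most, in order.
def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def classify(test_seq, templates):
    """Return the word whose template is most similar under a
    length-normalized LCS score."""
    def score(t):
        return lcs_length(test_seq, t) / max(len(test_seq), len(t))
    return max(templates, key=lambda word: score(templates[word]))

# Toy usage: spike sequences as ordered lists of feature-detector IDs.
templates = {"one": ["n3", "n7", "n2", "n9"], "two": ["n5", "n1", "n8"]}
print(classify(["n3", "n2", "n9"], templates))  # -> "one"
```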
Perception of environmental sounds by experienced cochlear implant patients.
Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan
2011-01-01
Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well-being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. HINT and CNC scores in quiet moderately correlated with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. Present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations, and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
Software-Based Scoring and Sound Design: An Introductory Guide for Music Technology Instruction
ERIC Educational Resources Information Center
Walzer, Daniel A.
2016-01-01
This article explores the creative function of virtual instruments, sequencers, loops, and software-based synthesizers to introduce basic scoring and sound design concepts for visual media in an introductory music technology course. Using digital audio workstations with user-focused and configurable options, novice composers can hone a broad range…
Discovering Structure in Auditory Input: Evidence from Williams Syndrome
ERIC Educational Resources Information Center
Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette
2010-01-01
We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…
Event-Related Potentials Index Segmentation of Nonsense Sounds
ERIC Educational Resources Information Center
Sanders, Lisa D.; Ameral, Victoria; Sayles, Kathryn
2009-01-01
To understand the world around us, continuous streams of information including speech must be segmented into units that can be mapped onto stored representations. Recent evidence has shown that event-related potentials (ERPs) can index the online segmentation of sound streams. In the current study, listeners were trained to recognize sequences of…
Frequency-Shift Detectors Bind Binaural as Well as Monaural Frequency Representations
ERIC Educational Resources Information Center
Carcagno, Samuele; Semal, Catherine; Demany, Laurent
2011-01-01
Previous psychophysical work provided evidence for the existence of automatic frequency-shift detectors (FSDs) that establish perceptual links between successive sounds. In this study, we investigated the characteristics of the FSDs with respect to the binaural system. Listeners were presented with sound sequences consisting of a chord of pure…
Automated segmentation of linear time-frequency representations of marine-mammal sounds.
Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I
2013-09-01
Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds offers various applications, including recognition, localization, and density estimation. This study introduces an automated, low-parameter spectrogram segmentation method based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a Chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then, through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data comprising long sequences of whistles from common dolphins, collected in the Bay of Biscay (France). The proposed method is also compared with two alternative approaches: the first is smoothing and thresholding of the spectrogram; the second is thresholding of the spectrogram followed by the use of morphological operators to gather the time-frequency bins and to remove false positives. The method is shown to increase the probability of detection for the same probability of false alarm.
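A minimal sketch of the first (bin-level) detection step under the stated noise model; the false-alarm probability and the synthetic input are assumptions, and the second, region-gathering step is omitted:

```python
# Neyman-Pearson thresholding of a power spectrogram under a chi-squared
# noise model, as in the first step described above.
import numpy as np
from scipy import signal, stats

fs = 48000
x = np.random.randn(fs * 2)              # stand-in for a hydrophone recording
f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=1024)

# For Gaussian noise, each power-spectrogram bin behaves as a scaled
# chi-squared variable with 2 degrees of freedom; estimate the scale
# robustly from the median so sparse whistle energy does not bias it.
scale = np.median(Sxx) / stats.chi2.median(df=2)

p_fa = 1e-3                               # chosen false-alarm probability
threshold = stats.chi2.ppf(1 - p_fa, df=2) * scale
detection_mask = Sxx > threshold          # bin-level detections
print(detection_mask.mean())              # ~p_fa on a pure-noise input
```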
ERIC Educational Resources Information Center
Giordano, Bruno L.; McDonnell, John; McAdams, Stephen
2010-01-01
The neurocognitive processing of environmental sounds and linguistic stimuli shares common semantic resources and can lead to the activation of motor programs for the generation of the passively heard sound or speech. We investigated the extent to which the cognition of environmental sounds, like that of language, relies on symbolic mental…
Infant auditory short-term memory for non-linguistic sounds.
Ross-Sheehy, Shannon; Newman, Rochelle S
2015-04-01
This research explores auditory short-term memory (STM) capacity for non-linguistic sounds in 10-month-old infants. Infants were presented with auditory streams composed of repeating sequences of either 2 or 4 unique instruments (e.g., flute, piano, cello; 350 or 700 ms in duration) followed by a 500-ms retention interval. These instrument sequences either stayed the same for every repetition (Constant) or changed by 1 instrument per sequence (Varying). Using the head-turn preference procedure, infant listening durations were recorded for each stream type (2- or 4-instrument sequences composed of 350- or 700-ms notes). Preference for the Varying stream was taken as evidence of auditory STM because detection of the novel instrument required memory for all of the instruments in a given sequence. Results demonstrate that infants listened longer to Varying streams for 2-instrument sequences, but not 4-instrument sequences, composed of 350-ms notes (Experiment 1), although this effect did not hold when note durations were increased to 700 ms (Experiment 2). Experiment 3 replicates and extends results from Experiments 1 and 2 and provides support for a duration account of capacity limits in infant auditory STM. Copyright © 2014 Elsevier Inc. All rights reserved.
Analysis of High Temporal and Spatial Observations of Hurricane Joaquin During TCI-15
NASA Technical Reports Server (NTRS)
Creasey, Robert; Elsberry, Russell L.; Velden, Chris; Cecil, Daniel J.; Bell, Michael; Hendricks, Eric A.
2016-01-01
Objectives: Provide an example of why analysis of high-density soundings across Hurricane Joaquin also requires highly accurate center positions; describe the technique for calculating 3-D zero-wind center positions from the highly accurate GPS positions of sequences of High-Density Sounding System (HDSS) soundings as they fall from 10 km to the ocean surface; illustrate the vertical tilt of the vortex above 4-5 km during two center passes through Hurricane Joaquin on 4 October 2015.
van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles
2015-01-01
The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts. PMID:26579044
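A minimal sketch of the two stimulus timings contrasted above; the tone count and inter-stimulus interval (ISI) ranges are assumptions:

```python
# Rhythmic vs. random tone-onset schedules with matched mean ISI, so only
# temporal predictability differs between the two conditions.
import numpy as np

rng = np.random.default_rng(0)
n_tones, mean_isi = 100, 0.8                     # seconds; assumed values

# Rhythmic block: identical ISIs.
rhythmic_onsets = np.arange(n_tones) * mean_isi

# Random block: jittered ISIs drawn around the same mean.
random_isis = rng.uniform(0.4, 1.2, size=n_tones - 1)
random_onsets = np.concatenate(([0.0], np.cumsum(random_isis)))
```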
Auditory-Motor Processing of Speech Sounds
Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.
2013-01-01
The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846
Multistability in auditory stream segregation: a predictive coding view
Winkler, István; Denham, Susan; Mill, Robert; Bőhm, Tamás M.; Bendixen, Alexandra
2012-01-01
Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggest that some, perhaps many of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm. PMID:22371621
ERIC Educational Resources Information Center
Yu, Yue; Kushnir, Tamar
2016-01-01
This study explores the role of a particular social cue--the "sequence" of demonstrated actions and events--in preschooler's categorization. A demonstrator sorted objects that varied on both a surface feature (color) and a nonobvious property (sound made when shaken). Children saw a sequence of actions in which the nonobvious property…
A Series of Case Studies of Tinnitus Suppression With Mixed Background Stimuli in a Cochlear Implant
Keiner, A. J.; Walker, Kurt; Deshpande, Aniruddha K.; Witt, Shelley; Killian, Matthijs; Ji, Helena; Patrick, Jim; Dillier, Norbert; van Dijk, Pim; Lai, Wai Kong; Hansen, Marlan R.; Gantz, Bruce
2015-01-01
Purpose: Background sounds provided by a wearable sound playback device were mixed with the acoustical input picked up by a cochlear implant speech processor in an attempt to suppress tinnitus. Method: First, patients were allowed to listen to several sounds and to select up to 4 sounds that they thought might be effective. These stimuli were programmed to loop continuously in the wearable playback device. Second, subjects were instructed to use 1 background sound each day on the wearable device, and they sequenced the selected background sounds during a 28-day trial. Patients were instructed to go to a website at the end of each day and rate the loudness and annoyance of the tinnitus as well as the acceptability of the background sound. Patients completed the Tinnitus Primary Function Questionnaire (Tyler, Stocking, Secor, & Slattery, 2014) at the beginning of the trial. Results: Results indicated that background sounds were very effective at suppressing tinnitus. There was considerable variability in sounds preferred by the subjects. Conclusion: The study shows that a background sound mixed with the microphone input can be effective for suppressing tinnitus during daily use of the sound processor in selected cochlear implant users. PMID:26001407
Application of acoustic radiosity methods to noise propagation within buildings
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter
2005-09-01
The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.
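For orientation, one textbook statement of the radiosity integral equation for diffusely reflecting surfaces is given below; this is a standard form, not necessarily the authors' exact formulation:

```latex
% The energy flux B leaving surface point x equals its emission E plus the
% reflected fraction rho of energy arriving from every other surface point y,
% weighted by the diffuse form-factor kernel over the enclosure surface S.
B(x) \;=\; E(x) \;+\; \rho(x) \int_{S} B(y)\,
      \frac{\cos\theta_x \cos\theta_y}{\pi\, r_{xy}^{2}}\, \mathrm{d}A_y
```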
ERIC Educational Resources Information Center
La Brecque, Richard
This paper clarifies core concepts in a Kentucky judge's decision that the State General Assembly has failed to provide an efficient system of common schools. Connecting "efficiency" of educational systems to "equality of educational opportunity," the paper argues that the realization of a constitutionally sound, efficient…
Know thy sound: perceiving self and others in musical contexts.
Sevdalis, Vassilis; Keller, Peter E
2014-10-01
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory-motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual-motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice. Copyright © 2014 Elsevier B.V. All rights reserved.
Malinina, E S
2014-01-01
The spatial specificity of the auditory aftereffect was studied after a short-time adaptation (5 s) to broadband noise (20-20000 Hz). Adapting stimuli were sequences of noise impulses with constant amplitude; test stimuli had either constant or changing amplitude: an increase in the amplitude of impulses in a sequence was perceived by listeners as approach of the sound source, while a decrease was perceived as its withdrawal. The experiments were performed in an anechoic chamber. The auditory aftereffect was estimated under the following conditions: the adapting and test stimuli were presented from a loudspeaker located at a distance of 1.1 m from the listeners (the subjectively near spatial domain) or 4.5 m from the listeners (the subjectively far spatial domain), or the adapting and test stimuli were presented from different distances. The obtained data showed that perception of the imitated movement of the sound source in both spatial domains had common characteristic peculiarities that manifested themselves both under control conditions without adaptation and after adaptation to noise. In the absence of adaptation, for both distances, an asymmetry of psychophysical curves was observed: the listeners estimated the test stimuli more often as approaching. The overestimation of test stimuli as approaching was more pronounced at presentation from the distance of 1.1 m, i.e., from the subjectively near spatial domain. After adaptation to noise, the aftereffects showed spatial specificity in both spatial domains: they were observed only at the spatial coincidence of adapting and test stimuli and were absent at their separation. The aftereffects observed in the two spatial domains were similar in direction and value: the listeners estimated the test stimuli more often as withdrawing as compared to control. The result of such aftereffect was restoration of the symmetry of the psychometric curves and of the equiprobable estimation of the direction of movement of the test signals.
When Does Between-Sequence Phonological Similarity Promote Irrelevant Sound Disruption?
ERIC Educational Resources Information Center
Marsh, John E.; Vachon, Francois; Jones, Dylan M.
2008-01-01
Typically, the phonological similarity between to-be-recalled items and to-be-ignored (TBI) auditory stimuli has no impact if recall in serial order is required. However, in the present study, the authors have shown that the free recall, but not serial recall, of lists of phonologically related to-be-remembered items was disrupted by an irrelevant sound stream…
The use of an intraoral electrolarynx for an edentulous patient: a clinical report.
Wee, Alvin G; Wee, Lisa A; Cheng, Ansgar C; Cwynar, Roger B
2004-06-01
This clinical report describes the clinical requirements, treatment sequence, and use of a relatively new intraoral electrolarynx for a completely edentulous patient. This device consists of a sound source attached to the maxilla and a hand-held controller unit that controls the pitch and volume of the intraoral sound source via transmitted radio waves.
Cavusoglu, M; Ciloglu, T; Serinagaoglu, Y; Kamasak, M; Erogul, O; Akcam, T
2008-08-01
In this paper, 'snore regularity' is studied in terms of the variations of snoring sound episode durations, separations and average powers in simple snorers and in obstructive sleep apnoea (OSA) patients. The goal was to explore the possibility of distinguishing among simple snorers and OSA patients using only sleep sound recordings of individuals and to ultimately eliminate the need for spending a whole night in the clinic for polysomnographic recording. Sequences that contain snoring episode durations (SED), snoring episode separations (SES) and average snoring episode powers (SEP) were constructed from snoring sound recordings of 30 individuals (18 simple snorers and 12 OSA patients) who were also under polysomnographic recording in Gülhane Military Medical Academy Sleep Studies Laboratory (GMMA-SSL), Ankara, Turkey. Snore regularity is quantified in terms of mean, standard deviation and coefficient of variation values for the SED, SES and SEP sequences. In all three of these sequences, OSA patients' data displayed a higher variation than those of simple snorers. To exclude the effects of slow variations in the base-line of these sequences, new sequences that contain the coefficient of variation of the sample values in a 'short' signal frame, i.e., short time coefficient of variation (STCV) sequences, were defined. The mean, the standard deviation and the coefficient of variation values calculated from the STCV sequences displayed a stronger potential to distinguish among simple snorers and OSA patients than those obtained from the SED, SES and SEP sequences themselves. Spider charts were used to jointly visualize the three parameters, i.e., the mean, the standard deviation and the coefficient of variation values of the SED, SES and SEP sequences, and the corresponding STCV sequences as two-dimensional plots. Our observations showed that the statistical parameters obtained from the SED and SES sequences, and the corresponding STCV sequences, possessed a strong potential to distinguish among simple snorers and OSA patients, both marginally, i.e., when the parameters are examined individually, and jointly. The parameters obtained from the SEP sequences and the corresponding STCV sequences, on the other hand, did not have a strong discrimination capability. However, the joint behaviour of these parameters showed some potential to distinguish among simple snorers and OSA patients.
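A minimal sketch of the two regularity measures described above; the sliding-frame length and the toy episode-duration data are assumptions:

```python
# Coefficient of variation (CV) of a snoring-episode sequence, plus a
# short-time CV (STCV) sequence computed over a sliding frame to suppress
# slow baseline drift, as described in the abstract.
import numpy as np

def cv(x):
    """Coefficient of variation: standard deviation over mean."""
    x = np.asarray(x, dtype=float)
    return x.std() / x.mean()

def stcv(x, frame=10):
    """CV within each sliding frame of the sequence (frame length assumed)."""
    x = np.asarray(x, dtype=float)
    return np.array([cv(x[i:i + frame]) for i in range(len(x) - frame + 1)])

rng = np.random.default_rng(3)
durations = rng.lognormal(mean=0.0, sigma=0.3, size=200)   # toy SED sequence
print(cv(durations), stcv(durations).mean(), stcv(durations).std())
```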
Radford, Craig A; Ghazali, Shahriman M; Montgomery, John C; Jeffs, Andrew G
2016-01-01
Fish vocalisation is often a major component of underwater soundscapes. Therefore, interpretation of these soundscapes requires an understanding of the vocalisation characteristics of common soniferous fish species. This study of captive female bluefin gurnard, Chelidonichthys kumu, aims to formally characterise their vocalisation sounds and daily pattern of sound production. Four types of sound were produced and characterised, twice as many as previously reported in this species. These sounds fit two aural categories: grunt and growl, the mean peak frequencies for which ranged between 129 and 215 Hz. This species vocalized throughout the 24 hour period at an average rate of 18.5 ± 2.0 sounds fish⁻¹ h⁻¹, with an increase in vocalization rate at dawn and dusk. Competitive feeding did not elevate vocalisation as has been found in other gurnard species. Bluefin gurnard are common in coastal waters of New Zealand, Australia and Japan and, given their vocalization rate, are likely to be significant contributors to the ambient underwater soundscape in these areas. PMID:26890124
Martorana, Davide; Bonatti, Francesco; Mozzoni, Paola; Vaglio, Augusto; Percesepe, Antonio
2017-01-01
Autoinflammatory diseases (AIDs) are a genetically heterogeneous group of diseases caused by mutations of genes encoding proteins, which play a pivotal role in the regulation of the inflammatory response. In the pathogenesis of AIDs, the role of the genetic background is triggered by environmental factors through the modulation of the innate immune system. Monogenic AIDs are characterized by Mendelian inheritance and are caused by highly penetrant genetic variants in single genes. During the last years, remarkable progress has been made in the identification of disease-associated genes by using new technologies, such as next-generation sequencing, which has allowed the genetic characterization in undiagnosed patients and in sporadic cases by means of targeted resequencing of a gene panel and whole exome sequencing. In this review, we delineate the genetics of the monogenic AIDs, report the role of the most common gene mutations, and describe the evidences of the most sound genotype/phenotype correlations in AID. PMID:28421071
Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization
2018-01-01
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it. PMID:29440556
Silver, bighead, and common carp orient to acoustic particle motion when avoiding a complex sound.
Zielinski, Daniel P; Sorensen, Peter W
2017-01-01
Behavioral responses of silver carp (Hypophthalmichthys molitrix), bighead carp (H. nobilis), and common carp (Cyprinus carpio) to a complex, broadband sound were tested in the absence of visual cues to determine whether these species are negatively phonotaxic and the roles that sound pressure and particle motion might play in mediating this response. In a dark, featureless square enclosure, groups of 3 fish were tracked, and the distance of each fish from the speakers and their swimming trajectories relative to sound pressure and particle acceleration were analyzed before, and then while, an outboard motor sound was played. All three species exhibited negative phonotaxis during the first two exposures, after which they ceased responding. The median percent time fish spent near the active speaker for the first two trials decreased from 7.0% to 1.3% for silver carp, 7.9% to 1.1% for bighead carp, and 9.5% to 3% for common carp. Notably, when close to the active speaker, fish swam away from the source and maintained a nearly perfect 0° orientation to the axes of particle acceleration. Fish did not enter sound fields greater than 140 dB (ref. 1 μPa). These results demonstrate that carp avoid complex sounds in darkness, and while initial responses may be informed by sound pressure, sustained oriented avoidance behavior is likely mediated by particle motion. This understanding of how invasive carp use particle motion to guide avoidance could be used to design new acoustic deterrents to divert them in dark, turbid river waters. PMID:28654676
Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah
2018-01-01
Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound, but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children. PMID:29674986
Kuriki, Shinya; Kobayashi, Yusuke; Kobayashi, Takanari; Tanaka, Keita; Uchikawa, Yoshinori
2013-02-01
The auditory steady-state response (ASSR) is a weak potential or magnetic response elicited by periodic acoustic stimuli, with a maximum response at about a 40-Hz periodicity. In most previous studies using amplitude-modulated (AM) tones as the stimulus sound, long-lasting tones of more than 10 s in length were used. However, the characteristics of the ASSR elicited by short AM tones have remained unclear. In this study, we examined the magnetoencephalographic (MEG) ASSR using a sequence of sinusoidal AM tones of 0.78 s in length with various tone frequencies of 440-990 Hz, about one octave of variation. It was found that the amplitude of the ASSR was invariant with tone frequency when the level of sound pressure was adjusted along an equal-loudness curve. The amplitude also did not depend on the existence of a preceding tone or a difference in frequency of the preceding tone. When the sound level of the AM tones was changed with tone frequency in the same range of 440-990 Hz, the amplitude of the ASSR varied in proportion to the sound level. These characteristics are favorable for the use of the ASSR in studying temporal processing of auditory information in the auditory cortex. The lack of adaptation in the ASSR elicited by a sequence of short tones may be ascribed to the neural activity of the widely accepted generator of the magnetic ASSR in the primary auditory cortex. Copyright © 2012 Elsevier B.V. All rights reserved.
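A minimal sketch of one such stimulus, assuming a 40-Hz modulation rate (the periodicity the abstract cites as maximally effective) and full modulation depth:

```python
# Generate a 0.78-s sinusoidal AM tone: a carrier in the 440-990 Hz range
# modulated at 40 Hz with 100% depth (depth and modulation rate assumed).
import numpy as np

fs = 44100                       # audio sample rate
dur, fc, fm = 0.78, 440.0, 40.0  # duration (s), carrier (Hz), modulator (Hz)
t = np.arange(int(fs * dur)) / fs
am_tone = 0.5 * (1 + np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
```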
The Riggs Institute: What We Teach.
ERIC Educational Resources Information Center
McCulloch, Myrna
Phonetic content/handwriting instruction begins by teaching the sounds of, and letter formation for the 70 "Orton" phonograms which are the commonly-used correct spelling patterns for the 45 sounds of English speech. The purpose for teaching the sound/symbol relationship first in isolation, without key words or pictures (explicitly), is to give…
ERIC Educational Resources Information Center
Martini, Harry R.
Black and white filmstrips that reproduced still pictures and sound track from educational television broadcasts were used to study the effectiveness of ETV reproductions in aiding poor achievers. The specific advantage of such a reproduction was that it could be paced to the learning tempo of the students rather than using the too-fast pace of a…
Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal
Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.
2015-01-01
Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037
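The 60-μs figure can be sanity-checked with the usual travel-time relation, assuming an effective interaural path of about 2 cm and the speed of sound in air; both assumed values are illustrative:

```latex
% Maximum interaural time difference from path length d and sound speed c:
\Delta t_{\max} \;=\; \frac{d}{c}
  \;\approx\; \frac{0.02\,\mathrm{m}}{343\,\mathrm{m\,s^{-1}}}
  \;\approx\; 58\,\mu\mathrm{s}
```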
NASA Astrophysics Data System (ADS)
Sokolov, S. Yu.; Moroz, E. A.; Abramova, A. S.; Zarayskaya, Yu. A.; Dobrolubova, K. O.
2017-07-01
On cruises 25 (2007) and 28 (2011) of the R/V Akademik Nikolai Strakhov in the northern part of the Barents Sea, the Geological Institute, Russian Academy of Sciences, conducted comprehensive research on the bottom relief and the upper part of the sedimentary cover profile under the auspices of the International Polar Year program. One of the instrument components was the SeaBat 8111 shallow-water multibeam echo sounder, which can map the acoustic field much like a side-scan sonar, recording the response both from the bottom and from the water column. In the operations area, intense sound-scattering objects produced by the discharge of deep fluid flows were detected in the water column. The sound-scattering objects and pockmarks in the bottom relief are related to anomalies in hydrocarbon gas concentrations in bottom sediments. The sound-scattering objects are localized over Triassic sequences outcropping from the bottom. The most intense degassing processes manifest themselves near the contact of the Triassic sequences and Jurassic clay deposits, as well as over deep depressions in a field of Bouguer anomalies related to the basement of the Jurassic-Cretaceous rift system.
Recent paleoseismicity record in Prince William Sound, Alaska, USA
NASA Astrophysics Data System (ADS)
Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.
2017-12-01
Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily the Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year⁻¹), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinary high-resolution record of paleoseismicity in the region.
Static hand gesture recognition from a video
NASA Astrophysics Data System (ADS)
Rokade, Rajeshree S.; Doye, Dharmpal
2011-10-01
A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning: "simultaneously combining hand shapes, orientation and movement of the hands". Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for recognition of static hand gestures from a video, based on a Kohonen neural network. We propose an algorithm to separate out key frames, which include correct gestures, from a video sequence. We segment hand images from complex and nonuniform backgrounds. Features are extracted by applying the Kohonen network to key frames, and recognition is then performed.
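A toy sketch of the Kohonen (self-organizing map) stage described above, assuming pre-extracted feature vectors; the grid size, learning schedule, and data are illustrative, and key-frame selection and hand segmentation are omitted:

```python
# Classic SOM: feature vectors are mapped onto a small grid of units, and
# the winning unit's coordinates can act as a gesture code.
import numpy as np

def winner(weights, x):
    """Grid coordinates of the unit whose weight vector is closest to x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def train(weights, data, epochs=20, lr0=0.5, sigma0=1.5):
    """SOM update: pull the winner and its grid neighbors toward each x."""
    grid_h, grid_w, _ = weights.shape
    ii, jj = np.mgrid[0:grid_h, 0:grid_w]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 1e-3     # shrinking neighborhood
        for x in data:
            wi, wj = winner(weights, x)
            h = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

rng = np.random.default_rng(1)
som = train(rng.normal(size=(4, 4, 64)), rng.normal(size=(100, 64)))
print(winner(som, rng.normal(size=64)))   # grid coordinates = gesture code
```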
3D simulation of an audible ultrasonic electrolarynx using difference waves.
Mills, Patrick; Zara, Jason
2014-01-01
A total laryngectomy removes the vocal folds, which are fundamental in forming the voiced sounds that make speech possible. Although implanted prosthetics are commonly used in developed countries, simple handheld vibrating electrolarynxes are still common worldwide. These devices are easy to use but suffer from many drawbacks, including dedication of a hand, a mechanical-sounding voice, and sound leakage. To address some of these drawbacks, we introduce a novel electrolarynx that uses vibro-acoustic interference of dual ultrasonic waves to generate an audible fundamental frequency. A 3D simulation of the principles of the device is presented in this paper.
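The difference-wave principle the abstract alludes to can be stated compactly; the example frequencies below are assumptions, not values taken from the simulation:

```latex
% Two inaudible ultrasonic components at f_1 and f_2 interact to yield an
% audible component at the difference frequency:
f_{\mathrm{audible}} \;=\; \lvert f_1 - f_2 \rvert,
\qquad \text{e.g.}\ \lvert 42\,\mathrm{kHz} - 41.8\,\mathrm{kHz} \rvert = 200\,\mathrm{Hz}
```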
Common humpback whale (Megaptera novaeangliae) sound types for passive acoustic monitoring.
Stimpert, Alison K; Au, Whitlow W L; Parks, Susan E; Hurst, Thomas; Wiley, David N
2011-01-01
Humpback whales (Megaptera novaeangliae) are one of several baleen whale species in the Northwest Atlantic that coexist with vessel traffic and anthropogenic noise. Passive acoustic monitoring strategies can be used in conservation management, but the first step toward understanding the acoustic behavior of a species is a good description of its acoustic repertoire. Digital acoustic tags (DTAGs) were placed on humpback whales in the Stellwagen Bank National Marine Sanctuary to record and describe the non-song sounds being produced in conjunction with foraging activities. Peak frequencies of sounds were generally less than 1 kHz, but ranged as high as 6 kHz, and sounds were generally less than 1 s in duration. Cluster analysis distilled the dataset into eight groups of sounds with similar acoustic properties. The two most stereotyped and distinctive types ("wops" and "grunts") were also identified aurally as candidates for use in passive acoustic monitoring. This identification of two of the most common sound types will be useful for moving forward conservation efforts on this Northwest Atlantic feeding ground.
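As an illustration of the kind of repertoire clustering described here, the sketch below groups sounds by two simple acoustic features using k-means; the feature choice, k = 8 (matching the eight reported groups), and the toy data are assumptions, and the study's own cluster-analysis details may differ:

```python
# Cluster detected sounds by [peak frequency, duration] with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Toy features per detected sound, spanning the ranges reported above;
# real features would be measured from the tag recordings.
feats = np.column_stack([rng.uniform(50, 6000, 300),    # peak frequency (Hz)
                         rng.uniform(0.05, 1.0, 300)])  # duration (s)
feats_z = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # standardize

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(feats_z)
print(np.bincount(labels))   # number of sounds per cluster
```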
Hear it, See it, Explore it: Visualizations and Sonifications of Seismic Signals
NASA Astrophysics Data System (ADS)
Fisher, M.; Peng, Z.; Simpson, D. W.; Kilb, D. L.
2010-12-01
Sonification of seismic data is an innovative way to represent seismic data in the audible range (Simpson, 2005). Seismic waves with different frequency and temporal characteristics, such as those from teleseismic earthquakes, deep “non-volcanic” tremor and local earthquakes, can be easily discriminated when time-compressed to the audio range. Hence, sonification is particularly useful for presenting complicated seismic signals with multiple sources, such as aftershocks within the coda of large earthquakes, and remote triggering of earthquakes and tremor by large teleseismic earthquakes. Previous studies mostly focused on converting the seismic data into audible files by simple time compression or frequency modulation (Simpson et al., 2009). Here we generate animations of the seismic data together with the sounds. We first read seismic data in the SAC format into Matlab and generate a sequence of image files and an associated WAV sound file. Next, we use a third-party video editor, such as QuickTime Pro, to combine the image sequences and the sound file into an animation. We have applied this simple procedure to generate animations of remotely triggered earthquakes, tremor and low-frequency earthquakes in California, and mainshock-aftershock sequences in Japan and California. These animations clearly demonstrate the interactions of earthquake sequences and the richness of the seismic data. The tool developed in this study can be easily adapted for use in other research applications and to create sonifications/animations of seismic data for education and outreach purposes.
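A rough Python transposition of the audio half of this pipeline (the authors used Matlab); the SAC file name and compression factor are assumptions, and the image-sequence and video-editing steps are omitted:

```python
# Read a SAC trace (here via ObsPy), time-compress it into the audible range
# by writing it out at a higher sample rate, and save a WAV file.
import numpy as np
from obspy import read
from scipy.io import wavfile

st = read("example.sac")               # hypothetical SAC file
tr = st[0]
speedup = 200                          # e.g., 100 Hz seismic -> 20 kHz audio

data = tr.data.astype(np.float64)
data /= np.max(np.abs(data))           # normalize to [-1, 1]
audio = (data * 32767).astype(np.int16)
wavfile.write("example.wav", int(tr.stats.sampling_rate * speedup), audio)
```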
Chemotherapy as language: sound symbolism in cancer medication names.
Abel, Gregory A; Glinert, Lewis H
2008-04-01
The concept of sound symbolism proposes that even the tiniest sounds comprising a word may suggest the qualities of the object which that word represents. Cancer-related medication names, which are likely to be charged with emotional meaning for patients, might be expected to contain such sound-symbolic associations. We analyzed the sounds in the names of 60 frequently-used cancer-related medications, focusing on the medications' trade names as well as the names (trade or generic) commonly used in the clinic. We assessed the frequency of common voiced consonants (/b/, /d/, /g/, /v/, /z/; thought to be associated with slowness and heaviness) and voiceless consonants (/p/, /t/, /k/, /f/, /s/; thought to be associated with fastness and lightness), and compared them to what would be expected in standard American English using a reference dataset. A Fisher's exact test for independence showed the chemotherapy consonantal frequencies to be significantly different from standard English (p=0.009 for trade; p<0.001 for "common usage"). For the trade names, the majority of the voiceless consonants were significantly increased compared to standard English; this effect was more pronounced with the "common usage" names (for the group, O/E=1.62; 95% CI [1.37, 1.89]). Hormonal and targeted therapy trade names showed the greatest frequency of voiceless consonants (for the group, O/E=1.76; 95% CI [1.20, 2.49]). Our results suggest that taken together, the names of chemotherapy medications contain an increased frequency of certain sounds associated with lightness, smallness and fastness. This finding raises important questions about the possible role of the names of medications in the experiences of cancer patients and providers.
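A minimal sketch of this kind of tally, using letters as a crude stand-in for the phoneme counts the study derived from pronunciations; the toy name list and reference counts are assumptions, not the study data:

```python
# Count voiceless vs. voiced stop/fricative letters in a list of names and
# compare the split against a reference split with Fisher's exact test.
from scipy.stats import fisher_exact

VOICED = set("bdgvz")
VOICELESS = set("ptkfs")

def tally(names):
    letters = "".join(names).lower()
    return (sum(c in VOICELESS for c in letters),
            sum(c in VOICED for c in letters))

observed = tally(["paclitaxel", "cisplatin", "tamoxifen"])   # toy names
reference = (500, 450)   # hypothetical voiceless/voiced counts in English
odds_ratio, p = fisher_exact([list(observed), list(reference)])
print(observed, round(odds_ratio, 2), round(p, 3))
```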
Sound spectrum of a pulsating optical discharge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grachev, G N; Smirnov, A L; Tishchenko, V N
A spectrum of the sound of an optical discharge generated by repetitively pulsed (RP) laser radiation has been investigated. The parameters of laser radiation are determined at which the spectrum of sound may contain either many lines, or the main line at the pulse repetition rate and several weaker overtones, or a single line. The spectrum of sound produced by trains of RP radiation comprises the line (and overtones) at the repetition rate of train sequences and the line at the repetition rate of pulses in trains. A CO₂ laser with a pulse repetition rate of f ≈ 3–180 kHz and an average power of up to 2 W was used in the experiments.
Information entropy of humpback whale songs.
Suzuki, Ryuji; Buck, John R; Tyack, Peter L
2006-03-01
The structure of humpback whale (Megaptera novaeangliae) songs was examined using information theory techniques. The song is an ordered sequence of individual sound elements separated by gaps of silence. Song samples were converted into sequences of discrete symbols by both human and automated classifiers. This paper analyzes the song structure in these symbol sequences using information entropy estimators and autocorrelation estimators. Both parametric and nonparametric entropy estimators are applied to the symbol sequences representing the songs. The results provide quantitative evidence consistent with the hierarchical structure proposed for these songs by Payne and McVay [Science 173, 587-597 (1971)]. Specifically, this analysis demonstrates that: (1) There is a strong structural constraint, or syntax, in the generation of the songs, and (2) the structural constraints exhibit periodicities with periods of 6-8 and 180-400 units. This implies that no empirical Markov model is capable of representing the songs' structure. The results are robust to the choice of either human or automated song-to-symbol classifiers. In addition, the entropy estimates indicate that the maximum amount of information that could be communicated by the sequence of sounds made is less than 1 bit per second.
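As a concrete illustration of the nonparametric approach, the sketch below computes a plug-in (zeroth-order) Shannon entropy estimate for a symbol sequence; the symbols are invented stand-ins for the classified song units.

```python
# A minimal plug-in (nonparametric) entropy estimate of the kind applied to
# the song symbol sequences; the symbols here are invented stand-ins.
from collections import Counter
import math

sequence = list("abcabcabdabcabe")  # hypothetical song units

n = len(sequence)
entropy = -sum((c / n) * math.log2(c / n) for c in Counter(sequence).values())
print(f"zeroth-order entropy ≈ {entropy:.3f} bits per unit")
```

Higher-order estimates condition on preceding units, which is how periodic structural constraints like those reported here reveal themselves as entropy reductions at long lags.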
Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.
Bolle, Loes J; de Jong, Christ A F; Bierman, Stijn M; van Beek, Pieter J G; van Keeken, Olvin A; Wessels, Peter W; van Damme, Cindy J G; Winter, Hendrik V; de Haan, Dick; Dekeling, René P A
2012-01-01
In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero-to-peak pressure levels up to 210 dB re 1 µPa² (zero-to-peak pressures up to 32 kPa) and single-pulse sound exposure levels up to 186 dB re 1 µPa²·s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²·s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
Reilly, Kevin J.; Spencer, Kristie A.
2013-01-01
The current study investigated the processes responsible for selection of sounds and syllables during production of speech sequences in 10 adults with hypokinetic dysarthria from Parkinson’s disease, five adults with ataxic dysarthria, and 14 healthy control speakers. Speech production data from a choice reaction time task were analyzed to evaluate the effects of sequence length and practice on speech sound sequencing. Speakers produced sequences that were between one and five syllables in length over five experimental runs of 60 trials each. In contrast to the healthy speakers, speakers with hypokinetic dysarthria demonstrated exaggerated sequence length effects for both inter-syllable intervals (ISIs) and speech error rates. Conversely, speakers with ataxic dysarthria failed to demonstrate a sequence length effect on ISIs and were also the only group that did not exhibit practice-related changes in ISIs and speech error rates over the five experimental runs. The exaggerated sequence length effects in the hypokinetic speakers with Parkinson’s disease are consistent with an impairment of action selection during speech sequence production. The absent length effects observed in the speakers with ataxic dysarthria are consistent with previous findings that indicate a limited capacity to buffer speech sequences in advance of their execution. In addition, the lack of practice effects in these speakers suggests that learning-related improvements in the production rate and accuracy of speech sequences involve processing by structures of the cerebellum. Together, the current findings inform models of serial control for speech in healthy speakers and support the notion that sequencing deficits contribute to speech symptoms in speakers with hypokinetic or ataxic dysarthria. In addition, these findings indicate that speech sequencing is differentially impaired in hypokinetic and ataxic dysarthria. PMID:24137121
The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes
ERIC Educational Resources Information Center
Gygi, Brian; Shafiro, Valeriy
2011-01-01
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five…
Increased Activation in Superior Temporal Gyri as a Function of Increment in Phonetic Features
ERIC Educational Resources Information Center
Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten
2011-01-01
A common assumption is that phonetic sounds initiate unique processing in the superior temporal gyri and sulci (STG/STS). The anatomical areas subserving these processes are also implicated in the processing of non-phonetic stimuli such as music instrument sounds. The differential processing of phonetic and non-phonetic sounds was investigated in…
Motor-Based Treatment with and without Ultrasound Feedback for Residual Speech-Sound Errors
ERIC Educational Resources Information Center
Preston, Jonathan L.; Leece, Megan C.; Maas, Edwin
2017-01-01
Background: There is a need to develop effective interventions and to compare the efficacy of different interventions for children with residual speech-sound errors (RSSEs). Rhotics (the r-family of sounds) are frequently in error in American English-speaking children with RSSEs and are commonly targeted in treatment. One treatment approach involves…
Pupils Think Sound Has Substance--Well, Sort of ...
ERIC Educational Resources Information Center
Whittaker, Andrew G.
2012-01-01
Physics is a subject where pupils hold a great number of deeply seated misconceptions. Sound is a prime example, as it requires the visualisation of a form of energy that moves imperceptibly through an invisible medium. This article outlines some of the common misconceptions that pupils hold regarding the nature of sound and how it is transmitted,…
Experiments to investigate the acoustic properties of sound propagation
NASA Astrophysics Data System (ADS)
Dagdeviren, Omur E.
2018-07-01
Propagation of sound waves is one of the fundamental concepts in physics. Some properties of sound propagation, such as the attenuation of sound intensity with increasing distance, are familiar to everyone from daily life. However, the frequency dependence of sound propagation and the effect of acoustics in confined environments are not straightforward to estimate. In this article, we propose experiments that can be conducted in a classroom environment with commonly available devices such as smartphones and laptops to measure the sound intensity level as a function of the distance between the source and the observer and of the frequency of the sound. Our experiments, and their deviations from the theoretical calculations, can be used to explain basic concepts of sound propagation and acoustics to a diverse population of students.
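For instructors who want a reference curve, the sketch below computes the free-field spherical-spreading prediction that the classroom measurements can be compared against; the reference level and distances are made up.

```python
# Free-field spherical-spreading prediction; the 1 m reference level is a
# hypothetical reading, not data from the article.
import math

L_ref, r_ref = 80.0, 1.0  # dB at 1 m (hypothetical)
for r in (1.0, 2.0, 4.0, 8.0):
    L = L_ref - 20 * math.log10(r / r_ref)  # -6 dB per doubling of distance
    print(f"r = {r:.0f} m -> predicted level ≈ {L:.1f} dB")
```

Deviations of classroom data from this idealised curve are exactly what reveal room reflections and frequency-dependent effects.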
The role of long-term familiarity and attentional maintenance in short-term memory for timbre.
Siedenburg, Kai; McAdams, Stephen
2017-04-01
We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched one of three previously presented sounds (item recognition). In Exp. 1, musicians recognised familiar acoustic sounds better than unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between the correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance under concurrent articulatory suppression, visual interference, and a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of the participants' musical training or the type of sounds. Our results suggest that familiarity with sound-source categories and attention play important roles in short-term memory for timbre, which rules out accounts based solely on sensory persistence.
Optimum employment of satellite indirect soundings as numerical model input
NASA Technical Reports Server (NTRS)
Horn, L. H.; Derber, J. C.; Koehler, T. L.; Schmidt, B. D.
1981-01-01
The characteristics of satellite-derived temperature soundings that would significantly affect their use as input for numerical weather prediction models were examined. Independent evaluations of satellite soundings were emphasized to better define error characteristics. Results of a Nimbus-6 sounding study reveal an underestimation of the strength of synoptic scale troughs and ridges, and associated gradients in isobaric height and temperature fields. The most significant errors occurred near the Earth's surface and the tropopause. Soundings from the TIROS-N and NOAA-6 satellites were also evaluated. Results again showed an underestimation of upper level trough amplitudes leading to weaker thermal gradient depictions in satellite-only fields. These errors show a definite correlation to the synoptic flow patterns. In a satellite-only analysis used to initialize a numerical model forecast, it was found that these synoptically correlated errors were retained in the forecast sequence.
Quarto, Tiziana; Blasi, Giuseppe; Pallesen, Karen Johanne; Bertolino, Alessandro; Brattico, Elvira
2014-01-01
The ability to recognize emotions contained in facial expressions is affected by both affective traits and states and varies widely between individuals. While affective traits are stable in time, affective states can be regulated more rapidly by environmental stimuli, such as music, that indirectly modulate the brain state. Here, we tested whether a relaxing or irritating sound environment affects implicit processing of facial expressions. Moreover, we investigated whether and how individual traits of anxiety and emotional control interact with this process. Thirty-two healthy subjects performed an implicit emotion processing task (presented to subjects as a gender discrimination task) while the sound environment was defined either by (a) a therapeutic music sequence (MusiCure), (b) a noise sequence or (c) silence. Individual changes in mood were sampled before and after the task by a computerized questionnaire. Additionally, emotional control and trait anxiety were assessed in a separate session by paper and pencil questionnaires. Results showed a better mood after the MusiCure condition compared with the other experimental conditions and faster responses to happy faces during MusiCure compared with angry faces during Noise. Moreover, individuals with higher trait anxiety were faster in performing the implicit emotion processing task during MusiCure compared with Silence. These findings suggest that sound-induced affective states are associated with differential responses to angry and happy emotional faces at an implicit stage of processing, and that a relaxing sound environment facilitates implicit emotional processing in anxious individuals. PMID:25072162
An Inexpensive and Versatile Version of Kundt's Tube for Measuring the Speed of Sound in Air
NASA Astrophysics Data System (ADS)
Papacosta, Pangratios; Linscheid, Nathan
2016-01-01
Experiments that measure the speed of sound in air are common in high schools and colleges. In the Kundt's tube experiment, a horizontal air column is adjusted until a resonance mode is achieved for a specific frequency of sound. When this happens, the cork dust in the tube is disturbed at the displacement antinode regions. The location of the displacement antinodes enables the measurement of the wavelength of the sound that is being used. This paper describes a design that uses a speaker instead of the traditional aluminum rod as the sound source. This allows the use of multiple sound frequencies that yield a much more accurate speed of sound in air.
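A worked example of the underlying calculation, with plausible made-up readings rather than data from the paper:

```python
# Worked example: speed of sound from antinode spacing in a Kundt's tube.
# Frequency and dust-pile spacing are hypothetical classroom readings.
frequency = 2000.0        # Hz, driven by the speaker
antinode_spacing = 0.086  # m between adjacent dust ridges

wavelength = 2 * antinode_spacing  # antinodes sit half a wavelength apart
print(f"v = f * lambda = {frequency * wavelength:.0f} m/s")  # ~344 m/s
```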
Perception of Long-Period Complex Sounds
1989-11-27
Richard M. Warren, AFOSR Grant No. 88-0320. References include: Guttman, N. & Julesz, B. (1963). Lower limits of auditory periodicity analysis. Journal of the Acoustical Society of America ... order within auditory sequences. Perception & Psychophysics, 12, 86-90. Watson, C.S. (1987). Uncertainty, informational masking, and the capacity of immediate memory. In W.A. Yost & C.S. Watson (Eds.), Auditory Processing of Complex Sounds. New Jersey: Lawrence Erlbaum Associates, pp. 267-277.
NASA Astrophysics Data System (ADS)
Shi, Lei; Yao, Bo; Zhao, Lei; Liu, Xiaotong; Yang, Min; Liu, Yanming
2018-01-01
The plasma sheath surrounding a hypersonic vehicle is a dynamic, time-varying medium, and it is almost impossible to calculate its time-varying physical parameters directly. In-flight detection of the degree of time variation is important for understanding the dynamic nature of the physical parameters and their effect on re-entry communication. In this paper, a constant envelope zero autocorrelation (CAZAC) sequence-based time-varying frequency detection and channel sounding method is proposed to detect the time-varying electron density of the plasma sheath and the wireless channel characteristics. The proposed method exploits the excellent autocorrelation and spreading-gain properties of the CAZAC sequence to realize dynamic time-variation detection and channel sounding at low signal-to-noise ratio in the plasma sheath environment. Simulation under a typical time-varying radio channel shows that the proposed method can detect time-variation frequencies up to 200 kHz and can trace the channel amplitude and phase in the time domain well at -10 dB. Experiments conducted in an RF modulated discharge plasma device verified the time-variation detection ability in a practical dynamic plasma sheath. Moreover, a nonlinear effect of the dynamic plasma sheath on the communication signal is observed through the channel sounding results.
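One standard CAZAC family is the Zadoff-Chu sequence; the sketch below verifies the constant-envelope and zero-autocorrelation properties the method relies on. The length and root are illustrative choices, not parameters from the paper.

```python
# Zadoff-Chu sequence, one standard CAZAC family; N and u are assumptions.
import numpy as np

N, u = 353, 7  # odd prime length, root coprime to N
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)

print("constant envelope:", np.allclose(np.abs(zc), 1.0))

# Circular autocorrelation: a single peak at lag 0, essentially 0 elsewhere
acf = np.fft.ifft(np.fft.fft(zc) * np.conj(np.fft.fft(zc)))
print("max off-peak sidelobe:", np.abs(acf[1:]).max())
```

It is this impulse-like autocorrelation that lets a received copy of the probe be located, and the channel's amplitude and phase traced, even under heavy noise.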
Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis
Fletcher, Phillip D.; Downey, Laura E.; Golden, Hannah L.; Clark, Camilla N.; Slattery, Catherine F.; Paterson, Ross W.; Schott, Jonathan M.; Rohrer, Jonathan D.; Rossor, Martin N.; Warren, Jason D.
2015-01-01
Patients with dementia may exhibit abnormally altered liking for environmental sounds and music but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music (‘musicophilia’) occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD, whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717
Types of Post-Treatment Issues
... a dentist can help with these issues. Tinnitus is the perception of sound when no actual sound is present. Commonly referred to as “ringing in the ears,” tinnitus can manifest as many different perceptions of sound, including ...
... sound different from the way it normally sounds. Causes: Some of these disorders develop gradually, but anyone can develop a speech and language impairment suddenly, usually after a trauma. APHASIA: Alzheimer disease; brain tumor (more common in aphasia than ...
New Stethoscope With Extensible Diaphragm.
Takashina, Tsunekazu; Shimizu, Masashi; Muratake, Torakazu; Mayuzumi, Syuichi
2016-08-25
This study compared the diagnostic efficacy of the common suspended diaphragm stethoscope (SDS) with a new extensible diaphragm stethoscope (EDS) for low-frequency heart sounds. The EDS was developed by using an ethylene propylene diene monomer diaphragm. The results showed that the EDS enhanced both the volume and quality of low-frequency heart sounds, and improved the ability of examiners to auscultate such heart sounds. Based on the results of the sound analysis, the EDS is more efficient than the SDS. (Circ J 2016; 80: 2047-2049).
ERIC Educational Resources Information Center
AGARD, FREDERICK B.; DI PIETRO, ROBERT J.
Designed as a source of information for professionals preparing instructional materials, planning courses, or developing classroom techniques for foreign language programs, a series of studies has been prepared that contrasts, in two volumes for each of the five most commonly taught foreign languages in the United States, the sound and grammatical…
ERIC Educational Resources Information Center
Bremner, Andrew J.; Caparos, Serge; Davidoff, Jules; de Fockert, Jan; Linnell, Karina J.; Spence, Charles
2013-01-01
Western participants consistently match certain shapes with particular speech sounds, tastes, and flavours. Here we demonstrate that the "Bouba-Kiki effect", a well-known shape-sound symbolism effect commonly observed in Western participants, is also observable in the Himba of Northern Namibia, a remote population with little exposure to…
Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments.
Han, Wenjing; Coutinho, Eduardo; Ruan, Huabin; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan
2016-01-01
Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data; such data are often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, while those with higher scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances.
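A schematic single round of the confidence split can be sketched as follows; the model, threshold, and synthetic data are stand-ins rather than the paper's setup.

```python
# Confidence-split round: high-confidence items are self-labeled, the least
# confident go to human annotators. Model, THRESH and data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 8)), rng.integers(0, 2, 40)
X_pool = rng.normal(size=(400, 8))
THRESH = 0.9  # confidence threshold (assumption)

clf = LogisticRegression().fit(X_lab, y_lab)
conf = clf.predict_proba(X_pool).max(axis=1)

auto = conf >= THRESH        # self-training: trust confident machine labels
ask = np.argsort(conf)[:10]  # active learning: query the least confident
print(f"{auto.sum()} auto-labeled, {ask.size} queued for human labeling")
```

Iterating this round, retraining after each batch of human labels, is the hybrid loop the abstract describes.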
Hayes, Michael C.; Hays, Richard; Rubin, Stephen P.; Chase, Dorothy M.; Hallock, Molly; Cook-Tabor, Carrie; Luzier, Christina W.; Moser, Mary L.
2013-01-01
Lamprey populations are in decline worldwide and the status of Pacific lamprey (Entosphenus tridentatus) is a topic of current interest. They and other lamprey species cycle nutrients and serve as prey in riverine ecosystems. To determine the current distribution of Pacific lamprey in major watersheds flowing into Puget Sound, Washington, we sampled lamprey captured during salmonid smolt monitoring that occurred from late winter to mid-summer. We found Pacific lamprey in 12 of 18 watersheds and they were most common in southern Puget Sound watersheds and in watersheds draining western Puget Sound (Hood Canal). Two additional species, western brook lamprey (Lampetra richardsoni) and river lamprey (L. ayresii) were more common in eastern Puget Sound watersheds. Few Pacific lamprey macrophthalmia were found, suggesting that the majority of juveniles migrated seaward during other time periods. In addition, “dwarf” adult Pacific lamprey (< 300 mm) were observed in several watersheds and may represent an alternate life history for some Puget Sound populations. Based on genetic data, the use of visual techniques to identify lamprey ammocoetes as Entosphenus or Lampetra was successful for 97% (34 of 35) of the samples we evaluated.
Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex
Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire
2015-01-01
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269
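As a toy illustration of the transition probabilities manipulated in such stimuli, the snippet below estimates P(next segment | current segment) from a tiny invented lexicon.

```python
# Segment-to-segment transition probabilities from an invented mini-lexicon;
# real stimuli would use phoneme-transcribed corpora.
from collections import Counter

words = ["bat", "bad", "bag", "bit", "bid"]
bigram, first = Counter(), Counter()
for w in words:
    for a, b in zip(w, w[1:]):
        bigram[(a, b)] += 1
        first[a] += 1

for (a, b), c in sorted(bigram.items()):
    print(f"P({b}|{a}) = {c / first[a]:.2f}")  # e.g. P(a|b) = 0.60
```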
NASA Astrophysics Data System (ADS)
Hernández, María Isabel; Couso, Digna; Pintó, Roser
2015-04-01
The study we have carried out aims to characterize 15- to 16-year-old students' learning progressions throughout the implementation of a teaching-learning sequence on the acoustic properties of materials. Our purpose is to better understand students' modeling processes about this topic and to identify how the instructional design and actual enactment influence students' learning progressions. This article presents the design principles that shaped the structure and types of modeling and inquiry activities designed to promote students' development of three conceptual models. Some of these activities are enhanced by the use of ICT, such as sound level meters connected to data-capture systems, which facilitate the measurement of the intensity level of sound emitted by a sound source and transmitted through different materials. Framed within the design-based research paradigm, the study consists of the implementation of the designed teaching sequence with two groups of students (n = 29) in their science classes. The analysis of students' written productions, together with classroom observations of the implementation of the teaching sequence, allowed us to characterize students' development of the conceptual models. Moreover, we could show the influence of different modeling and inquiry activities on students' development of the conceptual models, identifying those that have a major impact on students' modeling processes. Having evidenced different levels of development of each conceptual model, we interpret our results in terms of the attributes of each conceptual model, the distance between students' preliminary mental models and the intended conceptual models, and the instructional design and enactment.
Acoustic Imaging of Snowpack Physical Properties
NASA Astrophysics Data System (ADS)
Kinar, N. J.; Pomeroy, J. W.
2011-12-01
Measurements of snowpack depth, density, structure and temperature have often been conducted by the use of snowpits and invasive measurement devices. Previous research has shown that acoustic waves passing through snow are capable of measuring these properties. An experimental observation device (SAS2, System for the Acoustic Sounding of Snow) was used to autonomously send audible sound waves into the top of the snowpack and to receive and process the waves reflected from the interior and bottom of the snowpack. A loudspeaker and microphone array separated by an offset distance was suspended in the air above the surface of the snowpack. Sound waves produced from a loudspeaker as frequency-swept sequences and maximum length sequences were used as source signals. Up to 24 microphones measured the audible signal from the snowpack. The signal-to-noise ratio was compared between sequences in the presence of environmental noise contributed by wind and reflections from vegetation. Beamforming algorithms were used to reject spurious reflections and to compensate for movement of the sensor assembly during the time of data collection. A custom-designed circuit with digital signal processing hardware implemented an inversion algorithm to relate the reflected sound wave data to snowpack physical properties and to create a two-dimensional image of snowpack stratigraphy. The low power consumption circuit was powered by batteries and through WiFi and Bluetooth interfaces enabled the display of processed data on a mobile device. Acoustic observations were logged to an SD card after each measurement. The SAS2 system was deployed at remote field locations in the Rocky Mountains of Alberta, Canada. Acoustic snow properties data was compared with data collected from gravimetric sampling, thermocouple arrays, radiometers and snowpit observations of density, stratigraphy and crystal structure. Aspects for further research and limitations of the acoustic sensing system are also discussed.
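The probe-signal idea behind this kind of acoustic sounding can be sketched compactly: correlate the received audio with the emitted sweep to recover reflection delays. All parameters below are illustrative, and the simulated return is a single delayed echo rather than a real snowpack response.

```python
# Matched-filter recovery of a reflection delay from a swept-sine probe;
# fs, sweep band and the simulated echo are assumptions, not SAS2 settings.
import numpy as np
from scipy.signal import chirp, fftconvolve

fs = 48000
t = np.arange(0, 1.0, 1 / fs)
probe = chirp(t, f0=100, f1=12000, t1=1.0, method="logarithmic")

# Simulated return: one attenuated, delayed copy plus noise
rx = np.zeros_like(probe)
rx[2400:] += 0.3 * probe[:-2400]  # echo arriving 50 ms after emission
rx += 0.01 * np.random.default_rng(1).normal(size=rx.size)

# Cross-correlation compresses the sweep into a sharp pulse at the echo lag
ir = fftconvolve(rx, probe[::-1], mode="full")
delay = (np.argmax(np.abs(ir)) - (probe.size - 1)) / fs
print(f"estimated two-way travel time ≈ {delay * 1000:.1f} ms")
```

Maximum length sequences serve the same role as the sweep here: both have sharp autocorrelations, so correlation converts the received signal into an estimate of the snowpack's reflection impulse response.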
Sapienza, C M; Crandell, C C; Curtis, B
1999-09-01
Voice problems are a frequent difficulty that teachers experience. Common complaints by teachers include vocal fatigue and hoarseness. One possible explanation for these symptoms is prolonged elevations in vocal loudness within the classroom. This investigation examined the effectiveness of sound-field frequency modulation (FM) amplification on reducing the sound pressure level (SPL) of the teacher's voice during classroom instruction. Specifically, SPL was examined during speech produced in a classroom lecture by 10 teachers with and without the use of sound-field amplification. Results indicated a significant 2.42-dB decrease in SPL with the use of sound-field FM amplification. These data support the use of sound-field amplification in the vocal hygiene regimen recommended to teachers by speech-language pathologists.
On hemispheric differences in evoked potentials to speech stimuli
NASA Technical Reports Server (NTRS)
Galambos, R.; Smith, T. S.; Schulman-Galambos, C.; Osier, H.; Benson, P.
1975-01-01
Subjects were asked to count the number of times a 'target' sound occurred in lists of speech sounds (pa or ba) or pure tones (250 or 600 c/sec) in which one of the sounds (the 'frequent') appeared about four times as often as the target. The responses to both targets and frequents were separately averaged from electrodes at the vertex and at symmetrical left and right parietal locations. The expected sequence of deflections, including P3 waves with about 350 msec latency, was found in the responses to target stimuli. Very little difference was found between the right and left hemispheric responses to speech or pure tones, either frequent or target.
ERIC Educational Resources Information Center
Wagner, Robert W.
This publication contains four film scripts, each comprising from six to eleven short sequences. Each script has a complete shot list and transcript of the soundtrack, which contains narration, interviews, discussions, and synchronous sound from documentary situations. The six sequences in "The Information Explosion" cover the history of…
Lindamood Phonemic Sequencing (LiPS) [R]. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2008
2008-01-01
The Lindamood Phonemic Sequencing (LiPS)[R] program (formerly called the Auditory Discrimination in Depth[R] [ADD] program) is designed to teach students skills to decode words and to identify individual sounds and blends in words. The program is individualized to meet student needs and is often used with students who have learning disabilities or…
USDA-ARS?s Scientific Manuscript database
Genotyping by sequencing allows for large-scale genetic analyses in plant species with no reference genome, but sets the challenge of sound inference in the presence of uncertain genotypes. We report an imputation-based genome-wide association study (GWAS) in reed canarygrass (Phalaris arundinacea L., P...
NASA Technical Reports Server (NTRS)
Diak, George R.; Smith, William L.
1993-01-01
The goals of this research endeavor have been to develop a flexible and relatively complete framework for the investigation of current and future satellite data sources in numerical meteorology. In order to realistically model how satellite information might be used for these purposes, it is necessary that Observing System Simulation Experiments (OSSEs) be as complete as possible. It is therefore desirable that these experiments simulate in entirety the sequence of steps involved in bringing satellite information from the radiance level through product retrieval to a realistic analysis and forecast sequence. In this project we have worked to make this sequence realistic by synthesizing raw satellite data from surrogate atmospheres, deriving satellite products from these data and subsequently producing analyses and forecasts using the retrieved products. The accomplishments made in 1991 are presented. The emphasis was on examining atmospheric soundings and microphysical products which we expect to produce with the launch of the Advanced Microwave Sounding Unit (AMSU), slated for flight in mid 1994.
Airborne sound transmission loss characteristics of wood-frame construction
NASA Astrophysics Data System (ADS)
Rudder, F. F., Jr.
1985-03-01
This report summarizes the available data on the airborne sound transmission loss properties of wood-frame construction and evaluates the methods for predicting the airborne sound transmission loss. The first part of the report comprises a summary of sound transmission loss data for wood-frame interior walls and floor-ceiling construction. Data bases describing the sound transmission loss characteristics of other building components, such as windows and doors, are discussed. The second part of the report presents the prediction of the sound transmission loss of wood-frame construction. Appropriate calculation methods are described both for single-panel and for double-panel construction with sound absorption material in the cavity. With available methods, single-panel construction and double-panel construction with the panels connected by studs may be adequately characterized. Technical appendices are included that summarize laboratory measurements, compare measurement with theory, describe details of the prediction methods, and present sound transmission loss data for common building materials.
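For single panels, predictions of this kind often start from the field-incidence mass law; the sketch below uses the common empirical approximation (surface density m in kg/m², frequency f in Hz), which may differ in detail from the report's exact methods.

```python
# Field-incidence mass-law estimate of single-panel transmission loss.
# The -47 dB constant is the widely quoted empirical approximation; the
# gypsum-board surface density is an assumed value.
import math

def mass_law_tl(f_hz: float, m_kg_m2: float) -> float:
    """Approximate field-incidence transmission loss of a single panel."""
    return 20 * math.log10(f_hz * m_kg_m2) - 47.0

for f in (125, 500, 2000):  # 16 mm gypsum board, roughly 13 kg/m^2
    print(f"{f} Hz: TL ≈ {mass_law_tl(f, 13):.1f} dB")
```

The characteristic 6 dB rise per octave (or per doubling of mass) is the baseline against which double-panel and cavity-absorption improvements are judged.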
Shaping reverberating sound fields with an actively tunable metasurface.
Ma, Guancong; Fan, Xiying; Sheng, Ping; Fink, Mathias
2018-06-26
A reverberating environment is a common complex medium for airborne sound, with familiar examples such as music halls and lecture theaters. The complexity of reverberating sound fields has hindered their meaningful control. Here, by combining acoustic metasurface and adaptive wavefield shaping, we demonstrate the versatile control of reverberating sound fields in a room. This is achieved through the design and the realization of a binary phase-modulating spatial sound modulator that is based on an actively reconfigurable acoustic metasurface. We demonstrate useful functionalities including the creation of quiet zones and hotspots in a typical reverberating environment. Copyright © 2018 the Author(s). Published by PNAS.
Discrete Huygens’ modeling for the characterization of a sound absorbing medium
NASA Astrophysics Data System (ADS)
Chai, L.; Kagawa, Y.
2007-07-01
Based on the equivalence between wave propagation in electrical transmission lines and in acoustic tubes, the authors proposed transmission-line matrix (TLM) modeling as a time-domain solution method for sound fields. TLM, well known in the electromagnetic engineering community, is equivalent to discrete Huygens' modeling. Wave propagation is simulated by tracing sequences of the transmission and scattering of impulses. The theory and demonstration examples are presented in the references, in which a sound-absorbing field was preliminarily treated as a medium with a simple acoustic resistance independent of frequency and angle of incidence for an absorbing layer placed on a room wall surface. The present work is concerned with the time-domain response for the characterization of sound-absorbing materials. A lossy component with variable propagation velocity is introduced for sound-absorbing materials to model the energy dissipation. The frequency characteristics of the absorption coefficient are also considered for normal, oblique and random incidence. Numerical demonstrations show that the present approach provides a reasonable time-domain model of homogeneous sound-absorbing materials.
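The transmission-and-scattering bookkeeping is easy to show in miniature. Below is a bare-bones 2-D TLM (discrete Huygens) update on a square mesh; boundaries and the paper's lossy absorbing component are omitted, and periodic wrap-around is used purely for brevity.

```python
# Minimal 2-D TLM / discrete Huygens update: scatter at each node, then
# transfer pulses to neighbours. Mesh size and step count are arbitrary.
import numpy as np

N = 40
inc = np.zeros((4, N, N))     # incident pulses; branches 0=N, 1=E, 2=S, 3=W
inc[:, N // 2, N // 2] = 1.0  # unit impulse injected at the centre node

for _ in range(30):
    # Shunt-node scattering: reflected_i = (sum of incident)/2 - incident_i
    ref = 0.5 * inc.sum(axis=0) - inc
    # Transfer: each reflected pulse becomes a neighbour's incident pulse
    # (np.roll wraps around, i.e. periodic boundaries - fine for a sketch)
    inc = np.stack([
        np.roll(ref[2], -1, axis=1),  # neighbour above sends its S pulse down
        np.roll(ref[3], -1, axis=0),  # neighbour right sends its W pulse left
        np.roll(ref[0], 1, axis=1),   # neighbour below sends its N pulse up
        np.roll(ref[1], 1, axis=0),   # neighbour left sends its E pulse right
    ])

pressure = 0.5 * inc.sum(axis=0)  # nodal sound pressure
print("peak |p| after 30 steps:", float(np.abs(pressure).max()))
```

An absorbing layer of the kind studied here would replace the lossless scattering at selected nodes with one that deliberately dissipates part of each impulse.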
Novel and canine genotypes of Giardia duodenalis in harbor seals ( Phoca vitulina richardsi).
Gaydos, J K; Miller, W A; Johnson, C; Zornetzer, H; Melli, A; Packham, A; Jeffries, S J; Lance, M M; Conrad, P A
2008-12-01
Feces of harbor seals (Phoca vitulina richardsi) and hybrid glaucous-winged/western gulls (Larus glaucescens / occidentalis) from Washington State's inland marine waters were examined for Giardia and Cryptosporidium spp. to determine if genotypes carried by these wildlife species were the same genotypes that commonly infect humans and domestic animals. Using immunomagnetic separation followed by direct fluorescent antibody detection, Giardia spp. cysts were detected in 42% of seal fecal samples (41/97). Giardia-positive samples came from 90% of the sites (9/10) and the prevalence of positive seal fecal samples differed significantly among study sites. Fecal samples collected from seal haulout sites with over 400 animals were 4.7 times more likely to have Giardia spp. cysts than samples collected at smaller haulout sites. In gulls, a single Giardia sp. cyst was detected in 4% of fecal samples (3/78). Cryptosporidium spp. oocysts were not detected in any of the seals or gulls tested. Sequence analysis of a 398 bp segment of G. duodenalis DNA at the glutamate dehydrogenase locus suggested that 11 isolates originating from seals throughout the region were a novel genotype and 3 isolates obtained from a single site in south Puget Sound were the G. duodenalis canine genotype D. Real-time TaqMan PCR amplification and subsequent sequencing of a 52 bp small subunit ribosomal DNA region from novel harbor seal genotype isolates showed sequence homology to canine genotypes C and D. Sequence analysis of the 52 bp small subunit ribosomal DNA products from the 3 canine genotype isolates from seals produced mixed sequences that could not be evaluated.
Sediment Acoustics: Wideband Model, Reflection Loss and Ambient Noise Inversion
2009-09-30
between 1 and 10 kHz. The model is also capable of explaining the apparent discrepancy between the data and the Kramers-Kronig relationship (K-K) ... of in-situ measurements of sediment sound speed and attenuation from SAX99, SAX04 and SW06 with the commonly used Kramers-Kronig equation ... inverse quality factor. The data is overlaid by the Kramers-Kronig estimate of sound speed from measured attenuation, by both the commonly used equation ...
Disruption of Boundary Encoding During Sensorimotor Sequence Learning: An MEG Study.
Michail, Georgios; Nikulin, Vadim V; Curio, Gabriel; Maess, Burkhard; Herrojo Ruiz, María
2018-01-01
Music performance relies on the ability to learn and execute actions and their associated sounds. The process of learning these auditory-motor contingencies depends on the proper encoding of the serial order of the actions and sounds. Among the different serial positions of a behavioral sequence, the first and last (boundary) elements are particularly relevant. Animal and patient studies have demonstrated a specific neural representation for boundary elements in prefrontal cortical regions and in the basal ganglia, highlighting the relevance of their proper encoding. The neural mechanisms underlying the encoding of sequence boundaries in the general human population remain, however, largely unknown. In this study, we examined how alterations of auditory feedback, introduced at different ordinal positions (boundary or within-sequence element), affect the neural and behavioral responses during sensorimotor sequence learning. Analysing the neuromagnetic signals from 20 participants while they performed short piano sequences under the occasional effect of altered feedback (AF), we found that at around 150-200 ms post-keystroke, the neural activities in the dorsolateral prefrontal cortex (DLPFC) and supplementary motor area (SMA) were dissociated for boundary and within-sequence elements. Furthermore, the behavioral data demonstrated that feedback alterations on boundaries led to greater performance costs, such as more errors in the subsequent keystrokes. These findings jointly support the idea that the proper encoding of boundaries is critical in acquiring sensorimotor sequences. They also provide evidence for the involvement of a distinct neural circuitry in humans including prefrontal and higher-order motor areas during the encoding of the different classes of serial order.
Distinct Element Method modelling of fold-related fractures in a multilayer sequence
NASA Astrophysics Data System (ADS)
Kaserer, Klemens; Schöpfer, Martin P. J.; Grasemann, Bernhard
2017-04-01
Natural fractures have a significant impact on the performance of hydrocarbon systems/reservoirs. In a multilayer sequence, both the fracture density within the individual layers and the type of fracture intersection with bedding contacts are key parameters controlling fluid pathways. In the present study, the influence of layer stacking and interlayer friction on fracture density and connectivity within a folded sequence is systematically investigated using 2D Distinct Element Method modelling. Our numerical approach permits forward modelling of both fracture nucleation/propagation/arrest and (contemporaneous) frictional slip along bedding planes in a robust and mechanically sound manner. Folding of the multilayer sequence is achieved by enforcing constant curvature folding by means of a velocity boundary condition at the model base, while a constant overburden pressure is maintained at the model top. The modelling reveals that with high bedding plane friction the multilayer stack behaves mechanically as a single layer, so that the neutral surface develops in the centre of the sequence and fracture spacing is controlled by the total thickness of the folded sequence. In contrast, low bedding plane friction leads to decoupling of the individual layers (flexural slip folding), so that a neutral surface develops in the centre of each layer and fracture spacing is controlled by the thickness of the individual layers. The low interfacial friction models illustrate that stepping of fractures across bedding planes is a common process, which can, however, have two contrasting origins: (1) the mechanical properties of the interface cause fracture stepping during fracture propagation; (2) originally through-going fractures are later offset by interfacial slip during folding. A combination of these two different origins may lead to (apparently) inconsistent fracture offsets across bedding planes within a flexural slip fold.
NASA Astrophysics Data System (ADS)
Kochiyama, Jiro; Kinai, Shigeki; Morita, Shinya
The TR-IA microgravity-experimentation sounding rocket baseline configuration and recovery system are presented. Aerodynamic braking is incorporated through the requisite positioning of the reentry-body center of gravity. The recovery sequence is initiated by baroswitches, which eject the pilot chute. Even in the event of flotation bag malfunction, the structure containing the experiment is watertight. An account is given of the nature and the results of the performance tests conducted to establish the soundness of various materials and components.
Teaching Formulaic Sequences in the Classroom: Effects on Spoken Fluency
ERIC Educational Resources Information Center
McGuire, Michael; Larson-Hall, Jenifer
2017-01-01
Formulaic sequences (FS) are frequently used by native speakers and have been found to help non-native speakers sound more fluent as well. We hypothesized that explicitly teaching FS to classroom ESL learners would increase the use of such language, which could further result in increased second language (L2) fluency. We report on a 5-week study…
A novel method for detecting airway narrowing using breath sound spectrum analysis in children.
Tabata, Hideyuki; Hirayama, Mariko; Enseki, Mayumi; Nukaga, Mariko; Hirai, Kota; Furuya, Hiroyuki; Mochizuki, Hiroyuki
2016-01-01
Using a breath sound analyzer, we investigated new clinical parameters that are rarely affected by airflow in young children. A total of 65 children with asthma participated in this study (mean age 9.6 years). In Study 1, the intra- and inter-observer variability was measured. Common breath sound parameters, frequency at 99%, 75%, and 50% of the maximum frequency (F99, F75, and F50) and the highest frequency of inspiratory breath sounds were calculated. In addition, new parameters obtained using the ratio of sound spectra parameters, i.e., the spectrum curve indexes including the ratio of the third and fourth area to the total area and the ratio of power and frequency at F75 and F50, were calculated. In Study 2, 51 children underwent breath sound analyses. In Study 3, breath sounds were studied before and after methacholine inhalation. In Study 1, the data showed good inter- and intra-observer reliability. In Study 2, there were significant relationships between the airflow rate, age, height, and spirometric and common breath sound parameters. However, there were no significant relationships between the airflow rate and the spectrum curve indexes. Moreover, the spectrum curve indexes showed no relationships with age, height, or spirometric parameters. In Study 3, all parameters significantly changed after methacholine inhalation. Some spectrum curve indexes are not significantly affected by the airflow rate at the mouth, although they successfully indicate airway narrowing. These parameters may play a role in the assessment of bronchoconstriction in children. Copyright © 2015 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
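Reading F50/F75/F99 as spectral-power percentile frequencies, one common convention for such breath-sound parameters (the abstract's own wording may differ in detail), a minimal computation looks like the sketch below; the signal is a random stand-in, not recorded breath sound.

```python
# Percentile frequencies of a power spectrum (one common F50/F75/F99
# convention); the input is synthetic noise, not a breath-sound recording.
import numpy as np
from scipy.signal import welch

fs = 8000
x = np.random.default_rng(0).normal(size=4 * fs)  # stand-in breath sound
f, pxx = welch(x, fs=fs, nperseg=1024)

cum = np.cumsum(pxx) / np.sum(pxx)

def spectral_percentile(q):
    """Frequency below which a fraction q of the spectral power lies."""
    return f[np.searchsorted(cum, q)]

for q in (0.50, 0.75, 0.99):
    print(f"F{int(q * 100)} ≈ {spectral_percentile(q):.0f} Hz")
```

Ratios of such parameters, like the spectrum curve indexes in the abstract, have the appeal of cancelling overall level and thus much of the airflow dependence.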
Continuous robust sound event classification using time-frequency features and deep learning.
McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy
2017-01-01
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
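The energy-based event-detection front end can be illustrated in a few lines; the frame length and threshold rule below are assumptions, not the paper's settings.

```python
# Toy energy-based event detector: frame the audio, compare per-frame energy
# against an adaptive threshold. Frame size and threshold are assumptions.
import numpy as np

fs = 16000
rng = np.random.default_rng(0)
audio = rng.normal(scale=0.01, size=5 * fs)
audio[fs:2 * fs] += rng.normal(scale=0.3, size=fs)  # a louder "event"

frame = 512
n_frames = len(audio) // frame
energy = (audio[: n_frames * frame].reshape(n_frames, frame) ** 2).mean(axis=1)

thresh = 4 * np.median(energy)  # simple adaptive threshold
events = energy > thresh
print(f"{events.sum()} of {n_frames} frames flagged for the classifier")
```

Only the flagged frames are handed to the downstream classifier, which is what lets isolated-sound classifiers operate on continuous recordings.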
Gaydos, J K; Miller, W A; Gilardi, K V K; Melli, A; Schwantje, H; Engelstoft, C; Fritz, H; Conrad, P A
2007-02-01
Species of Cryptosporidium and Giardia can infect humans and wildlife and have the potential to be transmitted between these 2 groups; yet, very little is known about these protozoans in marine wildlife. Feces of river otters (Lontra canadensis), a common marine wildlife species in the Puget Sound Georgia Basin, were examined for species of Cryptosporidium and Giardia to determine their role in the epidemiology of these pathogens. Using ZnSO4 flotation and immunomagnetic separation, followed by direct immunofluorescent antibody detection (IMS/DFA), we identified Cryptosporidium sp. oocysts in 9 fecal samples from 6 locations and Giardia sp. cysts in 11 fecal samples from 7 locations. The putative risk factors of proximate human population and degree of anthropogenic shoreline modification were not associated with the detection of Cryptosporidium or Giardia spp. in river otter feces. Amplification of DNA from the IMS/DFA slide scrapings was successful for 1 sample containing > 500 Cryptosporidium sp. oocysts. Sequences from the Cryptosporidium 18S rRNA and the COWP loci were most similar to the ferret Cryptosporidium sp. genotype. River otters could serve as reservoirs for Cryptosporidium and Giardia species in marine ecosystems. More work is needed to better understand the zoonotic potential of the genotypes they carry as well as their implications for river otter health.
Rare variants in axonogenesis genes connect three families with sound-color synesthesia.
Tilot, Amanda K; Kucera, Katerina S; Vino, Arianna; Asher, Julian E; Baron-Cohen, Simon; Fisher, Simon E
2018-03-20
Synesthesia is a rare nonpathological phenomenon where stimulation of one sense automatically provokes a secondary perception in another. Hypothesized to result from differences in cortical wiring during development, synesthetes show atypical structural and functional neural connectivity, but the underlying molecular mechanisms are unknown. The trait also appears to be more common among people with autism spectrum disorder and savant abilities. Previous linkage studies searching for shared loci of large effect size across multiple families have had limited success. To address the critical lack of candidate genes, we applied whole-exome sequencing to three families with sound-color (auditory-visual) synesthesia affecting multiple relatives across three or more generations. We identified rare genetic variants that fully cosegregate with synesthesia in each family, uncovering 37 genes of interest. Consistent with reports indicating genetic heterogeneity, no variants were shared across families. Gene ontology analyses highlighted six genes (COL4A1, ITGA2, MYO10, ROBO3, SLC9A6, and SLIT2) associated with axonogenesis and expressed during early childhood, when synesthetic associations are formed. These results are consistent with neuroimaging-based hypotheses about the role of hyperconnectivity in the etiology of synesthesia and offer a potential entry point into the neurobiology that organizes our sensory experiences. Copyright © 2018 the Author(s). Published by PNAS.
Impaired Statistical Learning in Developmental Dyslexia
Thiessen, Erik D.; Holt, Lori L.
2015-01-01
Purpose: Developmental dyslexia (DD) is commonly thought to arise from phonological impairments. However, an emerging perspective is that a more general procedural learning deficit, not specific to phonological processing, may underlie DD. The current study examined if individuals with DD are capable of extracting statistical regularities across sequences of passively experienced speech and nonspeech sounds. Such statistical learning is believed to be domain-general, to draw upon procedural learning systems, and to relate to language outcomes. Method: DD and control groups were familiarized with a continuous stream of syllables or sine-wave tones, the ordering of which was defined by high or low transitional probabilities across adjacent stimulus pairs. Participants subsequently judged two 3-stimulus test items with either high or low statistical coherence as being the most similar to the sounds heard during familiarization. Results: As with control participants, the DD group was sensitive to the transitional probability structure of the familiarization materials as evidenced by above-chance performance. However, the performance of participants with DD was significantly poorer than controls across linguistic and nonlinguistic stimuli. In addition, reading-related measures were significantly correlated with statistical learning performance for both speech and nonspeech material. Conclusion: Results are discussed in light of procedural learning impairments among participants with DD. PMID:25860795
Sanchez, Tanit Ganz; Silva, Fúlvia Eduarda da
2017-07-29
Misophonia is a recently described, poorly understood and neglected condition. It is characterized by strong negative reactions of hatred, anger or fear when subjects have to face certain selective and low-level repetitive sounds. The most common triggers of such aversive reactions are sounds elicited by the mouth (chewing gum or food, popping lips), the nose (breathing, sniffing, and blowing) or the fingers (typing, kneading paper, clicking pen, drumming on the table). Previous articles have cited that such individuals usually know at least one close relative with similar symptoms, suggesting a possible hereditary component. We found and described a family with 15 members having misophonia, detailing their common characteristics and the pattern of sounds that trigger such strong discomfort. All 15 members agreed to give us their epidemiological data, and 12 agreed to answer a specific questionnaire which investigated the symptoms, specific trigger sounds, main feelings evoked and attitudes adopted by each participant. The 15 members belong to three generations of the family. Their age ranged from 9 to 73 years (mean 38.3 years; median 41 years) and 10 were females. Analysis of the 12 questionnaires showed that 10 subjects (83.3%) developed the first symptoms during childhood or adolescence. The mean annoyance score on the Visual Analog Scale from 0 to 10 was 7.3 (median 7.5). Individuals reported hatred/anger, irritability and anxiety in response to sounds, and faced the situation by asking for the sound to stop, leaving/avoiding the place and even fighting. The self-reported associated symptoms were anxiety (91.3%), tinnitus (50%), obsessive-compulsive disorder (41.6%), depression (33.3%), and hypersensitivity to sounds (25%). The high incidence of misophonia in this particular familial distribution suggests that it might be more common than expected and raises the possibility of a hereditary etiology. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Situational Lightning Climatologies for Central Florida: Phase III
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III
2008-01-01
This report describes work done by the Applied Meteorology Unit (AMU) to add composite soundings to the Advanced Weather Interactive Processing System (AWIPS). This allows National Weather Service (NWS) forecasters to compare the current atmospheric state with climatology. In a previous phase, the AMU created composite soundings for four rawinsonde observation stations in Florida, for each of eight flow regimes. The composite soundings were delivered to the NWS Melbourne (MLB) office for display using the NSHARP software program. NWS MLB requested that the AMU make the composite soundings available for display in AWIPS. The AMU first created a procedure to customize AWIPS so composite soundings could be displayed. A unique four-character identifier was created for each of the 32 composite soundings. The AMU wrote a Tool Command Language/Tool Kit (Tcl/Tk) software program to convert the composite soundings from NSHARP to Network Common Data Form (NetCDF) format. The NetCDF files were then displayable by AWIPS.
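The report itself contains no code, but the conversion step it describes, sounding data rewritten as NetCDF so that AWIPS can display it, can be sketched with the standard netCDF4 Python library. Everything below (variable names, units, levels, the station identifier) is an illustrative assumption, not the AMU's actual file layout:

    import numpy as np
    from netCDF4 import Dataset

    # Illustrative composite sounding: a few pressure levels with temperature/dewpoint.
    pres = np.array([1000.0, 925.0, 850.0, 700.0, 500.0])  # hPa
    temp = np.array([25.0, 21.5, 17.0, 8.5, -7.0])         # deg C
    dewp = np.array([22.0, 19.0, 14.0, 2.0, -20.0])        # deg C

    with Dataset("composite_sounding.nc", "w") as nc:
        nc.createDimension("level", len(pres))
        for name, data, units in [("pressure", pres, "hPa"),
                                  ("temperature", temp, "degC"),
                                  ("dewpoint", dewp, "degC")]:
            var = nc.createVariable(name, "f4", ("level",))
            var.units = units
            var[:] = data
        nc.station_id = "XMLB"  # hypothetical four-character identifier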
Categorization of common sounds by cochlear implanted and normal hearing adults.
Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P
2016-05-01
Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high-level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: (i) to compare the categorization strategies of CI users and normal-hearing listeners (NHL); (ii) to investigate whether any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) showed that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However, results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however, the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.
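MCA and HCPC are typically run in R's FactoMineR package, but the underlying idea, turning free-sorting partitions into a dissimilarity structure and clustering it, can be sketched more simply. In the toy example below, each participant's sort maps a sound to a group label, and sounds grouped together less often become more dissimilar; the sounds and sorts are hypothetical:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    sounds = ["rain", "violin", "voice", "traffic"]
    sorts = [{"rain": 0, "violin": 1, "voice": 1, "traffic": 0},   # participant 1
             {"rain": 0, "violin": 1, "voice": 2, "traffic": 0},   # participant 2
             {"rain": 0, "violin": 1, "voice": 1, "traffic": 2}]   # participant 3

    n = len(sounds)
    co = np.zeros((n, n))
    for s in sorts:  # co-occurrence: how often two sounds share a group
        for i in range(n):
            for j in range(n):
                co[i, j] += s[sounds[i]] == s[sounds[j]]
    dissim = 1.0 - co / len(sorts)

    condensed = dissim[np.triu_indices(n, k=1)]       # SciPy's condensed form
    tree = linkage(condensed, method="average")
    print(fcluster(tree, t=2, criterion="maxclust"))  # two-cluster solution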
Evaluating the performance of active noise control systems in commercial and industrial applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Depies, C.; Deneen, S.; Lowe, M.
1995-06-01
Active sound cancellation technology is increasingly being used to quiet commercial and industrial air-moving devices. Engineers and designers are implementing active or combination active/passive technology to control sound quality in the workplace and the acoustical environment in residential areas near industrial facilities. Sound level measurements made before and after the installation of active systems have proved that significant improvements in sound quality can be obtained even if there is little or no change in the NC/RC or dBA numbers. Noise produced by centrifugal and vane-axial fans, pumps, and blowers, commonly used for ventilation and material movement in industry, is frequently dominated by high-amplitude, tonal noise at low frequencies. In contrast, the low-frequency noise produced by commercial air handlers often has less tonal and more broadband character, resulting in audible duct rumble noise and objectionable room spectra. Because the A-weighting network, which is commonly used for industrial noise measurements, de-emphasizes low frequencies, its single-number rating can be misleading in terms of judging the overall subjective sound quality in impacted areas and assessing the effectiveness of noise control measures. Similarly, NC values, traditionally used for commercial HVAC acoustical design criteria, can be governed by noise at any frequency and cannot accurately depict human judgment of the aural comfort level. Analyses of frequency spectrum characteristics provide the most effective means of assessing sound quality and determining mitigative measures for achieving suitable background sound levels.
Surface Warfare: A Total Force. Volume 19. Number 4, July/August 1994
1994-08-01
USS Puget Sound (AD 38); USS Grasp (ARS 51); USS Mauna Kea (AE 22); USS Acadia (AD 42); combat logistics (med/small); COMINEWARCOM repair ... The force structure is being shaped to expand the Operational Reserve Carrier (ORC) role when USS John F. Kennedy (CV 67) joins the NRF ... A typical Aegis engagement sequence begins as ... engineer and test the Aegis Combat System ... fought in the waters of Port Royal Sound and on the adjacent ...
NASA Technical Reports Server (NTRS)
Platt, R.
1999-01-01
This is the Performance Verification Report, Final Comprehensive Performance Test (CPT) Report, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). This specification establishes the requirements for the CPT and Limited Performance Test (LPT) of the AMSU-1A, referred to herein as the unit. The sequence in which the several phases of this test procedure shall take place is shown.
NASA Astrophysics Data System (ADS)
Strinna, Elisa; Ferrari, Graziano
2015-04-01
The project started in 2008 as a sound installation, a collaboration between an artist, a barrel organ builder and a seismologist. The work differs from other attempts at sound transposition of seismic records: seismic frequencies are not converted automatically into the "sound of the earthquake." Instead, a musical translation system was devised that, based on the organ's tonal scale, generates a totally unexpected sequence of sounds intended to evoke the emotions aroused by the earthquake. The symphonies proposed in the project have somewhat peculiar origins: they come to life from the translation of graphic tracks into a sound track. The graphic tracks in question are copies of seismograms recorded during earthquakes that have taken place around the world. Seismograms are translated into music by a sculpture-instrument, half seismograph and half barrel organ. The organ plays through holes punched in paper; to adapt the documents to the instrument's score, the holes are drilled at the peaks of the waves. The organ covers about three tonal scales, reaching from heavy, deep sounds up to high, jarring notes. The translation of the seismic records is based on a criterion that matches the highest sounds to the larger amplitudes and the lower sounds to the smaller ones. In translating the seismogram into the organ score, the larger the amplitude of the recorded waves, the more the seismogram covers the full tonal scale played by the barrel organ, and the notes arouse an intense emotional response in the listener. Elisa Strinna's Seismic Symphonies installation thus becomes an unprecedented tool for emotional involvement, through which the memory of the greatest disasters of over a century of the Earth's seismic history can be revived: a bridge between art and science. Seismic Symphonies is also a symbolic inversion: the organ is most commonly used in churches, where its sounds evoke the heavens and symbolize cosmic harmony. But here it is the earth, "nature", the ground beneath our feet that is moving; it speaks to us not of harmony, but of our fragility. For the oldest earthquakes considered, Seismic Symphonies drew on the SISMOS archives, the INGV project for the recovery, high-resolution digital reproduction and distribution of seismograms of earthquakes in the Euro-Mediterranean area from 1895 to 1984. After a first exhibition at the Fondazione Bevilacqua La Masa in Venice, the organ was later shown at the Taipei Biennial in Taiwan, with seismograms provided by the Taiwanese Central Weather Bureau, and at the EACC in Castelló, Spain, with seismograms of Spanish earthquakes provided by the Instituto Geográfico Nacional.
Directional Hearing and Sound Source Localization in Fishes.
Sisneros, Joseph A; Rogers, Peter H
2016-01-01
Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization that has wide applicability with regard to source type, acoustic environment, and time waveform.
Johnson, Nicholas S.; Higgs, Dennis; Binder, Thomas R.; Marsden, J. Ellen; Buchinger, Tyler John; Brege, Linnea; Bruning, Tyler; Farha, Steve A.; Krueger, Charles C.
2018-01-01
Two sounds associated with spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain were characterized by comparing sound recordings to behavioral data collected using acoustic telemetry and video. These sounds were named growls and snaps, and were heard on lake trout spawning reefs, but not on a non-spawning reef, and were more common at night than during the day. Growls also occurred more often during the spawning period than the pre-spawning period, while the trend for snaps was reversed. In a laboratory flume, sounds occurred when male lake trout were displaying spawning behaviors; growls when males were quivering and parallel swimming, and snaps when males moved their jaw. Combining our results with the observation of possible sound production by spawning splake (Salvelinus fontinalis × Salvelinus namaycush hybrid) provides rare evidence for spawning-related sound production by a salmonid, or indeed by any fish in the superorder Protacanthopterygii. Further characterization of these sounds could be useful for lake trout assessment, restoration, and control.
Pitch features of environmental sounds
NASA Astrophysics Data System (ADS)
Yang, Ming; Kang, Jian
2016-07-01
A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement beyond the conventional acoustic parameters. This paper explores the applicability to environmental sounds of the pitch features, and their algorithms, that are often used in music analysis. Starting from existing pitch algorithms that simulate the perception of the auditory system, and from simplified algorithms used in practical music and speech applications, the applicable algorithms were determined for common types of sound in everyday soundscapes. Based on a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, the different pitch characteristics of various environmental sounds are shown. Among the four sound categories considered, i.e. water, wind, birdsongs, and urban sounds, water and wind sounds generally have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
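The abstract does not define its pitch measures, but a common autocorrelation-based reading, pitch value from the lag of the highest autocorrelation peak and pitch strength from the normalized height of that peak, can be sketched as follows; the frame length and search range are illustrative assumptions:

    import numpy as np

    def pitch_features(frame, sr, fmin=50.0, fmax=2000.0):
        """Pitch value (Hz) and pitch strength for one signal frame."""
        frame = frame - frame.mean()
        acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        acf /= acf[0]                     # lag 0 normalized to 1
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + np.argmax(acf[lo:hi])
        return sr / lag, acf[lag]         # value, strength in [0, 1]

    sr = 16000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 440 * t)       # strongly pitched, like birdsong
    noise = np.random.randn(sr)              # weakly pitched, like wind
    print(pitch_features(tone[:1024], sr))   # pitch near 440 Hz, strength near 1
    print(pitch_features(noise[:1024], sr))  # low pitch strength

A percentage-of-audible-pitches measure then follows by thresholding the strength (say, above 0.3, an arbitrary cutoff) across successive frames.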
Annoyance resulting from intrusion of aircraft sounds upon various activities
NASA Technical Reports Server (NTRS)
Gunn, W. J.; Shepherd, W. T.; Fletcher, J. L.
1975-01-01
An experiment was conducted in which subjects were engaged in TV viewing, telephone listening, or reverie (no activity) for a 1/2-hour session. During the session, they were exposed to a series of recorded aircraft sounds at the rate of one flight every 2 minutes. Within each session, four levels of flyover noise, separated by dB increments, were presented several times in a Latin Square balanced sequence. The peak level of the noisiest flyover in any session was fixed at 95, 90, 85, 75, or 70 dBA. At the end of the test session, subjects recorded their responses to the aircraft sounds, using a bipolar scale which covered the range from very pleasant to extremely annoying. Responses to aircraft noises were found to be significantly affected by the particular activity in which the subjects were engaged. Not all subjects found the aircraft sounds to be annoying.
Peter, Beate; Matsushita, Mark; Raskind, Wendy H.
2012-01-01
Objectives The purpose of this pilot study was to investigate a measure of motor sequencing deficit as a potential endophenotype of speech sound disorder (SSD) in a multigenerational family with evidence of familial SSD. Methods In a multigenerational family with evidence of a familial motor-based SSD, affectation status and a measure of motor sequencing during oral motor testing were obtained. To further investigate the role of motor sequencing as an endophenotype for genetic studies, parametric and nonparametric linkage analyses were conducted using a genome-wide panel of 404 microsatellites. Results In seven of the ten family members with available data, SSD affectation status and motor sequencing status coincided. Linkage analysis revealed four regions of interest, 6p21, 7q32, 7q36, and 8q24, primarily identified with the measure of motor sequencing ability. The 6p21 region overlaps with a locus implicated in rapid alternating naming in a recent genome-wide dyslexia linkage study. The 7q32 locus contains a locus implicated in dyslexia. The 7q36 locus borders on a gene known to affect component traits of language impairment. Conclusions Results are consistent with a motor-based endophenotype of SSD that would be informative for genetic studies. The linkage results in this first genome-wide study in a multigenerational family with SSD warrant follow-up in additional families and with fine mapping or next-generation approaches to gene identification. PMID:22517379
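For readers unfamiliar with linkage statistics, the LOD score at a marker compares the likelihood of the family data at a candidate recombination fraction theta against free recombination (theta = 0.5). Real analyses, like this study's, compute pedigree likelihoods with dedicated software; the phase-known toy below, with made-up recombinant counts, only illustrates the arithmetic:

    import numpy as np

    def lod(recombinants, nonrecombinants, theta):
        """Phase-known two-point LOD score at recombination fraction theta."""
        loglik = (recombinants * np.log10(theta)
                  + nonrecombinants * np.log10(1.0 - theta))
        return loglik - (recombinants + nonrecombinants) * np.log10(0.5)

    thetas = np.arange(0.01, 0.50, 0.01)
    scores = lod(1, 9, thetas)                      # hypothetical informative meioses
    print(thetas[np.argmax(scores)], scores.max())  # theta-hat and peak LOD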
Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming
Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.
2013-01-01
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming. PMID:23516340
NASA Astrophysics Data System (ADS)
Hall, T.; Wilson, T. J.; Henrys, S.; Speece, M. A.
2016-12-01
The interplay of tectonics and climate is recorded in the sedimentary strata within Victoria Land Basin, McMurdo Sound, Antarctica. Patterns of Cenozoic sedimentation are documented from interpretation of seismic reflection profiles calibrated by drillhole data in McMurdo Sound, and these patterns provide enhanced constraints on the evolution of the coupled Transantarctic Mountains-West Antarctic Rift System and on ice sheet advance/retreat through multiple climate cycles. The research focuses on shifts from warm-based to cold-based ice sheets through the variable climate and ice sheet conditions that characterized the early to middle Miocene. The study seeks to test the view that cold-based ice sheets in arid polar deserts minimally erode the landscape by calculating sediment volumes for critical climatic intervals. Revised seismic mapping through McMurdo Sound has been completed, utilizing the seismic stratigraphic framework first established by Fielding et al. (2006) and new reflectors marking unconformities identified from the AND-2A core (Levy et al., 2016). Reflector age constraints are derived by tying surfaces to the Cape Roberts Project, CIROS-1, and AND-2A drillholes. Seismic facies coupled with AND-2A core provenance information provides insight into depositional mechanisms and ice sheet behavior. Seismic facies transitions occur across the major unconformity surfaces in the AND-2A core. Sediment volume calculations for subareas within McMurdo Sound where reflectors are most continuous indicate substantial decreases in preserved sediment volume between the Oligocene and Early Miocene sequences, and between the early and mid-Miocene sequences. Sediment volumes, used in combination with an ice sheet model in a backstacking procedure, provide constraints on landscape modification and further understanding of how landscapes erode under warm- and cold-based ice sheet regimes.
Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm
Höhne, Johannes; Tangermann, Michael
2014-01-01
Realizing the decoding of brain signals into control commands, brain-computer interfaces (BCI) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERP) of the electroencephalogram, auditory BCI systems are challenged with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978
Similarities between the irrelevant sound effect and the suffix effect.
Hanley, J Richard; Bourgaize, Jake
2018-03-29
Although articulatory suppression abolishes the effect of irrelevant sound (ISE) on serial recall when sequences are presented visually, the effect persists with auditory presentation of list items. Two experiments were designed to test the claim that, when articulation is suppressed, the effect of irrelevant sound on the retention of auditory lists resembles a suffix effect. A suffix is a spoken word that immediately follows the final item in a list. Even though participants are told to ignore it, the suffix impairs serial recall of auditory lists. In Experiment 1, the irrelevant sound consisted of instrumental music. The music generated a significant ISE that was abolished by articulatory suppression. It therefore appears that, when articulation is suppressed, irrelevant sound must contain speech for it to have any effect on recall. This is consistent with what is known about the suffix effect. In Experiment 2, the effect of irrelevant sound under articulatory suppression was greater when the irrelevant sound was spoken by the same voice that presented the list items. This outcome is again consistent with the known characteristics of the suffix effect. It therefore appears that, when rehearsal is suppressed, irrelevant sound disrupts the acoustic-perceptual encoding of auditorily presented list items. There is no evidence that the persistence of the ISE under suppression is a result of interference to the representation of list items in a postcategorical phonological store.
Neuro-cognitive aspects of "OM" sound/syllable perception: A functional neuroimaging study.
Kumar, Uttam; Guleria, Anupam; Khetrapal, Chunni Lal
2015-01-01
The sound "OM" is believed to bring mental peace and calm. The cortical activation associated with listening to sound "OM" in contrast to similar non-meaningful sound (TOM) and listening to a meaningful Hindi word (AAM) has been investigated using functional magnetic resonance imaging (MRI). The behaviour interleaved gradient technique was employed in order to avoid interference of scanner noise. The results reveal that listening to "OM" sound in contrast to the meaningful Hindi word condition activates areas of bilateral cerebellum, left middle frontal gyrus (dorsolateral middle frontal/BA 9), right precuneus (BA 5) and right supramarginal gyrus (SMG). Listening to "OM" sound in contrast to "non-meaningful" sound condition leads to cortical activation in bilateral middle frontal (BA9), right middle temporal (BA37), right angular gyrus (BA 40), right SMG and right superior middle frontal gyrus (BA 8). The conjunction analysis reveals that the common neural regions activated in listening to "OM" sound during both conditions are middle frontal (left dorsolateral middle frontal cortex) and right SMG. The results correspond to the fact that listening to "OM" sound recruits neural systems implicated in emotional empathy.
An Exploration of Rhythmic Grouping of Speech Sequences by French- and German-Learning Infants
Abboub, Nawal; Boll-Avetisyan, Natalie; Bhatara, Anjali; Höhle, Barbara; Nazzi, Thierry
2016-01-01
Rhythm in music and speech can be characterized by a constellation of several acoustic cues. Individually, these cues have different effects on rhythmic perception: sequences of sounds alternating in duration are perceived as short-long pairs (weak-strong/iambic pattern), whereas sequences of sounds alternating in intensity or pitch are perceived as loud-soft, or high-low pairs (strong-weak/trochaic pattern). This perceptual bias, called the Iambic-Trochaic Law (ITL), has been claimed to be a universal property of the auditory system applying in both the music and the language domains. Recent studies have shown that language experience can modulate the effects of the ITL on rhythmic perception of both speech and non-speech sequences in adults, and of non-speech sequences in 7.5-month-old infants. The goal of the present study was to explore whether language experience also modulates infants’ grouping of speech. To do so, we presented sequences of syllables to monolingual French- and German-learning 7.5-month-olds. Using the Headturn Preference Procedure (HPP), we examined whether they were able to perceive a rhythmic structure in sequences of syllables that alternated in duration, pitch, or intensity. Our findings show that both French- and German-learning infants perceived a rhythmic structure when it was cued by duration or pitch but not intensity. Our findings also show differences in how these infants use duration and pitch cues to group syllable sequences, suggesting that pitch cues were the easier ones to use. Moreover, performance did not differ across languages, failing to reveal early language effects on rhythmic perception. These results contribute to our understanding of the origin of rhythmic perception and perceptual mechanisms shared across music and speech, which may bootstrap language acquisition. PMID:27378887
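The three cue manipulations are simple to reproduce in synthesis. Below is a small numpy sketch of tone sequences alternating in exactly one dimension, of the kind used to probe the ITL; all frequencies, durations, and amplitudes are illustrative, not the study's stimulus values:

    import numpy as np

    sr = 44100

    def tone(freq, dur, amp):
        t = np.arange(int(sr * dur)) / sr
        return amp * np.sin(2 * np.pi * freq * t)

    def sequence(pairs, gap=0.05):
        silence = np.zeros(int(sr * gap))
        return np.concatenate([np.concatenate([tone(*p), silence]) for p in pairs])

    # (freq, dur, amp), alternating in one dimension per sequence:
    dur_seq = sequence([(440, 0.15, 0.5), (440, 0.30, 0.5)] * 8)  # short-long: iambic bias
    int_seq = sequence([(440, 0.20, 0.8), (440, 0.20, 0.4)] * 8)  # loud-soft: trochaic bias
    pit_seq = sequence([(550, 0.20, 0.5), (440, 0.20, 0.5)] * 8)  # high-low: trochaic bias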
Neighing, barking, and drumming horses-object related sounds help and hinder picture naming.
Mädebach, Andreas; Wöhner, Stefan; Kieseler, Marie-Luise; Jescheniak, Jörg D
2017-09-01
The study presented here investigated how environmental sounds influence picture naming. In a series of four experiments participants named pictures (e.g., the picture of a horse) while hearing task-irrelevant sounds (e.g., neighing, barking, or drumming). Experiments 1 and 2 established two findings, facilitation from congruent sounds (e.g., picture: horse, sound: neighing) and interference from semantically related sounds (e.g., sound: barking), both relative to unrelated sounds (e.g., sound: drumming). Experiment 3 replicated the effects in a situation in which participants were not familiarized with the sounds prior to the experiment. Experiment 4 replicated the congruency facilitation effect, but showed that semantic interference was not obtained with distractor sounds which were not associated with target pictures (i.e., were not part of the response set). The general pattern of facilitation from congruent sound distractors and interference from semantically related sound distractors resembles the pattern commonly observed with distractor words. This parallelism suggests that the underlying processes are not specific to either distractor words or distractor sounds but instead reflect general aspects of semantic-lexical selection in language production. The results indicate that language production theories need to include a competitive selection mechanism at either the lexical processing stage, or the prelexical processing stage, or both. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Sound-field measurement with moving microphones
Katzberg, Fabrice; Mazur, Radoslaw; Maass, Marco; Koch, Philipp; Mertins, Alfred
2017-01-01
Closed-room scenarios are characterized by reverberation, which decreases the performance of applications such as hands-free teleconferencing and multichannel sound reproduction. However, exact knowledge of the sound field inside a volume of interest enables the compensation of room effects and allows for a performance improvement within a wide range of applications. The sampling of sound fields involves the measurement of spatially dependent room impulse responses, where the Nyquist-Shannon sampling theorem applies in the temporal and spatial domains. The spatial measurement often requires a huge number of sampling points and entails other difficulties, such as the need for exact calibration of a large number of microphones. In this paper, a method for measuring sound fields using moving microphones is presented. The number of microphones is customizable, allowing for a tradeoff between hardware effort and measurement time. The goal is to reconstruct room impulse responses on a regular grid from data acquired with microphones between grid positions, in general. For this, the sound field at equidistant positions is related to the measurements taken along the microphone trajectories via spatial interpolation. The benefits of using perfect sequences for excitation, a multigrid recovery, and the prospects for reconstruction by compressed sensing are presented. PMID:28599533
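A perfect sequence has an ideal periodic autocorrelation, zero at every nonzero circular lag, which keeps the deconvolution of room impulse responses well conditioned; this is the benefit the abstract alludes to. The sketch below checks that property for a Zadoff-Chu sequence, one standard family of perfect sequences (the length and root index are arbitrary choices, not the paper's):

    import numpy as np

    N, u = 353, 7                 # odd prime length and root index coprime to N
    n = np.arange(N)
    zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)   # Zadoff-Chu sequence

    spectrum = np.fft.fft(zc)
    acf = np.fft.ifft(spectrum * np.conj(spectrum))  # periodic autocorrelation

    print(abs(acf[0]))            # N at lag 0
    print(abs(acf[1:]).max())     # ~0 at every other lag: a "perfect" sequence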
2013-01-01
Background Previous studies have demonstrated functional and structural temporal lobe abnormalities located close to the auditory cortical regions in schizophrenia. The goal of this study was to determine whether functional abnormalities exist in the cortical processing of musical sound in schizophrenia. Methods Twelve schizophrenic patients and twelve age- and sex-matched healthy controls were recruited, and participants listened to a random sequence of two kinds of sonic entities, intervals (tritones and perfect fifths) and chords (atonal chords, diminished chords, and major triads), of varying degrees of complexity and consonance. The perception of musical sound was investigated by the auditory evoked potentials technique. Results Our results showed that schizophrenic patients exhibited significant reductions in the amplitudes of the N1 and P2 components elicited by musical stimuli, to which consonant sounds contributed more significantly than dissonant sounds. Schizophrenic patients could not perceive the dissimilarity between interval and chord stimuli based on the evoked potentials responses as compared with the healthy controls. Conclusion This study provided electrophysiological evidence of functional abnormalities in the cortical processing of sound complexity and music consonance in schizophrenia. The preliminary findings warrant further investigations for the underlying mechanisms. PMID:23721126
Neural responses to sounds presented on and off the beat of ecologically valid music
Tierney, Adam; Kraus, Nina
2013-01-01
The tracking of rhythmic structure is a vital component of speech and music perception. It is known that sequences of identical sounds can give rise to the percept of alternating strong and weak sounds, and that this percept is linked to enhanced cortical and oscillatory responses. The neural correlates of the perception of rhythm elicited by ecologically valid, complex stimuli, however, remain unexplored. Here we report the effects of a stimulus' alignment with the beat on the brain's processing of sound. Human subjects listened to short popular music pieces while simultaneously hearing a target sound. Cortical and brainstem electrophysiological onset responses to the sound were enhanced when it was presented on the beat of the music, as opposed to shifted away from it. Moreover, the size of the effect of alignment with the beat on the cortical response correlated strongly with the ability to tap to a beat, suggesting that the ability to synchronize to the beat of simple isochronous stimuli and the ability to track the beat of complex, ecologically valid stimuli may rely on overlapping neural resources. These results suggest that the perception of musical rhythm may have robust effects on processing throughout the auditory system. PMID:23717268
Phase-Specific Vocalizations of Male Mice at the Initial Encounter during the Courtship Sequence
Matsumoto, Yui K.; Okanoya, Kazuo
2016-01-01
Mice produce ultrasonic vocalizations featuring a variety of syllables. Vocalizations are observed during social interactions. In particular, males produce numerous syllables during courtship. Previous studies have shown that vocalizations change according to sexual behavior, suggesting that males vary their vocalizations depending on the phase of the courtship sequence. To examine this process, we recorded large sets of mouse vocalizations during male–female interactions and acoustically categorized these sounds into 12 vocal types. We found that males emitted predominantly short syllables during the first minute of interaction, more long syllables in the later phases, and mainly harmonic sounds during mounting. These context- and time-dependent changes in vocalization indicate that vocal communication during courtship in mice consists of at least three stages and imply that each vocalization type has a specific role in a phase of the courtship sequence. Our findings suggest that recording for a sufficiently long time and taking the phase of courtship into consideration could provide more insights into the role of vocalization in mouse courtship behavior in future study. PMID:26841117
Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.
Padilla-Ortiz, Ana L; Ibarra, David
2018-01-01
Lung sounds, which include all sounds that are produced during the mechanism of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds that are heard using a stethoscope are the result of mechanical interactions that indicate operation of the cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds, instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most common respiratory diseases reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. Some new findings regarding methodologies associated with advances in the electronic stethoscope are presented for heart sound auscultation, including the analysis and clarification of the resulting sounds to support a diagnosis based on a quantifiable medical assessment. The availability of high-precision automatic interpretation of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis, as well as potential for intelligent diagnosis of heart and lung diseases.
Measurement and calculation of the sound absorption coefficient of pine wood charcoal
NASA Astrophysics Data System (ADS)
Suh, Jae Gap; Baik, Kyung min; Kim, Yong Tae; Jung, Sung Soo
2013-10-01
Although charcoal has been widely utilized for physical therapy and as a deodorant, water purifier, etc. due to its porous features, research on its role as a sound-absorbing material is rarely found. Thus, the sound absorption coefficients of pine wood charcoal were measured using an impedance tube and were compared with theoretical predictions in the frequency range of 500 to 5000 Hz. The theory developed in the current study considers only the lowest mode propagating along the air channels of the charcoal and shows good agreement with the measurements. As the frequency increases, the sound absorption coefficients of pine wood charcoal also increase, but they remain lower than those of other commonly used sound-absorbing materials.
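The measurement equations are not given in the abstract, but impedance-tube absorption measurements in this band commonly use the two-microphone transfer-function method (in the style of ISO 10534-2), in which the normal-incidence absorption coefficient follows from the complex transfer function between the two microphone positions. A sketch with made-up geometry, and a placeholder where the measured transfer function would go:

    import numpy as np

    c = 343.0    # speed of sound, m/s (room-temperature assumption)
    s = 0.05     # microphone spacing, m (hypothetical)
    x1 = 0.20    # sample surface to the farther microphone, m (hypothetical)

    f = np.linspace(500.0, 5000.0, 200)   # the study's frequency range
    k = 2 * np.pi * f / c                 # wavenumber

    # H12 is the measured transfer function between the microphones; the
    # placeholder below corresponds to a perfectly absorbing termination.
    H12 = np.exp(-1j * k * s)

    Hi, Hr = np.exp(-1j * k * s), np.exp(1j * k * s)
    R = (H12 - Hi) / (Hr - H12) * np.exp(2j * k * x1)  # reflection coefficient
    alpha = 1.0 - np.abs(R) ** 2                       # absorption coefficient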
Linkage of Speech Sound Disorder to Reading Disability Loci
ERIC Educational Resources Information Center
Smith, Shelley D.; Pennington, Bruce F.; Boada, Richard; Shriberg, Lawrence D.
2005-01-01
Background: Speech sound disorder (SSD) is a common childhood disorder characterized by developmentally inappropriate errors in speech production that greatly reduce intelligibility. SSD has been found to be associated with later reading disability (RD), and there is also evidence for both a cognitive and etiological overlap between the two…
Nöstl, Anatole; Marsh, John E; Sörqvist, Patrik
2014-01-01
Participants were requested to respond to a sequence of visual targets while listening to a well-known lullaby. One of the notes in the lullaby was occasionally exchanged with a pattern deviant. Experiment 1 found that deviants capture attention as a function of the pitch difference between the deviant and the replaced/expected tone. However, when the pitch difference between the expected tone and the deviant tone is held constant, a violation of the direction-of-pitch change across tones can also capture attention (Experiment 2). Moreover, in more complex auditory environments, wherein it is difficult to build a coherent neural model of the sound environment from which expectations are formed, deviations can capture attention, but it appears to matter less whether this is a violation from a specific stimulus or a violation of the current direction-of-change (Experiment 3). The results support the expectation violation account of auditory distraction and suggest that there are at least two different expectations that can be violated: one appears to be bound to a specific stimulus, and the other would seem to be bound to a more global cross-stimulus rule, such as the direction-of-change based on a sequence of preceding sound events. Factors like the base-rate probability of tones within the sound environment might become the driving mechanism of attentional capture, rather than violated expectations, in complex sound environments.
Tani, Toshiki; Abe, Hiroshi; Hayami, Taku; Banno, Taku; Kitamura, Naohito; Mashiko, Hiromi
2018-01-01
Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organizations of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5–16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the mediolateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions. PMID:29736410
Disher, Timothy C; Benoit, Britney; Inglis, Darlene; Burgess, Stacy A; Ellsmere, Barbara; Hewitt, Brenda E; Bishop, Tanya M; Sheppard, Christopher L; Jangaard, Krista A; Morrison, Gavin C; Campbell-Yeo, Marsha L
To identify baseline sound levels, patterns of sound levels, and potential barriers and facilitators to sound level reduction. The study setting was neonatal and pediatric intensive care units in a tertiary care hospital. Participants were staff in both units and parents of currently hospitalized children or infants. One 24-hour sound measurement and one 4-hour sound measurement linked to observed sound events were conducted in each area of the center's neonatal intensive care unit. Two of each measurement type were conducted in the pediatric intensive care unit. Focus groups were conducted with parents and staff. Transcripts were analyzed with descriptive content analysis and themes were compared against results from quantitative measurements. Sound levels exceeded recommended standards at nearly every time point. The most common code was related to talking. Themes from focus groups included the critical care context and sound levels, effects of sound levels, and reducing sound levels: the way forward. Results are consistent with work conducted in other critical care environments. Staff and families realize that high sound levels can be a problem, but feel that the culture and context are not supportive of a quiet care space. High levels of ambient sound suggest that the largest changes in sound levels are likely to come from design and equipment purchase decisions. L10 and Lmax appear to be the best outcomes for measurement of behavioral interventions.
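For readers outside acoustics, the outcome metrics named at the end are time statistics of the sound pressure level: Lmax is the highest level in the window, and L10 is the level exceeded 10% of the time (i.e., the 90th percentile). A minimal sketch over hypothetical one-second A-weighted samples:

    import numpy as np

    spl = np.random.normal(55.0, 5.0, 3600)  # hypothetical 1-s LAeq values, dBA

    Lmax = spl.max()                          # loudest second in the hour
    L10 = np.percentile(spl, 90)              # exceeded 10% of the time
    Leq = 10 * np.log10(np.mean(10 ** (spl / 10)))  # energy-average level

    print(round(Lmax, 1), round(L10, 1), round(Leq, 1))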
'Noises in the head': a prospective study to characterize intracranial sounds after cranial surgery.
Sivasubramaniam, Vinothan; Alg, Varinder Singh; Frantzias, Joseph; Acharya, Shami Yesha; Papadopoulos, Marios Costa; Martin, Andrew James
2016-08-01
Patients often report sounds in the head after craniotomy. We aim to characterize the prevalence and nature of these sounds, and identify any patient, pathology, or technical factors related to them. These data may be used to inform patients of this sometimes unpleasant, but harmless effect of cranial surgery. Prospective observational study of patients undergoing cranial surgery with dural opening. Eligible patients completed a questionnaire preoperatively and daily after surgery until discharge. Subjects were followed up at 14 days with a telephone consultation. One hundred fifty-one patients with various pathologies were included. Of these, 47 (31 %) reported hearing sounds in their head, lasting an average 4-6 days (median, 4 days, mean, 6 days, range, 1-14 days). The peak onset was the first postoperative day and the most commonly used descriptors were 'clicking' [20/47 (43 %)] and 'fluid moving' in the head [9/47 (19 %)]. A significant proportion (42 %, 32/77) without a wound drain experienced intracranial sounds compared to those with a drain (20 %, 15/74, p < 0.01); there was no difference between suction and gravity drains. Approximately a third of the patients in both groups (post-craniotomy sounds group: 36 %, 17/47; group not reporting sounds: 31 %, 32/104), had postoperative CT scans for unrelated reasons: 73 % (8/11) of those with pneumocephalus experienced intracranial sounds, compared to 24 % (9/38) of those without pneumocephalus (p < 0.01). There was no significant association with craniotomy site or size, temporal bone drilling, bone flap replacement, or filling of the surgical cavity with fluid. Sounds in the head after cranial surgery are common, affecting 31 % of patients. This is the first study into this subject, and provides valuable information useful for consenting patients. The data suggest pneumocephalus as a plausible explanation with which to reassure patients, rather than relying on anecdotal evidence, as has been the case to date.
Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues
Liu, Andrew S K; Tsunada, Joji; Gold, Joshua I; Cohen, Yale E
2015-01-01
Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects the perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency of how auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects' speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence.
Wavelet Packet Entropy for Heart Murmurs Classification
Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Ranga, Sri
2012-01-01
Heart murmurs are the first signs of cardiac valve disorders. Several studies have been conducted in recent years to automatically differentiate normal heart sounds from heart sounds with murmurs using various types of audio features. Entropy was successfully used as a feature to distinguish different heart sounds. In this paper, a new entropy measure was introduced to analyze heart sounds, and the feasibility of using this entropy in the classification of five types of heart sounds and murmurs was shown. The measure had previously been introduced to analyze mammograms. Four common murmurs were considered, including aortic regurgitation, mitral regurgitation, aortic stenosis, and mitral stenosis. Wavelet packet transform was employed for heart sound analysis, and the entropy was calculated for deriving feature vectors. Five types of classification were performed to evaluate the discriminatory power of the generated features. The best results were achieved by BayesNet with 96.94% accuracy. The promising results substantiate the effectiveness of the proposed wavelet packet entropy for heart sounds classification. PMID:23227043
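The abstract does not specify the entropy adapted from mammogram analysis, so as a hedged illustration, the sketch below computes a conventional Shannon entropy over the energy distribution of wavelet packet terminal nodes with PyWavelets; it captures the general feature-construction recipe (decompose, then summarize the bands with an entropy), not the paper's exact measure:

    import numpy as np
    import pywt

    def wavelet_packet_entropy(signal, wavelet="db4", level=4):
        """Shannon entropy of energy across wavelet packet terminal nodes."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        energies = np.array([np.sum(node.data ** 2)
                             for node in wp.get_level(level, order="natural")])
        p = energies / energies.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    frame = np.random.randn(2048)   # placeholder frame, not a real heart sound
    print(wavelet_packet_entropy(frame))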
Replacing the Orchestra? – The Discernibility of Sample Library and Live Orchestra Sounds
Wolf, Anna; Platz, Friedrich; Mons, Jan
2016-01-01
Recently, musical sounds from pre-recorded orchestra sample libraries (OSL) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source at 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons. PMID:27382932
An automatic eye detection and tracking technique for stereo video sequences
NASA Astrophysics Data System (ADS)
Paduru, Anirudh; Charalampidis, Dimitrios; Fouts, Brandon; Jovanovich, Kim
2009-05-01
Human-computer interfacing (HCI) describes a system or process with which two information processors, namely a human and a computer, attempt to exchange information. Computer-to-human (CtH) information transfer has been relatively effective through visual displays and sound devices. On the other hand, the human-to-computer (HtC) interfacing avenue has yet to reach its full potential. For instance, the most common HtC communication means are the keyboard and mouse, which are already becoming a bottleneck in the effective transfer of information. The solution to the problem is the development of algorithms that allow the computer to understand human intentions based on facial expressions, head motion patterns, and speech. In this work, we are investigating the feasibility of a stereo system to effectively determine the head position, including the head rotation angles, based on the detection of eye pupils.
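The paper's own stereo algorithm is not reproduced in the abstract; as a point of reference, eye detection per camera can be done with OpenCV's stock Haar cascades, after which a stereo system would triangulate the matched detections. The cascade file names and API calls below are standard OpenCV, while the input file and tuning values are illustrative:

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    frame = cv2.imread("frame_left.png")  # one frame of a stereo pair (hypothetical)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]      # restrict the eye search to the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

    cv2.imwrite("eyes_detected.png", frame)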
Probable reasons for expressed agitation in persons with dementia.
Ragneskog, H; Gerdner, L A; Josefsson, K; Kihlgren, M
1998-05-01
Nursing home patients with dementia were videotaped in three previous studies. Sixty sequences of nine patients exhibiting agitated behaviors were examined to identify the most probable antecedents to agitation. Probable reasons were interpreted and applied to the Progressively Lowered Stress Threshold model, which suggests that agitation is stress related. Analysis suggests that agitation often serves as a form of communication. Two underlying reasons seem to be that the patient had lost control over the situation and lacked autonomy. The most common causes for expressed agitation were interpreted as discomfort, a wish to be served immediately, conflict between patients or with nursing staff, reactions to environmental noises or sound, and invasion of personal space. It is recommended that nursing staff promote autonomy and independence for this group of patients whenever possible. By evaluating probable reasons for expressed agitation, the nursing staff can take steps to prevent or alleviate agitation.
Inferior Frontal Sensitivity to Common Speech Sounds Is Amplified by Increasing Word Intelligibility
ERIC Educational Resources Information Center
Vaden, Kenneth I., Jr.; Kuchinsky, Stefanie E.; Keren, Noam I.; Harris, Kelly C.; Ahlstrom, Jayne B.; Dubno, Judy R.; Eckert, Mark A.
2011-01-01
The left inferior frontal gyrus (LIFG) exhibits increased responsiveness when people listen to words composed of speech sounds that frequently co-occur in the English language (Vaden, Piquado, & Hickok, 2011), termed high phonotactic frequency (Vitevitch & Luce, 1998). The current experiment aimed to further characterize the relation of…
ERIC Educational Resources Information Center
McLeod, Sharynne; Crowe, Kathryn; Masso, Sarah; Baker, Elise; McCormack, Jane; Wren, Yvonne; Roulstone, Susan; Howland, Charlotte
2017-01-01
Speech sound disorders are a common communication difficulty in preschool children. Teachers indicate difficulty identifying and supporting these children. The aim of this research was to describe speech and language characteristics of children identified by their parents and/or teachers as having possible communication concerns. 275 Australian 4-…
Melodic Priming of Motor Sequence Performance: The Role of the Dorsal Premotor Cortex.
Stephan, Marianne A; Brown, Rachel; Lega, Carlotta; Penhune, Virginia
2016-01-01
The purpose of this study was to determine whether exposure to specific auditory sequences leads to the induction of new motor memories and to investigate the role of the dorsal premotor cortex (dPMC) in this crossmodal learning process. Fifty-two young healthy non-musicians were familiarized with the sound to key-press mapping on a computer keyboard and tested on their baseline motor performance. Each participant received subsequently either continuous theta burst stimulation (cTBS) or sham stimulation over the dPMC and was then asked to remember a 12-note melody without moving. For half of the participants, the contour of the melody memorized was congruent to a subsequently performed, but never practiced, finger movement sequence (Congruent group). For the other half, the melody memorized was incongruent to the subsequent finger movement sequence (Incongruent group). Hearing a congruent melody led to significantly faster performance of a motor sequence immediately thereafter compared to hearing an incongruent melody. In addition, cTBS speeded up motor performance in both groups, possibly by relieving motor consolidation from interference by the declarative melody memorization task. Our findings substantiate recent evidence that exposure to a movement-related tone sequence can induce specific, crossmodal encoding of a movement sequence representation. They further suggest that cTBS over the dPMC may enhance early offline procedural motor skill consolidation in cognitive states where motor consolidation would normally be disturbed by concurrent declarative memory processes. These findings may contribute to a better understanding of auditory-motor system interactions and have implications for the development of new motor rehabilitation approaches using sound and non-invasive brain stimulation as neuromodulatory tools.
ARTICULATION OF SPEECH SOUNDS OF SERBIAN LANGUAGE IN CHILDREN AGED SIX TO EIGHT.
Mihajlović, Biljana; Cvjetićanin, Bojana; Veselinović, Mila; Škrbić, Renata; Mitrović, Slobodan M
2015-01-01
The phonetic and phonological system of healthy members of a linguistic community is fully formed around 8 years of age. Auditory and articulatory habits become established with age and are more difficult to upgrade and complete later. The research was done as a cross-sectional study, conducted at the preschool institution "Radosno detinjstvo" and primary school "Branko Radičević" in Novi Sad. It included 66 children of both genders, aged 6 to 8. The quality of articulation was determined according to the Global Articulation Test by working with each child individually. For each vowel, plosive, nasal, lateral and fricative, the quality of articulation was statistically significantly better in the first graders compared to the preschool children (p<0.01). For each affricate, except for the sound /ć/, the quality of articulation was statistically significantly better in the first graders than in the preschool children (p<0.01). The quality of articulation of all speech sounds was statistically significantly better in the first graders than in the preschool children (p<0.01). The most common disorder of articulation is distortion, while substitution, alone or combined with distortion, is less common. Omission does not occur in children from 6 to 8 years of age. Girls have slightly better quality of articulation. Articulatory disorders are more common in preschool children than in children in the first grade of primary school. The most commonly mispronounced sounds belong to the groups of affricates and fricatives.
The influence of crowd density on the sound environment of commercial pedestrian streets.
Meng, Qi; Kang, Jian
2015-04-01
Commercial pedestrian streets are very common in China and Europe, with many situated in historic or cultural centres. The environments of these streets are important, including their sound environments. The objective of this study is to explore the relationships between the crowd density and the sound environments of commercial pedestrian streets. On-site measurements were performed at the case study site in Harbin, China, and a questionnaire was administered. The sound pressure measurements showed that the crowd density has an insignificant effect on sound pressure below 0.05 persons/m², whereas when the crowd density is greater than 0.05 persons/m², the sound pressure increases with crowd density. The sound sources were analysed, showing that several typical sound sources, such as traffic noise, can be masked by the sounds resulting from dense crowds. The acoustic analysis showed that crowd densities outside the range of 0.10 to 0.25 persons/m² exhibited lower acoustic comfort evaluation scores. In terms of audiovisual characteristics, the subjective loudness increases with greater crowd density, while the acoustic comfort decreases. The results for an indoor underground shopping street are also presented for comparison. Copyright © 2014 Elsevier B.V. All rights reserved.
A Principle for Network Science
2011-02-01
we consider is the sound of splashing water from a leaky faucet. This sequence of water drops can set your teeth on edge and leads to tossing and... intermittent sequence of water drops from a leaky faucet is described by a Lévy stable distribution that is an asymptotically inverse power-law with index... universality of physics: the conservation of energy, symmetry principles, and the laws of thermodynamics have no analogs in the soft sciences. This
ERIC Educational Resources Information Center
Braarud, Hanne Cecilie; Stormark, Kjell Morten
2008-01-01
The purpose of this study was to examine 32 mothers' sensitivity to social contingency during face-to-face interaction with their two- to four-month-old infants in a closed-circuit TV set-up. Prosodic qualities and vocal sounds in mothers' infant-directed (ID) speech during sequences of live interaction were compared to sequences where expressive…
Characterizing the 3-D atmosphere with NUCAPS sounding products from multiple platforms
NASA Astrophysics Data System (ADS)
Barnet, C. D.; Smith, N.; Gambacorta, A.; Wheeler, A. A.; Sjoberg, W.; Goldberg, M.
2017-12-01
The JPSS Proving Ground and Risk Reduction (PGRR) Program launched the Sounding Initiative in 2014 to develop operational applications that use 3-D satellite soundings. These are near-global daily swaths of vertical atmospheric profiles of temperature, moisture and trace gas species. When high vertical resolution satellite soundings first became available, their assimilation into user applications was slow: forecasters familiar with 2-D satellite imagery or 1-D radiosondes had neither the technical capability nor the product knowledge to readily ingest satellite soundings. Similarly, the satellite sounding developer community lacked the wherewithal to understand the many challenges forecasters face in their real-time decision-making. It took the PGRR Sounding Initiative to bring these two communities together and develop novel applications that now depend on NUCAPS soundings. NUCAPS - the NOAA Unique Combined Atmospheric Processing System - is platform agnostic and generates satellite soundings from measurements made by infrared and microwave sounder pairs on the MetOp (IASI/AMSU) and Suomi NPP (CrIS/ATMS) polar-orbiting platforms. We highlight here three new applications developed under the PGRR Sounding Initiative: (i) aviation: NUCAPS identifies cold air "blobs" that cause jet fuel to freeze; (ii) severe weather: NUCAPS identifies areas of convective initiation; and (iii) air quality: NUCAPS identifies stratospheric intrusions and tracks long-range transport of biomass burning plumes. The value of NUCAPS being platform agnostic will become apparent with the JPSS-1 launch. NUCAPS soundings from Suomi NPP and JPSS-1, being ~50 min apart, could capture fast-changing weather events and, together with NUCAPS soundings from the two MetOp platforms (~4 hours earlier in the day than JPSS), could characterize diurnal cycles. In this paper, we will summarize key accomplishments and assess whether NUCAPS maintains enough continuity in its sounding products from multiple platforms to sufficiently characterize atmospheric evolution at localized scales. With this we will address one of the primary data requirements that emerged in the Sounding Initiative, namely the need for a time sequence of satellite sounding products.
Engel, Annerose; Bangert, Marc; Horbank, David; Hijmans, Brenda S; Wilkens, Katharina; Keller, Peter E; Keysers, Christian
2012-11-01
To investigate the cross-modal transfer of movement patterns necessary to perform melodies on the piano, 22 non-musicians learned to play short sequences on a piano keyboard by (1) merely listening and replaying (vision of own fingers occluded) or (2) merely observing silent finger movements and replaying (on a silent keyboard). After training, participants recognized with above chance accuracy (1) audio-motor learned sequences upon visual presentation (89±17%), and (2) visuo-motor learned sequences upon auditory presentation (77±22%). The recognition rates for visual presentation significantly exceeded those for auditory presentation (p<.05). fMRI revealed that observing finger movements corresponding to audio-motor trained melodies is associated with stronger activation in the left rolandic operculum than observing untrained sequences. This region was also involved in silent execution of sequences, suggesting that a link to motor representations may play a role in cross-modal transfer from the audio-motor training condition to visual recognition. No significant differences in brain activity were found during listening to visuo-motor trained compared to untrained melodies. Cross-modal transfer was stronger from the audio-motor training condition to visual recognition, and this is discussed in relation to the fact that non-musicians are familiar with how their finger movements look (motor-to-vision transformation), but not with how they sound on a piano (motor-to-sound transformation). Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Fielding, C. R.; Browne, G. H.; Field, B.; Florindo, F.; Harwood, D. M.; Krissek, L. A.; Levy, R. H.; Panter, K.; Passchier, S.; Pekar, S. F.; SMS Science Team
2008-12-01
Present understanding of Antarctic climate change during the Early to Middle Miocene, including definition of major cycles of glacial expansion and contraction, relies in large part on stable isotope proxy records from Ocean Drilling Program cores. Here, we present a sequence stratigraphic analysis of the Southern McMurdo Sound drillcore (AND-2A), which was acquired during the Austral Spring of 2007. This core offers a hitherto unavailable ice-proximal stratigraphic archive of the Early to Middle Miocene from a high-accommodation Antarctic continental margin setting, and provides clear evidence of repeated fluctuations in climate, ice expansion/contraction and attendant sea-level change over the period 20-14 Ma, with a more fragmentary record of the post-14 Ma period. A succession of seventy sequences is recognized, each bounded by a significant facies dislocation (sequence boundary), composed internally of deposits of glacimarine to open shallow marine environments, and each typically dominated by the transgressive systems tract. From changes in facies abundances and sequence character, a series of long-term (m.y.) changes in climate and relative sea-level is identified. The lithostratigraphy can be correlated confidently to glacial events Mi1b and Mi2, to the Miocene Climatic Optimum, and to the global eustatic sea-level curve. SMS provides a detailed, direct, ice-proximal reference point from which to evaluate stable isotope proxy records for Neogene Antarctic paleoclimate.
Learning builds on learning: Infants' use of native language sound patterns to learn words
Graf Estes, Katharine
2014-01-01
The present research investigated how infants apply prior knowledge of environmental regularities to support new learning. The experiments tested whether infants could exploit experience with native language (English) phonotactic patterns to facilitate associating sounds with meanings during word learning. Fourteen-month-olds heard fluent speech that contained cues for detecting target words; the words were embedded in sound sequences that occur across word boundaries. A separate group heard the target words embedded without word boundary cues. Infants then participated in an object label-learning task. With the opportunity to use native language patterns to segment the target words, infants subsequently learned the labels. Without this experience, infants failed. Novice word learners can take advantage of early learning about sounds to scaffold lexical development. PMID:24980741
NASA Astrophysics Data System (ADS)
Weissglass, Christine A.
This dissertation investigates transfer and markedness in bilingual and L2 Spanish stop-rhotic sequences (e.g., the 'br' in brisa 'breeze'). It also examines the phonetics-phonology interface in Spanish. To this end, it explores the production of these sequences in two different experiments. Experiment 1 compares the production of these sequences by 6 Spanish monolinguals and 6 Spanish-Basque bilinguals. Experiment 2 does so for 25 L2 learners and 5 native Spanish speakers. Acoustic analysis of these sequences revealed that Spanish-Basque bilinguals produced trills 5% of the time whereas Spanish monolinguals did not have any trills. Additionally, fricative rhotics and coarticulation accounted for 35% of L2 realizations, but were not present in the native Spanish speaker dataset. These findings indicate a role for transfer in both bilingual and L2 phonological acquisition, although it is more prevalent in the L2 learner dataset. This is in line with the Speech Learning Model (Flege, 1995), which posits a stronger role for transfer amongst late learners (i.e., L2 learners) than early learners (i.e., Spanish-Basque bilinguals). In order to examine the role of markedness in bilingual and L2 phonological acquisition, this dissertation investigates the role of sonority in bilingual and L2 Spanish syllable structure. To do so, it proposes a sonority hierarchy for rhotic variants based on their specifications for voicing, intensity and continuancy. According to this hierarchy, approximant rhotics are the most sonorous, followed by taps, trills and fricative rhotics. Therefore, approximant rhotics were expected to be the most common realization followed by taps, trills and fricative rhotics. Although Spanish monolinguals adhered to this expectation, the other groups did not; taps were the most common realization for Spanish-Basque bilinguals, L2 learners, and native Spanish speakers and fricative rhotics were more common than trills for Spanish-Basque bilinguals and L2 learners. These results suggest an interaction between transfer and markedness, consistent with Major (2001). They also reflect dialectal differences in native Spanish speakers. Finally, this dissertation explores the phonetic-phonology interface in Spanish in two ways. First, it investigates the function of svarabhakti vowels, vocalic elements of variable duration that emerge between consonants, in Spanish stop-rhotic sequences. For the most part, the findings support a dissimilatory role for svarabhakti vowels in this context (see also Colantoni & Steele, 2005). Second, in order to examine the impact of gestural timing in Spanish stop-rhotic realization, it considers the role of the sounds surrounding the rhotic (see also Bradley & Schmeiser, 2003). The results can be explained in terms of different degrees of gestural overlap for all groups except L2 learners, which may be due to a strong role of transfer.
Effect of Pile-Driving Sounds on the Survival of Larval Fish.
Bolle, Loes J; de Jong, Christ A F; Bierman, Stijn M; van Beek, Pieter J G; Wessels, Peter W; Blom, Ewout; van Damme, Cindy J G; Winter, Hendrik V; Dekeling, René P A
2016-01-01
Concern exists about the potential effects of pile-driving sounds on fish, but evidence is limited, especially for fish larvae. A device was developed to expose larvae to accurately reproduced pile-driving sounds. Controlled exposure experiments were carried out to examine the lethal effects in common sole larvae. No significant effects were observed at zero-to-peak pressure levels up to 210 dB re 1 μPa² and cumulative sound exposure levels up to 206 dB re 1 μPa²·s, which is well above the US interim criteria for nonauditory tissue damage in fish. Experiments are presently being carried out for European sea bass and herring larvae.
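For readers unfamiliar with cumulative sound exposure levels, the standard relation between single-strike and cumulative SEL (a textbook definition, not taken from the paper itself) is the energy sum of the individual strikes:

```latex
% Cumulative SEL over N pile strikes: exposures add linearly in energy,
% so levels combine logarithmically (standard definition, not from the paper).
\[
  \mathrm{SEL}_{\mathrm{cum}}
  = 10 \log_{10} \sum_{i=1}^{N} 10^{\mathrm{SEL}_{ss,i}/10}
\]
% For N identical strikes of single-strike level SEL_ss this reduces to
\[
  \mathrm{SEL}_{\mathrm{cum}} = \mathrm{SEL}_{ss} + 10 \log_{10} N .
\]
```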
Horváth, János; Sussman, Elyse; Winkler, István; Schröger, Erich
2011-01-01
Rare irregular sounds (deviants) embedded in a regular sound sequence have a strong potential to draw attention to themselves (distraction). It has been previously shown that distraction, as manifested by behavioral response delay and by the P3a and reorienting negativity (RON) event-related potentials, could be reduced when the forthcoming deviant was signaled by visual cues preceding the sounds. In the present study, we investigated the type of information used in the prevention of distraction by manipulating the information content of the visual cues preceding the sounds. Cues could signal the specific variant of the forthcoming deviant, or they could just signal that the next tone was a deviant. We found that stimulus-specific cue information was used in reducing distraction. The results also suggest that the early P3a and RON index processes related to the specific deviating stimulus feature, whereas the late P3a reflects a general distraction-related process. PMID:21310210
Dunlop, Rebecca A; Noad, Michael J; Cato, Douglas H; Stokes, Dale
2007-11-01
Although the songs of humpback whales have been extensively studied, other vocalizations and percussive sounds, referred to as "social sounds," have received little attention. This study presents the social vocalization repertoire of migrating east Australian humpback whales from a sample of 660 sounds recorded from 61 groups of varying composition, over three years. The social vocalization repertoire of humpback whales was much larger than previously described with a total of 34 separate call types classified aurally and by spectrographic analysis as well as statistically. Of these, 21 call types were the same as units of the song current at the time of recording but used individually instead of as part of the song sequence, while the other 13 calls were stable over the three years of the study and were not part of the song. This study provides a catalog of sounds that can be used as a basis for future studies. It is an essential first step in determining the function, contextual use and cultural transmission of humpback social vocalizations.
Evaluation of selective attention in patients with misophonia.
Silva, Fúlvia Eduarda da; Sanchez, Tanit Ganz
2018-03-21
Misophonia is characterized by aversion to very selective sounds, which evoke a strong emotional reaction. It has been inferred that misophonia, like tinnitus, is associated with hyperconnectivity between the auditory and limbic systems. Individuals with bothersome tinnitus may have selective attention impairment, but this has not yet been demonstrated for misophonia. Our aim was to characterize a sample of misophonic subjects and compare it with two control groups, one of tinnitus individuals (without misophonia) and the other of asymptomatic individuals (without misophonia and without tinnitus), regarding selective attention. We evaluated 40 normal-hearing participants: 10 with misophonia, 10 with tinnitus (without misophonia) and 20 without tinnitus and without misophonia. To evaluate selective attention, the dichotic sentence identification test was applied in three situations: first, the Brazilian Portuguese version of the test was applied alone; then, the same test was applied combined with each of two competing sounds: a chewing sound (representing a sound that commonly triggers misophonia) and white noise (representing a common type of tinnitus which causes discomfort to patients). In the dichotic sentence identification test with the chewing sound, the average of correct responses differed between the misophonia group and the asymptomatic group (p=0.027) and between the misophonia group and the tinnitus group (p=0.002), being lower in the misophonia group in both cases. Both the dichotic sentence identification test alone and the test with white noise failed to show differences in the average of correct responses among the three groups (p≥0.452). The misophonia participants presented a lower percentage of correct responses in the dichotic sentence identification test with the chewing sound, suggesting that individuals with misophonia may have selective attention impairment when they are exposed to sounds that trigger this condition. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.
2008-01-01
In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not be purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
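The CDL-to-NetCDF conversion chain described above can also be scripted directly. Below is a minimal sketch using the netCDF4 Python library; the dimension, variable, and attribute names are illustrative stand-ins, since the actual AWIPS sounding template prescribes its own required structure:

```python
import numpy as np
from netCDF4 import Dataset

# Write a toy sounding file. All names below (dimension, variables,
# "site_id") are illustrative stand-ins; the real AWIPS sounding
# template defines its own variable names and attributes.
with Dataset("composite_sounding.nc", "w", format="NETCDF3_CLASSIC") as nc:
    nc.createDimension("level", 30)                 # vertical levels
    p = nc.createVariable("pressure", "f4", ("level",))
    t = nc.createVariable("temperature", "f4", ("level",))
    td = nc.createVariable("dewpoint", "f4", ("level",))
    p.units = "hPa"
    t.units = "K"
    td.units = "K"
    nc.site_id = "XMRF"                             # hypothetical 4-char ID
    p[:] = np.linspace(1000.0, 100.0, 30)           # synthetic profile
    t[:] = np.linspace(295.0, 210.0, 30)
    td[:] = t[:] - 5.0
```

Running ncdump on the resulting file renders it back to the CDL text form described in the abstract.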
NASA Astrophysics Data System (ADS)
Sanderson, Mark I.; Simmons, James A.
2005-11-01
Echolocating big brown bats (Eptesicus fuscus) emit trains of frequency-modulated (FM) biosonar signals whose duration, repetition rate, and sweep structure change systematically during interception of prey. When stimulated with a 2.5-s sequence of 54 FM pulse-echo pairs that mimic sounds received during search, approach, and terminal stages of pursuit, single neurons (N=116) in the bat's inferior colliculus (IC) register the occurrence of a pulse or echo with an average of <1 spike/sound. Individual IC neurons typically respond to only a segment of the search or approach stage of pursuit, with fewer neurons persisting to respond in the terminal stage. Composite peristimulus-time-histogram plots of responses assembled across the whole recorded population of IC neurons depict the delay of echoes and, hence, the existence and distance of the simulated biosonar target, entirely as on-response latencies distributed across time. Correlated changes in pulse duration, repetition rate, and pulse or echo amplitude do modulate the strength of responses (probability of the single spike actually occurring for each sound), but registration of the target itself remains confined exclusively to the latencies of single spikes across cells. Modeling of echo processing in FM biosonar should emphasize spike-time algorithms to explain the content of biosonar images.
Human response to aircraft noise
NASA Technical Reports Server (NTRS)
Powell, Clemans A.; Fields, James M.
1991-01-01
The human auditory system and the perception of sound are discussed. The major concentration is on the annoyance response and methods for relating the physical characteristics of sound to those psychosociological attributes associated with human response. Results selected from the extensive laboratory and field research conducted on human response to aircraft noise over the past several decades are presented along with discussions of the methodology commonly used in conducting that research. Finally, some of the more common criteria, regulations, and recommended practices for the control or limitation of aircraft noise are examined in light of the research findings on human response.
NASA Astrophysics Data System (ADS)
Ishikawa, K.; Yatabe, K.; Ikeda, Y.; Oikawa, Y.; Onuma, T.; Niwa, H.; Yoshii, M.
2017-02-01
Imaging of sound aids the understanding of acoustical phenomena such as propagation, reflection, and diffraction, which is strongly required for various acoustical applications. The imaging of sound is commonly done by using a microphone array, whereas optical methods have recently attracted interest due to their contactless nature. The optical measurement of sound utilizes the phase modulation of light caused by sound. Since light propagating through a sound field changes its phase in proportion to the sound pressure, optical phase measurement techniques can be used for sound measurement. Several methods, including laser Doppler vibrometry and the Schlieren method, have been proposed for that purpose. However, the sensitivities of these methods become lower as the frequency of sound decreases. In contrast, since the sensitivity of the phase-shifting technique does not depend on the frequency of sound, that technique is suitable for the imaging of sounds in the low-frequency range. The principle of imaging of sound using parallel phase-shifting interferometry was reported by the authors (K. Ishikawa et al., Optics Express, 2016). The measurement system consists of a high-speed polarization camera made by Photron Ltd. and a polarization interferometer. This paper reviews the principle briefly and demonstrates the high-speed imaging of acoustical phenomena. The results suggest that the proposed system can be applied to various industrial problems in acoustical engineering.
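The measurement principle rests on the sound-induced change of the refractive index along the optical path. A first-order textbook relation (a sketch of the general acousto-optic mechanism, not the authors' own derivation) makes the proportionality explicit:

```latex
% Optical phase accumulated by light crossing a sound field. k is the
% optical wavenumber, L the geometric path length, and p(x, t) the
% sound pressure along the beam (first-order textbook relation).
\[
  \Delta\phi(t) = k \int_{0}^{L} \Delta n(x, t)\, \mathrm{d}x ,
  \qquad
  \Delta n(x, t) \approx \frac{\partial n}{\partial p}\, p(x, t),
\]
% so the measured phase is proportional to the line integral of the
% sound pressure along the optical path.
```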
Object representation in the human auditory system
Winkler, István; van Zuijen, Titia L.; Sussman, Elyse; Horváth, János; Näätänen, Risto
2010-01-01
One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch ’border’ between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision. PMID:16836636
Kharlamov, Viktor; Campbell, Kenneth; Kazanina, Nina
2011-11-01
Speech sounds are not always perceived in accordance with their acoustic-phonetic content. For example, an early and automatic process of perceptual repair, which ensures conformity of speech inputs to the listener's native language phonology, applies to individual input segments that do not exist in the native inventory or to sound sequences that are illicit according to the native phonotactic restrictions on sound co-occurrences. The present study with Russian and Canadian English speakers shows that listeners may perceive phonetically distinct and licit sound sequences as equivalent when the native language system provides robust evidence for mapping multiple phonetic forms onto a single phonological representation. In Russian, due to an optional but productive t-deletion process that affects /stn/ clusters, the surface forms [sn] and [stn] may be phonologically equivalent and map to a single phonological form /stn/. In contrast, [sn] and [stn] clusters are usually phonologically distinct in (Canadian) English. Behavioral data from identification and discrimination tasks indicated that [sn] and [stn] clusters were more confusable for Russian than for English speakers. The EEG experiment employed an oddball paradigm with nonwords [asna] and [astna] used as the standard and deviant stimuli. A reliable mismatch negativity response was elicited approximately 100 msec postchange in the English group but not in the Russian group. These findings point to a perceptual repair mechanism that is engaged automatically at a prelexical level to ensure immediate encoding of speech inputs in phonological terms, which in turn enables efficient access to the meaning of a spoken utterance.
Development of an ICT-Based Air Column Resonance Learning Media
NASA Astrophysics Data System (ADS)
Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut
2016-08-01
Commonly, the sound source used in air column resonance experiments is a tuning fork, which has the disadvantage of producing suboptimal resonance results because the sound it makes steadily weakens. In this study we made tones of varying frequency using the Audacity software, which were then stored in a mobile phone to serve as the sound source. One advantage of this sound source is the stability of the resulting tone, which remains equally strong throughout. The movement of water in a glass tube mounted on the resonance apparatus and the tone sounding from the mobile phone were recorded using a video camera. The first, second, and third resonances were recorded for each tone frequency. Because the resulting sound lasts longer, it can be used for the first, second, third and subsequent resonance experiments. This study aimed to (1) explain how to create tones that can substitute for the tuning fork sound used in air column resonance experiments, (2) illustrate the sound wave that occurred at the first, second, and third resonances in the experiment, and (3) determine the speed of sound in air. This study used an experimental method. It was concluded that: (1) substitute tones for a tuning fork sound can be made using the Audacity software; (2) the form of the sound waves that occurred at the first, second, and third resonances in the air column can be drawn based on the video recordings of the air column resonance; and (3) based on the experimental results, the speed of sound in air is 346.5 m/s, while based on chart analysis with the Logger Pro software, it is 343.9 ± 0.3171 m/s.
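The two computational steps of the experiment, synthesizing a steady tone and deriving the speed of sound from resonance positions, can be sketched as follows. The frequency and resonance lengths below are illustrative values, not the study's data; using the spacing between successive resonances of a closed air column cancels the tube-end correction:

```python
import numpy as np
from scipy.io import wavfile

# (1) Synthesize a steady test tone, playing the role of the Audacity
# tones stored on the mobile phone. 500 Hz is an illustrative choice.
fs, f, dur = 44100, 500.0, 10.0
t = np.arange(int(fs * dur)) / fs
tone = 0.8 * np.sin(2 * np.pi * f * t)
wavfile.write("tone_500Hz.wav", fs, tone.astype(np.float32))

# (2) Speed of sound from two successive resonances of a closed air
# column: resonances occur at odd quarter-wavelengths, so the spacing
# L2 - L1 = lambda/2, and the tube-end correction cancels out.
L1, L2 = 0.17, 0.52          # resonance water levels in metres (illustrative)
v = 2.0 * f * (L2 - L1)      # v = f * lambda
print(f"speed of sound ~ {v:.1f} m/s")   # 350.0 m/s with these values
```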
Sound: An Element Common to Communication of Stingless Bees and to Dances of the Honey Bee.
Esch, H; Esch, I; Kerr, W E
1965-07-16
Sounds are an important part of the communication behavior, the so-called dances, of the honey bee. Stingless bees, which do not use dances for communication, use sound signals to indicate the existence and, in some cases, the distance of a feeding place. The social organization of communities of stingless bees is more primitive than that of honey bees, yet certain common features of communication behavior in these two groups lead to a new hypothesis of the evolution of dancing behavior of the honey bee.
Audio direct broadcast satellites
NASA Technical Reports Server (NTRS)
Miller, J. E.
1983-01-01
Satellite sound broadcasting is, as the name implies, the use of satellite techniques and technology to broadcast directly from space to low-cost, consumer-quality receivers the types of sound programs commonly received in the AM and FM broadcast bands. It would be a ubiquitous service available to the general public in the home, in the car, and out in the open.
Acoustics in Research Facilities--Control of Wanted and Unwanted Sound. Laboratory Design Notes.
ERIC Educational Resources Information Center
Newman, Robert B.
Common and special acoustics problems are discussed in relation to the design and construction of research facilities. Following a brief examination of design criteria for the control of wanted and unwanted sound, the technology for achieving desired results is discussed. Emphasis is given to various design procedures and materials for the control…
Inexpensive Instruments for a Sound Unit
ERIC Educational Resources Information Center
Brazzle, Bob
2011-01-01
My unit on sound and waves is embedded within a long-term project in which my high school students construct a musical instrument out of common materials. The unit culminates with a performance assessment: students play the first four measures of "Somewhere Over the Rainbow"--chosen because of the octave interval of the first two notes--in the key…
A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing
NASA Astrophysics Data System (ADS)
Cobos, Maximo; Lopez, Jose J.; Spors, Sascha
2010-12-01
Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
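As a rough illustration of the sparseness assumption, the sketch below assigns each time-frequency bin to a single direction and renders it binaurally with simple constant-power panning. Real systems, including the one described above, would use array-based DOA estimation and HRTFs rather than this crude interaural-level stand-in; the per-bin azimuth map here is assumed given:

```python
import numpy as np
from scipy.signal import stft, istft

def render_binaural(x, azimuth, fs=16000):
    # STFT with the same parameters used to produce the azimuth map,
    # so the per-bin azimuths line up with the analysis grid.
    f, t, X = stft(x, fs=fs, nperseg=512)
    pan = 0.5 * (1.0 + np.sin(azimuth))   # 0 = hard left, 1 = hard right
    L = np.sqrt(1.0 - pan) * X            # constant-power panning per bin
    R = np.sqrt(pan) * X
    _, left = istft(L, fs=fs, nperseg=512)
    _, right = istft(R, fs=fs, nperseg=512)
    return np.stack([left, right])

fs = 16000
x = np.random.randn(fs)                   # 1 s of noise as a stand-in signal
_, _, X = stft(x, fs=fs, nperseg=512)
az = np.full(X.shape, np.pi / 4)          # assume one source fixed at 45 deg
y = render_binaural(x, az, fs)            # stereo output, shape (2, ~fs)
```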
Semi-Immersive Virtual Turbine Engine Simulation System
NASA Astrophysics Data System (ADS)
Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea
2018-05-01
The design and verification of assembly operations are essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress, and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. The benefits of building and using Virtual Reality (VR) models in assembly process verification are discussed in this paper, in which we present the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sound, and ample and intuitive interaction with the developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, tactile, and force feedback. The system is shown to be effective and efficient for validating the design of assembly, part design, and operations planning.
Port, Jesse A; Wallace, James C; Griffith, William C; Faustman, Elaine M
2012-01-01
Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single-species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected at six sites in Puget Sound, as well as from one wastewater treatment plant (WWTP) that discharges into the Sound, and pyrosequenced. A total of ~550 Mbp (1.4 million reads) was obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. In addition, these genomic signals of potential human impact can be used to guide initial public health monitoring as well as more targeted and functionally-based investigations.
Corollary discharge provides the sensory content of inner speech.
Scott, Mark
2013-09-01
Inner speech is one of the most common, but least investigated, mental activities humans perform. It is an internal copy of one's external voice and so is similar to a well-established component of motor control: corollary discharge. Corollary discharge is a prediction of the sound of one's voice generated by the motor system. This prediction is normally used to filter self-caused sounds from perception, which segregates them from externally caused sounds and prevents the sensory confusion that would otherwise result. The similarity between inner speech and corollary discharge motivates the theory, tested here, that corollary discharge provides the sensory content of inner speech. The results reported here show that inner speech attenuates the impact of external sounds. This attenuation was measured using a context effect (an influence of contextual speech sounds on the perception of subsequent speech sounds), which weakens in the presence of speech imagery that matches the context sound. Results from a control experiment demonstrated this weakening in external speech as well. Such sensory attenuation is a hallmark of corollary discharge.
An automated computerized auscultation and diagnostic system for pulmonary diseases.
Abbas, Ali; Fahim, Atef
2010-12-01
Respiratory sounds are of significance as they provide valuable information on the health of the respiratory system. Sounds emanating from the respiratory system are uneven, and vary significantly from one individual to another and for the same individual over time. In and of themselves they are not direct proof of an ailment, but rather an inference that one exists. Auscultation diagnosis is an art/skill that is acquired and honed by practice; hence it is common to seek confirmation using invasive and potentially harmful imaging techniques like X-rays. This research focuses on developing an automated auscultation diagnostic system that overcomes the limitations inherent in traditional auscultation techniques. The system uses a front-end sound-signal filtering module that applies adaptive neural network (NN) noise cancellation to eliminate spurious sound signals such as those from the heart, the intestine, and ambient noise. To date, the core diagnosis module is capable of distinguishing lung sounds from non-lung sounds, normal lung sounds from abnormal ones, and wheezes from crackles as indicators of different ailments.
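The paper's neural-network front end is not specified in enough detail here to reproduce, but the underlying idea of adaptive noise cancellation can be illustrated with a classical LMS canceller, in which a correlated noise reference is filtered to predict and subtract the contamination:

```python
import numpy as np

def lms_cancel(d, ref, taps=32, mu=0.01):
    """d: contaminated lung-sound recording; ref: correlated noise
    reference (e.g., an ambient microphone). Returns the cleaned signal.
    A classical LMS stand-in for the paper's NN-based front end."""
    w = np.zeros(taps)                  # adaptive FIR weights
    out = np.zeros(len(d))
    for n in range(taps, len(d)):
        x = ref[n - taps:n][::-1]       # most recent reference samples
        e = d[n] - w @ x                # error = cleaned sample estimate
        w += 2.0 * mu * e * x           # LMS weight update
        out[n] = e
    return out

# Toy demo: a 150 Hz "lung sound" buried in filtered broadband noise.
fs = 2000
t = np.arange(2 * fs) / fs
noise = np.random.randn(len(t))
d = np.sin(2 * np.pi * 150 * t) + np.convolve(noise, [0.5, 0.3], "same")
clean = lms_cancel(d, noise)            # converges toward the 150 Hz tone
```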
Newborn infants detect cues of concurrent sound segregation.
Bendixen, Alexandra; Háden, Gábor P; Németh, Renáta; Farkas, Dávid; Török, Miklós; Winkler, István
2015-01-01
Separating concurrent sounds is fundamental for a veridical perception of one's auditory surroundings. Sound components that are harmonically related and start at the same time are usually grouped into a common perceptual object, whereas components that are not in harmonic relation or have different onset times are more likely to be perceived in terms of separate objects. Here we tested whether neonates are able to pick up the cues supporting this sound organization principle. We presented newborn infants with a series of complex tones with their harmonics in tune (creating the percept of a unitary sound object) and with manipulated variants, which gave the impression of two concurrently active sound sources. The manipulated variant had either one mistuned partial (single-cue condition) or the onset of this mistuned partial was also delayed (double-cue condition). Tuned and manipulated sounds were presented in random order with equal probabilities. Recording the neonates' electroencephalographic responses allowed us to evaluate their processing of the sounds. Results show that, in both conditions, mistuned sounds elicited a negative displacement of the event-related potential (ERP) relative to tuned sounds from 360 to 400 ms after sound onset. The mistuning-related ERP component resembles the object-related negativity (ORN) component in adults, which is associated with concurrent sound segregation. Delayed onset additionally led to a negative displacement from 160 to 200 ms, which was probably more related to the physical parameters of the sounds than to their perceptual segregation. The elicitation of an ORN-like response in newborn infants suggests that neonates possess the basic capabilities of segregating concurrent sounds by detecting inharmonic relations between the co-occurring sounds. © 2015 S. Karger AG, Basel.
Pitch discrimination by ferrets for simple and complex sounds.
Walker, Kerry M M; Schnupp, Jan W H; Hart-Schnupp, Sheelah M B; King, Andrew J; Bizley, Jennifer K
2009-09-01
Although many studies have examined the performance of animals in detecting a frequency change in a sequence of tones, few have measured animals' discrimination of the fundamental frequency (F0) of complex, naturalistic stimuli. Additionally, it is not yet clear if animals perceive the pitch of complex sounds along a continuous, low-to-high scale. Here, four ferrets (Mustela putorius) were trained on a two-alternative forced choice task to discriminate sounds that were higher or lower in F0 than a reference sound using pure tones and artificial vowels as stimuli. Average Weber fractions for ferrets on this task varied from approximately 20% to 80% across references (200-1200 Hz), and these fractions were similar for pure tones and vowels. These thresholds are approximately ten times higher than those typically reported for other mammals on frequency change detection tasks that use go/no-go designs. Naive human listeners outperformed ferrets on the present task, but they showed similar effects of stimulus type and reference F0. These results suggest that while non-human animals can be trained to label complex sounds as high or low in pitch, this task may be much more difficult for animals than simply detecting a frequency change.
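For reference, the Weber fraction reported above is the just-discriminable change in fundamental frequency relative to the reference; the worked number is an illustration at the paper's lowest reference, not a figure quoted from it:

```latex
% Weber fraction for F0 discrimination (standard definition); the
% worked example uses the paper's 200 Hz reference for illustration.
\[
  W = \frac{\Delta F_0}{F_0},
  \qquad
  W = 0.20 \;\Rightarrow\; \Delta F_0 = 0.20 \times 200~\mathrm{Hz} = 40~\mathrm{Hz}.
\]
```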
Baleen whale infrasonic sounds: Natural variability and function
NASA Astrophysics Data System (ADS)
Clark, Christopher W.
2004-05-01
Blue and fin whales (Balaenoptera musculus and B. physalus) produce very intense, long, patterned sequences of infrasonic sounds. The acoustic characteristics of these sounds suggest strong selection for signals optimized for very long-range propagation in the deep ocean as first hypothesized by Payne and Webb in 1971. This hypothesis has been partially validated by very long-range detections using hydrophone arrays in deep water. Humpback songs recorded in deep water contain units in the 20-100 Hz range, and these relatively simple song components are detectable out to many hundreds of miles. The mid-winter peak in the occurrence of 20-Hz fin whale sounds led Watkins to hypothesize a reproductive function similar to humpback (Megaptera novaeangliae) song, and by default this function has been extended to blue whale songs. More recent evidence shows that blue and fin whales produce infrasonic calls in high latitudes during the feeding season, and that singing is associated with areas of high productivity where females congregate to feed. Acoustic sampling over broad spatial and temporal scales for baleen species is revealing higher geographic and seasonal variability in the low-frequency vocal behaviors than previously reported, suggesting that present explanations for baleen whale sounds are too simplistic.
Perception of binary acoustic events associated with the first heart sound
NASA Technical Reports Server (NTRS)
Spodick, D. H.
1977-01-01
The resolving power of the auditory apparatus permits discrete vibrations associated with cardiac activity to be perceived as one or more events. Irrespective of the vibratory combinations recorded by conventional phonocardiography, in normal adults and in most adult patients auscultators tend to discriminate only two discrete events associated with the first heart sound S1. It is stressed that the heart sound S4 may be present when a binary acoustic event associated with S1 occurs in the sequence 'low pitched sound preceding high pitched sound', i.e., its components are perceived by auscultation as 'dull-sharp'. The question of S4 audibility arises in those individuals, normal and diseased, in whom the major components of S1 ought to be, at least clinically, at their customary high pitch and indeed on the PCG appear as high frequency oscillations. It is revealed that the apparent audibility of recorded S4 is not related to P-R interval, P-S4 interval, or relative amplitude of S4. The significant S4-LFC (low frequency component of S1) differences can be related to acoustic modification of the early component of S1.
Visualization of Heart Sounds and Motion Using Multichannel Sensor
NASA Astrophysics Data System (ADS)
Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko
2010-06-01
As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist the understanding of heartbeat for both doctors and patients. Heart sounds were first visualized using FFT and wavelet analysis. Next, to show global and simultaneous heart motions, a new technique for visualization was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch interval, which was adhered to the chest surface. The heart motion of one cycle was visualized at a sampling frequency of 3 kHz and quantization of 12 bits. The visualized results showed the strong pressure shocks of the closing tricuspid and mitral valves at the cardiac apex (first sound) and of the closing aortic and pulmonic valves (second sound), in sequence. To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to the digital database management of auscultation examinations in medical areas.
Low-frequency sound affects active micromechanics in the human inner ear
Kugler, Kathrin; Wiegrebe, Lutz; Grothe, Benedikt; Kössl, Manfred; Gürkov, Robert; Krause, Eike; Drexl, Markus
2014-01-01
Noise-induced hearing loss is one of the most common auditory pathologies, resulting from overstimulation of the human cochlea, an exquisitely sensitive micromechanical device. At very low frequencies (less than 250 Hz), however, the sensitivity of human hearing, and therefore the perceived loudness, is poor. Perceived loudness is mediated by the inner hair cells of the cochlea, which are driven very inadequately at low frequencies. To assess the impact of low-frequency (LF) sound, we exploited a by-product of the active sound amplification performed by outer hair cells (OHCs), so-called spontaneous otoacoustic emissions. These are faint sounds produced by the inner ear that can be used to detect changes in cochlear physiology. We show that a short exposure to perceptually unobtrusive LF sounds significantly affects OHCs: a 90 s, 80 dB(A) LF sound induced slow, concordant and positively correlated frequency and level oscillations of spontaneous otoacoustic emissions that lasted for about 2 min after LF sound offset. LF sounds, contrary to their unobtrusive perception, strongly stimulate the human cochlea and affect amplification processes in the most sensitive and important frequency range of human hearing. PMID:26064536
Clicks, whistles and pulses: Passive and active signal use in dolphin communication
NASA Astrophysics Data System (ADS)
Herzing, Denise L.
2014-12-01
The search for signals out of noise is a problem not only with radio signals from the sky but in the study of animal communication. Dolphins use multiple modalities to communicate including body postures, touch, vision, and most elaborately sound. Like SETI radio signal searches, dolphin sound analysis includes the detection, recognition, analysis, and interpretation of signals. Dolphins use both passive listening and active production to communicate. Dolphins use three main types of acoustic signals: frequency modulated whistles (narrowband with harmonics), echolocation (broadband clicks) and burst pulsed sounds (packets of closely spaced broadband clicks). Dolphin sound analysis has focused on frequency-modulated whistles, yet the most commonly used signals are burst-pulsed sounds which, due to their graded and overlapping nature and bimodal inter-click interval (ICI) rates are hard to categorize. We will look at: 1) the mechanism of sound production and categories of sound types, 2) sound analysis techniques and information content, and 3) examples of lessons learned in the study of dolphin acoustics. The goal of this paper is to provide perspective on how animal communication studies might provide insight to both passive and active SETI in the larger context of searching for life signatures.
Golden, Hannah L; Downey, Laura E; Fletcher, Philip D; Mahoney, Colin J; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D
2015-05-15
Recognition of nonverbal sounds in semantic dementia and other syndromes of anterior temporal lobe degeneration may determine clinical symptoms and help to define phenotypic profiles. However, nonverbal auditory semantic function has not been widely studied in these syndromes. Here we investigated semantic processing in two key nonverbal auditory domains - environmental sounds and melodies - in patients with semantic dementia (SD group; n=9) and in patients with anterior temporal lobe atrophy presenting with behavioural decline (TL group; n=7, including four cases with MAPT mutations) in relation to healthy older controls (n=20). We assessed auditory semantic performance in each domain using novel, uniform within-modality neuropsychological procedures that determined sound identification based on semantic classification of sound pairs. Both the SD and TL groups showed comparable overall impairments of environmental sound and melody identification; individual patients generally showed better identification of environmental sounds than of melodies; however, relative sparing of melody over environmental sound identification also occurred in both groups. Our findings suggest that nonverbal auditory semantic impairment is a common feature of neurodegenerative syndromes with anterior temporal lobe atrophy. However, the profile of auditory domain involvement varies substantially between individuals. Copyright © 2015. Published by Elsevier B.V.
Human emotions track changes in the acoustic environment.
Ma, Weiyi; Thompson, William Forde
2015-11-24
Emotional responses to biologically significant events are essential for human survival. Do human emotions lawfully track changes in the acoustic environment? Here we report that changes in acoustic attributes that are well known to interact with human emotions in speech and music also trigger systematic emotional responses when they occur in environmental sounds, including sounds of human actions, animal calls, machinery, or natural phenomena, such as wind and rain. Three changes in acoustic attributes known to signal emotional states in speech and music were imposed upon 24 environmental sounds. Evaluations of stimuli indicated that human emotions track such changes in environmental sounds just as they do for speech and music. Such changes not only influenced evaluations of the sounds themselves, they also affected the way accompanying facial expressions were interpreted emotionally. The findings illustrate that human emotions are highly attuned to changes in the acoustic environment, and reignite a discussion of Charles Darwin's hypothesis that speech and music originated from a common emotional signal system based on the imitation and modification of environmental sounds.
Zhu, Yifan; Hu, Jie; Fan, Xudong; Yang, Jing; Liang, Bin; Zhu, Xuefeng; Cheng, Jianchun
2018-04-24
The fine manipulation of sound fields is critical in acoustics yet is restricted by the coupled amplitude and phase modulations in existing wave-steering metamaterials. Commonly, unavoidable losses make it difficult to control coupling, thereby limiting device performance. Here we show the possibility of tailoring the loss in metamaterials to realize fine control of sound in three-dimensional (3D) space. Quantitative studies on the parameter dependence of reflection amplitude and phase identify quasi-decoupled points in the structural parameter space, allowing arbitrary amplitude-phase combinations for reflected sound. We further demonstrate the significance of our approach for sound manipulation by producing self-bending beams, multifocal focusing, and a single-plane two-dimensional hologram, as well as a multi-plane 3D hologram with quality better than the previous phase-controlled approach. Our work provides a route for harnessing sound via engineering the loss, enabling promising device applications in acoustics and related fields.
Children's discrimination of vowel sequences
NASA Astrophysics Data System (ADS)
Coady, Jeffry A.; Kluender, Keith R.; Evans, Julia
2003-10-01
Children's ability to discriminate sequences of steady-state vowels was investigated. Vowels (as in ``beet,'' ``bat,'' ``bought,'' and ``boot'') were synthesized at durations of 40, 80, 160, 320, 640, and 1280 ms. Four different vowel sequences were created by concatenating different orders of vowels for each duration, separated by 10-ms intervening silence. Thus, sequences differed in vowel order and duration (rate). Sequences were 12 s in duration, with amplitude ramped linearly over the first and last 2 s. Sequence pairs included both same (identical sequences) and different trials (sequences with vowels in different orders). Sequences with vowels of equal duration were presented on individual trials. Children aged 7;0 to 10;6 listened to pairs of sequences (with 100 ms between sequences) and responded whether sequences sounded the same or different. Results indicate that children are best able to discriminate sequences of intermediate-duration vowels, typical of conversational speaking rate. Children were less accurate with both shorter and longer vowels. Results are discussed in terms of auditory processing (shortest vowels) and memory (longest vowels). [Research supported by NIDCD DC-05263, DC-04072, and DC-005650.]
The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes
Gygi, Brian; Shafiro, Valeriy
2011-01-01
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about 5 percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naïve (untrained) listeners showed that this Incongruency Advantage (IA) is level-dependent: there is no advantage for incongruent sounds below a Sound/Scene ratio (So/Sc) of −7.5 dB, but there is about 5 percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features nor semantic assessments of sound-scene congruency can account for this difference, indicating that the Incongruency Advantage is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events under particular listening conditions. PMID:21355664
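The level-dependence above is expressed as a sound/scene (So/Sc) ratio in dB. A minimal sketch of how a target sound could be embedded in a scene at a prescribed So/Sc, assuming the ratio is defined from RMS levels (the abstract does not specify the exact level definition); the function and example values are illustrative:

```python
import numpy as np

def mix_at_ratio(sound, scene, so_sc_db):
    """Scale `sound` so its RMS relative to `scene` equals so_sc_db (dB),
    then add it into the middle of the scene (same sample rate assumed,
    and `sound` shorter than `scene`)."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = 10 ** (so_sc_db / 20.0) * rms(scene) / rms(sound)
    mixed = scene.astype(float).copy()
    start = (len(scene) - len(sound)) // 2
    mixed[start:start + len(sound)] += gain * sound
    return mixed

# e.g. embedding a sound at the -7.5 dB So/Sc boundary reported above:
# mixed = mix_at_ratio(rooster, hospital_scene, -7.5)
```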
New non-invasive automatic cough counting program based on 6 types of classified cough sounds.
Murata, Akira; Ohota, Nao; Shibuya, Atsuo; Ono, Hiroshi; Kudoh, Shoji
2006-01-01
Cough, consisting of an initial deep inspiration, glottal closure, and an explosive expiration accompanied by a sound, is one of the most common symptoms of respiratory disease. Despite its clinical importance, standard methods for objective cough analysis have yet to be established. We investigated the characteristics of cough sounds acoustically, designed a program to discriminate cough sounds from other sounds, and finally developed a new objective method of non-invasive cough counting. In addition, we evaluated the clinical efficacy of that program. We recorded cough sounds in free field using a memory stick IC recorder from 2 patients and analyzed the intensity of 534 recorded coughs acoustically in the time domain. First, we squared the sound waveform of the recorded cough sounds and smoothed the result over a 20-ms window. Five parameters and associated definitions for discriminating cough sounds from other noise were identified, and the cough sounds were classified into 6 groups. Next, we applied this method to develop a new automatic cough count program. Finally, to evaluate the accuracy and clinical usefulness of this program, we counted cough sounds collected from another 10 patients using both our program and conventional manual counting, and analyzed the sensitivity, specificity, and discrimination rate of the program. The program successfully discriminated recorded cough sounds out of 1902 sound events collected from 10 patients at a rate of 93.1%. The sensitivity was 90.2% and the specificity was 96.5%. Our new cough counting program is sufficiently useful for clinical studies.
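The squaring-and-smoothing step is a standard short-time energy envelope. A minimal sketch under that reading (the five discriminating parameters and the six-group classification are not specified in the abstract and are not reproduced here):

```python
import numpy as np

def cough_envelope(x, fs, win_ms=20):
    """Energy envelope as described above: square the waveform,
    then smooth with a moving average over a 20-ms window."""
    squared = x.astype(float) ** 2
    n = max(1, int(fs * win_ms / 1000.0))
    return np.convolve(squared, np.ones(n) / n, mode="same")

# Candidate cough events could then be segmented by thresholding this
# envelope before applying the discrimination parameters.
```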
Slack, John F.; Shanks, Wayne C.; Karl, Susan M.; Gemery, Pamela A.; Bittenbender, Peter E.; Ridley, W. Ian
2007-01-01
Stratabound volcanogenic massive sulfide (VMS) deposits on Prince of Wales Island and vicinity, southeastern Alaska, occur in two volcanosedimentary sequences of Late Proterozoic through Cambrian and of Ordovician through Early Silurian age. This study presents geochemical data on sulfide-rich samples, in situ laser-ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) of sulfide minerals, and sulfur-isotopic analyses of sulfides and sulfates (barite) for identifying and distinguishing between primary sea-floor signatures and later regional metamorphic overprints. These datasets are also used here in an attempt to discriminate the VMS deposits in the older Wales Group from those in the younger Moira Sound unit (new informal name). The Wales Group and its contained VMS deposits have been multiply deformed and metamorphosed from greenschist to amphibolite grade, whereas the Moira Sound unit and related VMS deposits are less deformed and generally less metamorphosed (lower to middle greenschist grade). Variations in the sulfide mineral assemblages and textures of the VMS deposits in both sequences reflect a combination of processes, including primary sea-floor mineralization and sub-sea-floor zone refining, followed by metamorphic recrystallization. Very coarse grained (>1 cm diam) sulfide minerals and abundant pyrrhotite are restricted to VMS deposits in a small area of the Wales Group, at Khayyam and Stumble-On, which record high-grade metamorphism of the sulfides. Geochemical and sulfur-isotopic data distinguish the VMS deposits in the Wales Group from those in the Moira Sound unit. Although base- and precious-metal contents vary widely in sulfide-rich samples from both sequences, samples from the Moira Sound generally have proportionately higher Ag contents relative to base metals and Au. In situ LA-ICP-MS analysis of trace elements in the sulfide minerals suggests that primary sea-floor hydrothermal signatures are preserved in some samples (for example, Mn, As, Sb, and Tl in pyrite from the Moira Sound unit), whereas in other samples the signatures are varyingly annealed, owing to metamorphic overprinting. A limited LA-ICP-MS database for sphalerite indicates that low-Fe sphalerite is preferentially associated with the most Au-rich deposits, the Niblack and Nutkwa. Sulfur-isotopic values for sulfide minerals in the VMS deposits in the Wales Group range from 5.9 to 17.4 permil (avg 11.5 ± 2.7 permil), about 5 to 6 permil higher than those in the Moira Sound unit, which range from -2.8 to 10.4 permil (avg 6.1 ± 4.0 permil). This difference in δ34S sulfide values reflects a dominantly seawater sulfate source of the sulfides and is linked to the δ34S values of contemporaneous seawater sulfate, which were slightly higher during the Late Proterozoic through Cambrian than during the Ordovician through Early Silurian.
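The permil values above follow the standard delta notation for stable isotope ratios, conventionally defined against a reference standard (V-CDT for sulfur):

```latex
\delta^{34}\mathrm{S}
  = \left(
      \frac{(^{34}\mathrm{S}/^{32}\mathrm{S})_{\mathrm{sample}}}
           {(^{34}\mathrm{S}/^{32}\mathrm{S})_{\mathrm{standard}}}
      - 1
    \right) \times 10^{3}\ \text{permil}
```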
Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields
Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.
2016-01-01
Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
The acoustic performance of double-skin facades: A design support tool for architects
NASA Astrophysics Data System (ADS)
Batungbakal, Aireen
This study assesses the influence of urban environmental sound levels and of glass facade components on reducing sound transmission to the indoor environment. Noise is among the most frequently reported issues affecting workspaces, and increased awareness of the need to minimize it has led building designers to reconsider the design of building envelopes and their site environments. Outdoor sound conditions, such as traffic noise, challenge designers to accurately estimate the capability of glass facades to achieve an appropriate indoor sound quality. To characterize the density of the urban environment, field tests captured existing sound levels in areas of high commercial development, employment, and traffic activity, establishing a baseline for sound levels common in urban work areas. Data on the direct sound transmission loss of glass facades, simulated with the sound insulation software INSUL, are then used as an informative tool correlating the response of glass facade components to the existing outdoor sound levels of a project site, in order to achieve desired indoor sound levels. The study also aims to close the gap in validating the acoustic performance of glass facades early in a project's design, from conditioned settings such as field testing and simulation through to project completion. Results from the facade simulations and facade comparison support the conclusion that acoustic comfort is not limited to a single solution, but admits multiple design options responsive to the environment.
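One common engineering relation behind such facade-level figures (not necessarily the one used in the study) is the area-weighted composite transmission loss of the facade elements; a sketch with hypothetical areas and TL values:

```python
import numpy as np

def composite_tl(areas, tls):
    """Composite transmission loss (dB) of a facade built from elements
    with areas (m^2) and individual transmission losses (dB)."""
    areas, tls = np.asarray(areas, float), np.asarray(tls, float)
    tau = 10 ** (-tls / 10.0)                  # transmission coefficients
    tau_avg = np.sum(areas * tau) / np.sum(areas)
    return -10 * np.log10(tau_avg)

# e.g. 8 m^2 of double glazing (TL 35 dB) with a 0.5 m^2 vent (TL 10 dB):
# composite_tl([8.0, 0.5], [35.0, 10.0])  ->  about 22 dB
```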
Human Action Recognition Using Wireless Wearable In-Ear Microphone
NASA Astrophysics Data System (ADS)
Nishimura, Jun; Kuroda, Tadahiro
To realize ubiquitous monitoring of eating habits, we proposed the use of sounds sensed by a wireless wearable microphone placed in the ear. A prototype wireless wearable in-ear microphone was developed from a common Bluetooth headset. We proposed a robust chewing-action recognition algorithm consisting of two recognition stages: “chew-like” signal detection and chewing sound verification. We also provide empirical results on recognizing other actions from in-ear sound, including swallowing, coughing, and belching. An average chewing-count error rate of 1.93% is achieved. Lastly, chewing sound mapping is proposed as a new prototypical approach to providing additional intuitive feedback on food groups, making it possible to infer eating habits in a daily-life context.
Results of the Sensory Profile in Children with Suspected Childhood Apraxia of Speech
ERIC Educational Resources Information Center
Newmeyer, Amy J.; Grether, Sandra; Aylward, Christa; deGrauw, Ton; Akers, Rachel; Grasha, Carol; Ishikawa, Keiko; White, Jaye
2009-01-01
Speech-sound disorders are common in preschool-age children, and are characterized by difficulty in the planning and production of speech sounds and their combination into words and sentences. The objective of this study was to review and compare the results of the "Sensory Profile" ([Dunn, 1999]) in children with a specific type of speech-sound…
Perception and Confusion of Speech Sounds by Adults with a Cochlear Implant
ERIC Educational Resources Information Center
Rodvik, Arne K.
2008-01-01
The aim of this pilot study was to identify the most common speech sound confusions of 5 Norwegian post-lingually deafened adults with cochlear implants. We played recorded nonwords, aCa, iCi and bVb, to our informants, asked them to repeat what they heard, recorded their repetitions and transcribed these phonetically. We arranged the collected data…
Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto
2016-01-01
A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and the novelty P3 response, in electroencephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention to the to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent component of the PDR is independent of attention. PMID:26924959
Brief report: sound output of infant humidifiers.
Royer, Allison K; Wilson, Paul F; Royer, Mark C; Miyamoto, Richard T
2015-06-01
The sound pressure levels (SPLs) of common infant humidifiers were determined to identify the likely sound exposure to infants and young children. This primary investigative research study was completed at a tertiary-level academic medical center otolaryngology and audiology laboratory. Five commercially available humidifiers were obtained from brick-and-mortar infant supply stores. Sound levels were measured at 20-, 100-, and 150-cm distances at all available humidifier settings. Two of 5 (40%) humidifiers tested had SPL readings greater than the recommended hospital infant nursery levels (50 dB) at distances up to 100 cm. In this preliminary study, it was demonstrated that humidifiers marketed for infant nurseries may produce appreciably high decibel levels. Further characterization of the effect of humidifier design on SPLs and further elucidation of ambient sound levels associated with hearing risk are necessary before definitive conclusions and recommendations can be made. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
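Measurements at several distances can be compared against the free-field point-source law, under which the level falls 6 dB per doubling of distance; this is an idealization that ignores room reflections, and the example reading below is hypothetical:

```python
import math

def spl_at(spl_ref, d_ref, d):
    """Free-field point-source estimate of SPL at distance d, given a
    reference reading spl_ref at distance d_ref (same units for d)."""
    return spl_ref - 20 * math.log10(d / d_ref)

# e.g. a unit reading 56 dB at 20 cm would be expected near
# 56 - 20*log10(100/20) = 42 dB at 100 cm in free field.
```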
Sensorineural Tinnitus: Its Pathology and Probable Therapies
Møller, Aage R.
2016-01-01
Tinnitus is not a single disease but a group of different diseases with different pathologies and therefore different treatments. Regarding tinnitus as a single disease hampers progress in understanding its pathophysiology and, perhaps more importantly, is a serious obstacle to the development of effective treatments. Subjective tinnitus is a phantom sound that takes many different forms and has similarities with chronic neuropathic pain. The pathology may be in the cochlea, in the auditory nerve, or, most commonly, in the brain. Like chronic neuropathic pain, tinnitus is not life-threatening but influences many normal functions such as sleep and the ability to concentrate on work. Some forms of chronic tinnitus have two components, a (phantom) sound and a component that may best be described as suffering or distress. The pathologies of these two components may be different, and the treatment that is most effective may differ between them. The most common form of treatment of tinnitus is pharmacological agents and behavioral treatment combined with sound therapy. Less common treatments are hypnosis and acupuncture. Various forms of neuromodulation are coming into use in an attempt to reverse maladaptive plastic changes in the brain. PMID:26977153
Reinprecht, Yarmilla; Yadegari, Zeinab; Perry, Gregory E.; Siddiqua, Mahbuba; Wright, Lori C.; McClean, Phillip E.; Pauls, K. Peter
2013-01-01
Legumes contain a variety of phytochemicals derived from the phenylpropanoid pathway that have important effects on human health as well as seed coat color, plant disease resistance and nodulation. However, the information about the genes involved in this important pathway is fragmentary in common bean (Phaseolus vulgaris L.). The objectives of this research were to isolate genes that function in and control the phenylpropanoid pathway in common bean, determine their genomic locations in silico in common bean and soybean, and analyze sequences of the 4CL gene family in two common bean genotypes. Sequences of phenylpropanoid pathway genes available for common bean or other plant species were aligned, and the conserved regions were used to design sequence-specific primers. The PCR products were cloned and sequenced and the gene sequences along with common bean gene-based (g) markers were BLASTed against the Glycine max v.1.0 genome and the P. vulgaris v.1.0 (Andean) early release genome. In addition, gene sequences were BLASTed against the OAC Rex (Mesoamerican) genome sequence assembly. In total, fragments of 46 structural and regulatory phenylpropanoid pathway genes were characterized in this way and placed in silico on common bean and soybean sequence maps. The maps contain over 250 common bean g and SSR (simple sequence repeat) markers and identify the positions of more than 60 additional phenylpropanoid pathway gene sequences, plus the putative locations of seed coat color genes. The majority of cloned phenylpropanoid pathway gene sequences were mapped to one location in the common bean genome but had two positions in soybean. The comparison of the genomic maps confirmed previous studies, which show that common bean and soybean share genomic regions, including those containing phenylpropanoid pathway gene sequences, with conserved synteny. Indels identified in the comparison of Andean and Mesoamerican common bean 4CL gene sequences might be used to develop inter-pool phenylpropanoid pathway gene-based markers. We anticipate that the information obtained by this study will simplify and accelerate selections of common bean with specific phenylpropanoid pathway alleles to increase the contents of beneficial phenylpropanoids in common bean and other legumes. PMID:24046770
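As a rough illustration of one step in this workflow (scanning an alignment of homologous sequences for conserved windows that could anchor sequence-specific primers), here is a minimal pure-Python sketch; the window length and identity threshold are arbitrary illustrative choices, not values from the study:

```python
def conserved_windows(aligned_seqs, win=20, min_identity=0.95):
    """Scan a multiple alignment (equal-length strings) for windows
    conserved enough to anchor sequence-specific primers."""
    length = len(aligned_seqs[0])
    hits = []
    for start in range(length - win + 1):
        cols = zip(*(s[start:start + win] for s in aligned_seqs))
        # Count columns that are identical across sequences and gap-free.
        ident = sum(len(set(c)) == 1 and "-" not in c for c in cols) / win
        if ident >= min_identity:
            hits.append((start, aligned_seqs[0][start:start + win]))
    return hits
```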
Sounds and source levels from bowhead whales off Pt. Barrow, Alaska.
Cummings, W C; Holliday, D V
1987-09-01
Sounds were recorded from bowhead whales migrating past Pt. Barrow, AK, to the Canadian Beaufort Sea. They mainly consisted of various low-frequency (25- to 900-Hz) moans and well-defined sound sequences organized into "song" (20-5000 Hz) recorded with our 2.46-km hydrophone array suspended from the ice. Songs were composed of up to 20 repeated phrases (mean, 10) which lasted up to 146 s (mean, 66.3). Several bowhead whales often were within acoustic range of the array at once, but usually only one sang at a time. Vocalizations exhibited diurnal peaks of occurrence (0600-0800, 1600-1800 h). Sounds which were located in the horizontal plane had peak source spectrum levels as follows--44 moans: 129-178 dB re: 1 microPa, 1 m (median, 159); 3 garglelike utterances: 152, 155, and 169 dB; 33 songs: 158-189 dB (median, 177), all presumably from different whales. Based on ambient noise levels, measured total propagation loss, and whale sound source levels, our detection of whale sounds was theoretically noise-limited beyond 2.5 km (moans) and beyond 10.7 km (songs), a model supported by actual localizations. This study showed that over much of the shallow Arctic and sub-Arctic waters, underwater communications of the bowhead whale would be limited to much shorter ranges than for other large whales in lower latitude, deep-water regions.
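The noise-limited detection argument follows the passive sonar balance: detection fails once source level minus propagation loss drops to the ambient noise level. A sketch using simple geometric spreading as a stand-in for the measured propagation loss; the noise floor and spreading rate below are hypothetical, chosen only so the example reproduces a ~2.5-km range for a 159-dB moan:

```python
import math

def detection_range_m(source_db, noise_db, spread_db_per_decade=15.0):
    """Range (m) at which received level falls to the noise floor,
    assuming TL = k * log10(r / 1 m) spreading loss only."""
    # source_db - k*log10(r) = noise_db  =>  r = 10**((SL - NL) / k)
    return 10 ** ((source_db - noise_db) / spread_db_per_decade)

# e.g. a 159 dB moan against a hypothetical 108 dB noise floor:
# detection_range_m(159, 108)  ->  about 2.5 km
```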
A Perceptuo-Cognitive-Motor Approach to the Special Child.
ERIC Educational Resources Information Center
Kornblum, Rena Beth
A movement therapist reviews ways in which a perceptuo-cognitive approach can help handicapped children in learning and in social adjustment. She identifies specific auditory problems (hearing loss, sound-ground confusion, auditory discrimination, auditory localization, auditory memory, auditory sequencing), visual problems (visual acuity,…
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
2017-06-01
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
Topographic EEG activations during timbre and pitch discrimination tasks using musical sounds.
Auzou, P; Eustache, F; Etevenon, P; Platel, H; Rioux, P; Lambert, J; Lechevalier, B; Zarifian, E; Baron, J C
1995-01-01
Successive auditory stimulation sequences were presented binaurally to 18 young normal volunteers. Five conditions were investigated: two reference tasks, assumed to involve passive listening to pairs of musical sounds, and three discrimination tasks, one dealing with pitch and two with timbre (either with or without the attack). A symmetrical montage of 16 EEG channels was recorded for each subject across the different conditions. Two quantitative parameters of EEG activity were compared among the different sequences within five distinct frequency bands. As compared to a rest (no stimulation) condition, both passive listening conditions led to changes in primary auditory cortex areas. Both discrimination tasks for pitch and timbre led to right-hemisphere EEG changes organized in two poles: an anterior one and a posterior one. After discussing the electrophysiological aspects of this work, these results are interpreted in terms of a network including the right temporal neocortex and the right frontal lobe that maintains the acoustical information in auditory working memory necessary to carry out the discrimination task.
Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues
Tsunada, Joji
2015-01-01
Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects the perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency of how auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects’ speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence. PMID:26464975
Peter, Beate; Raskind, Wendy H.
2011-01-01
Purpose To evaluate phenotypic expressions of speech sound disorder (SSD) in multigenerational families with evidence of familial forms of SSD. Method Members of five multigenerational families (N = 36) produced rapid sequences of monosyllables and disyllables and tapped computer keys with repetitive and alternating movements. Results Measures of repetitive and alternating motor speed were correlated within and between the two motor systems. Repetitive and alternating motor speeds increased in children and decreased in adults as a function of age. In two families with children who had severe speech deficits consistent with disrupted praxis, slowed alternating, but not repetitive, oral movements characterized most of the affected children and adults with a history of SSD, and slowed alternating hand movements were seen in some of the biologically related participants as well. Conclusion Results are consistent with a familial motor-based SSD subtype with incomplete penetrance, motivating new clinical questions about motor-based intervention not only in the oral but also the limb system. PMID:21909176
ERIC Educational Resources Information Center
Wren, Yvonne; Miller, Laura L.; Peters, Tim J.; Emond, Alan; Roulstone, Sue
2016-01-01
Purpose: The purpose of this study was to determine prevalence and predictors of persistent speech sound disorder (SSD) in children aged 8 years after disregarding children presenting solely with common clinical distortions (i.e., residual errors). Method: Data from the Avon Longitudinal Study of Parents and Children (Boyd et al., 2012) were used.…
Yang, Ming; De Coensel, Bert; Kang, Jian
2015-08-01
1/f noise, or pink noise, which has been shown to be universal in nature, has also been observed in the temporal envelope of music, speech, and environmental sound. Moreover, the slope of the spectral density of the temporal envelope of music has been shown to correlate well with its pleasing, dull, or chaotic character. In this paper, the temporal structure of a number of instantaneous psychoacoustic parameters of environmental sound is examined in order to investigate whether a 1/f temporal structure appears in the various types of sound that people generally prefer in everyday life. The results show, to some extent, that different categories of environmental sounds have different temporal structure characteristics. Only a number of the urban sounds considered, together with birdsong, generally exhibit 1/f behavior in instantaneous loudness and sharpness on short to medium time scales, i.e., from 0.1 s to 10 s, whereas a more chaotic variation is found in birdsong at longer time scales, i.e., 10 s to 200 s. The other sound categories considered exhibit random or monotonic variations on the different time scales. In general, this study shows that a 1/f temporal structure is not necessarily present in environmental sounds that are commonly perceived as pleasant.
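The 1/f characterization amounts to fitting the log-log slope of the power spectral density of an instantaneous parameter such as loudness; a minimal sketch (frequency band and segment length are illustrative choices):

```python
import numpy as np
from scipy.signal import welch

def envelope_slope(env, fs_env, fmin=0.005, fmax=10.0):
    """Log-log slope of the PSD of an instantaneous-parameter series
    (e.g. loudness sampled at fs_env Hz); a slope near -1 indicates
    1/f behavior over the selected band."""
    f, pxx = welch(env, fs=fs_env, nperseg=min(len(env), 4096))
    keep = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
    return slope
```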
Pneumothorax effects on pulmonary acoustic transmission
Balk, Robert A.; Warren, William H.; Royston, Thomas J.; Dai, Zoujun; Peng, Ying; Sandler, Richard H.
2015-01-01
Pneumothorax (PTX) is an abnormal accumulation of air between the lung and the chest wall. It is a relatively common and potentially life-threatening condition encountered in patients who are critically ill or have experienced trauma. Auscultatory signs of PTX include decreased breath sounds during the physical examination. The objective of this exploratory study was to investigate the changes in sound transmission in the thorax due to PTX in humans. Nineteen human subjects who underwent video-assisted thoracic surgery, during which lung collapse is a normal part of the surgery, participated in the study. After subjects were intubated and mechanically ventilated, sounds were introduced into their airways via an endotracheal tube. Sounds were then measured over the chest surface before and after lung collapse. PTX caused small changes in acoustic transmission for frequencies below 400 Hz. A larger decrease in sound transmission was observed from 400 to 600 Hz, possibly due to the stronger acoustic transmission blocking of the pleural air. At frequencies above 1 kHz, the sound waves became weaker and so did their changes with PTX. The study elucidated some of the possible mechanisms of sound propagation changes with PTX. Sound transmission measurement was able to distinguish between baseline and PTX states in this small patient group. Future studies are needed to evaluate this technique in a wider population. PMID:26023225
Photoacoustic sounds from meteors
Spalding, Richard; Tencer, John; Sweatt, William; ...
2017-02-01
Concurrent sound associated with very bright meteors manifests as popping, hissing, and faint rustling sounds occurring simultaneously with the arrival of light from the meteors. Numerous instances have been documented for meteors of magnitude –11 to –13. These sounds cannot be attributed to direct acoustic propagation from the upper atmosphere, for which the travel time would be several minutes. Concurrent sounds must instead be associated with some form of electromagnetic energy generated by the meteor, propagated to the vicinity of the observer, and transduced into acoustic waves. Previously, the energy propagated from meteors was assumed to be RF emissions, but this has not been well validated experimentally. Herein we describe experimental results and numerical models in support of photoacoustic coupling as the mechanism. Recent photometric measurements of fireballs reveal strong millisecond flares and significant brightness oscillations at frequencies ≥40 Hz. Strongly modulated light at these frequencies with sufficient intensity can create concurrent sounds through radiative heating of common dielectric materials like hair, clothing, and leaves. This heating produces small pressure oscillations in the air contacting the absorbers. Calculations show that meteors of magnitude –12 can generate audible sound at ~25 dB SPL. The photoacoustic hypothesis thus provides an alternative explanation for this longstanding mystery about the generation of concurrent sounds by fireballs.
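For reference, the quoted ~25 dB SPL converts to pressure via the standard SPL definition:

```latex
p = p_{0}\,10^{L_{p}/20}, \qquad p_{0} = 20\ \mu\mathrm{Pa};
\qquad
p(25\ \mathrm{dB\ SPL}) = 20\ \mu\mathrm{Pa} \times 10^{25/20} \approx 0.36\ \mathrm{mPa}
```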
Musical expertise and foreign speech perception
Martínez-Montes, Eduardo; Hernández-Pérez, Heivet; Chobert, Julie; Morgado-Rodríguez, Lisbet; Suárez-Murias, Carlos; Valdés-Sosa, Pedro A.; Besson, Mireille
2013-01-01
The aim of this experiment was to investigate the influence of musical expertise on the automatic perception of foreign syllables and harmonic sounds. Participants were Cuban students with high level of expertise in music or in visual arts and with the same level of general education and socio-economic background. We used a multi-feature Mismatch Negativity (MMN) design with sequences of either syllables in Mandarin Chinese or harmonic sounds, both comprising deviants in pitch contour, duration and Voice Onset Time (VOT) or equivalent that were either far from (Large deviants) or close to (Small deviants) the standard. For both Mandarin syllables and harmonic sounds, results were clear-cut in showing larger MMNs to pitch contour deviants in musicians than in visual artists. Results were less clear for duration and VOT deviants, possibly because of the specific characteristics of the stimuli. Results are interpreted as reflecting similar processing of pitch contour in speech and non-speech sounds. The implications of these results for understanding the influence of intense musical training from childhood to adulthood and of genetic predispositions for music on foreign language perception are discussed. PMID:24294193
Kocsis, Zsuzsanna; Winkler, István; Szalárdy, Orsolya; Bendixen, Alexandra
2014-07-01
In two experiments, we assessed the effects of combining different cues of concurrent sound segregation on the object-related negativity (ORN) and the P400 event-related potential components. Participants were presented with sequences of complex tones, half of which contained some manipulation: one or two harmonic partials were mistuned, delayed, or presented from a different location than the rest. In separate conditions, one, two, or three of these manipulations were combined. Participants watched a silent movie (passive listening) or reported after each tone whether they perceived one or two concurrent sounds (active listening). ORN was found in almost all conditions except for location difference alone during passive listening. Combining several cues or manipulating more than one partial consistently led to sub-additive effects on the ORN amplitude. These results support the view that ORN reflects a combined, feature-unspecific assessment of the auditory system regarding the contribution of two sources to the incoming sound. Copyright © 2014 Elsevier B.V. All rights reserved.
Interspecific variation of warning calls in piranhas: a comparative analysis.
Mélotte, Geoffrey; Vigouroux, Régis; Michel, Christian; Parmentier, Eric
2016-10-26
Fish sounds are known to be species-specific, possessing unique temporal and spectral features. We have recorded and compared sounds in eight piranha species to evaluate the potential role of acoustic communication as a driving force in clade diversification. All piranha species showed the same kind of sound-producing mechanism: sonic muscles originate on vertebrae and attach to a tendon surrounding the bladder ventrally. Contractions of the sound-producing muscles force swimbladder vibration and dictate the fundamental frequency. As a result, the calling features of the eight piranha species logically share many common characteristics. In all the species, the calls are harmonic sounds composed of multiple continuous cycles. However, the sounds of Serrasalmus elongatus (higher number of cycles and high fundamental frequency) and S. manueli (long cycle periods and low fundamental frequency) are clearly distinguishable from those of the other species. The sonic mechanism being largely conserved throughout piranha evolution, acoustic communication can hardly be considered the main driving force in the diversification process. However, the sounds of some species are clearly distinguishable despite the limited room for variation, supporting the need for specific communication. Behavioural studies are needed to clearly understand the eventual role of the calls during spawning events.
Air-borne and tissue-borne sensitivities of bioacoustic sensors used on the skin surface.
Zañartu, Matías; Ho, Julio C; Kraman, Steve S; Pasterkamp, Hans; Huber, Jessica E; Wodicka, George R
2009-02-01
Measurements of body sounds on the skin surface have been widely used in the medical field and continue to be a topic of current research, ranging from the diagnosis of respiratory and cardiovascular diseases to the monitoring of voice dosimetry. These measurements are typically made using light-weight accelerometers and/or air-coupled microphones attached to the skin. Although normally neglected, air-borne sounds generated by the subject or other sources of background noise can easily corrupt such recordings, which is particularly critical in the recording of voiced sounds on the skin surface. In this study, the sensitivity of commonly used bioacoustic sensors to air-borne sounds was evaluated and compared with their sensitivity to tissue-borne body sounds. To delineate the sensitivity to each pathway, the sensors were first tested in vitro and then on human subjects. The results indicated that, in general, the air-borne sensitivity is sufficiently high to significantly corrupt body sound signals. In addition, the air-borne and tissue-borne sensitivities can be used to discriminate between these components. Although the study is focused on the evaluation of voiced sounds on the skin surface, an extension of the proposed methods to other bioacoustic applications is discussed.
[Industrial sound spectrum entailing noise-induced occupational hearing loss in Iasi industry].
Carp, Cristina Maria; Costinescu, V N
2011-01-01
In the European Union, millions of employees are exposed every day to noise at work and to the risks this can entail. This study presents the sound spectrum in Iasi heavy industry: metal foundries, punching and embossing of metal sheets, and cold and hot metal processing. A type 2 Sound Level Meter (SLM) was used, and the considered value was the average of 10 test values taken on 10 consecutive days for each octave band in the common audible frequency range. The measured sound intensities clearly exceed the maximum admissible legal values in most octave bands. The study reveals the necessity of hardware, medical, and managerial measures in order to reduce occupational noise and to prevent damage to the workers' hearing acuity.
Ultrathin metasurface with high absorptance for waterborne sound
NASA Astrophysics Data System (ADS)
Mei, Jun; Zhang, Xiujuan; Wu, Ying
2018-03-01
We present a design for an acoustic metasurface which can efficiently absorb low-frequency sound energy in water. The metasurface has a simple structure and consists of only two common materials: water and silicone rubber. The optimized material and geometrical parameters of the designed metasurface are determined by an analytic formula in conjunction with an iterative process based on the retrieval method. Although the metasurface is as thin as 0.15 of the wavelength, it can absorb 99.7% of the normally incident sound wave energy. Furthermore, the metasurface maintains a substantially high absorptance over a relatively broad bandwidth, and also works well for oblique incidence with an incident angle of up to 50°. Potential applications in the field of underwater sound isolation are expected.
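For a reflection-type surface with negligible transmission, the quoted absorptance fixes the reflection amplitude directly:

```latex
A = 1 - |r|^{2} - |t|^{2}
\quad\xrightarrow{\;t \approx 0\;}\quad
A = 1 - |r|^{2};
\qquad
A = 0.997 \;\Rightarrow\; |r| = \sqrt{1 - 0.997} \approx 0.055
```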
Störmer, Viola; Feng, Wenfeng; Martinez, Antigona; McDonald, John; Hillyard, Steven
2016-03-01
Recent findings suggest that a salient, irrelevant sound attracts attention to its location involuntarily and facilitates processing of a colocalized visual event [McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W. F., & Hillyard, S. A. Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33, 9194-9201, 2013]. Associated with this cross-modal facilitation is a sound-evoked slow potential over the contralateral visual cortex termed the auditory-evoked contralateral occipital positivity (ACOP). Here, we further tested the hypothesis that a salient sound captures visual attention involuntarily by examining sound-evoked modulations of the occipital alpha rhythm, which has been strongly associated with visual attention. In two purely auditory experiments, lateralized irrelevant sounds triggered a bilateral desynchronization of occipital alpha-band activity (10-14 Hz) that was more pronounced in the hemisphere contralateral to the sound's location. The timing of the contralateral alpha-band desynchronization overlapped with that of the ACOP (∼240-400 msec), and both measures of neural activity were estimated to arise from neural generators in the ventral-occipital cortex. The magnitude of the lateralized alpha desynchronization was correlated with ACOP amplitude on a trial-by-trial basis and between participants, suggesting that they arise from or are dependent on a common neural mechanism. These results support the hypothesis that the sound-induced alpha desynchronization and ACOP both reflect the involuntary cross-modal orienting of spatial attention to the sound's location.
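Alpha-band desynchronization of this kind is commonly quantified as a drop in band-limited power; a minimal sketch of one standard estimate (band-pass filter plus Hilbert envelope), with the contralateral-minus-ipsilateral comparison indicated in the trailing comment; the function names are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(eeg, fs, band=(10.0, 14.0)):
    """Instantaneous alpha-band power of one EEG channel:
    band-pass filter, then squared Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

# A simple lateralization measure over occipital electrodes:
# desynchronization contralateral to the sound predicts
# alpha_power(contra, fs).mean() < alpha_power(ipsi, fs).mean()
```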
Neilans, Erikson G; Dent, Micheal L
2015-02-01
Auditory scene analysis has been suggested as a universal process that exists across all animals. Relative to humans, however, little work has been devoted to how animals perceptually isolate different sound sources. Frequency separation of sounds is arguably the most common parameter studied in auditory streaming, but it is not the only factor contributing to how the auditory scene is perceived. Researchers have found that in humans, even at large frequency separations, synchronous tones are heard as a single auditory stream, whereas asynchronous tones with the same frequency separations are perceived as 2 distinct sounds. These findings demonstrate how both the timing and frequency separation of sounds are important for auditory scene analysis. It is unclear how animals, such as budgerigars (Melopsittacus undulatus), perceive synchronous and asynchronous sounds. In this study, budgerigars and humans (Homo sapiens) were tested on their perception of synchronous, asynchronous, and partially overlapping pure tones using the same psychophysical procedures. Species differences were found between budgerigars and humans in how partially overlapping sounds were perceived, with budgerigars more likely to segregate overlapping sounds and humans more apt to fuse the 2 sounds together. The results also illustrated that temporal cues are particularly important for stream segregation of overlapping sounds. Lastly, budgerigars were found to segregate partially overlapping sounds in a manner predicted by computational models of streaming, whereas humans were not. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Speaker Invariance for Phonetic Information: an fMRI Investigation
Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.
2012-01-01
The current study explored how listeners map variable acoustic input onto a common sound structure representation while retaining enough phonetic detail to distinguish among the identities of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped onto lexical representations. PMID:23264714
Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals
Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro
2012-01-01
Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that the speech and music databases have specific, distinctive code-words, while in the case of the environmental sounds such database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
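The rank-frequency analysis itself is compact: count code-words, sort by frequency, and fit the log-log slope; a minimal sketch (the rank cutoff is an arbitrary choice):

```python
import numpy as np
from collections import Counter

def zipf_exponent(codewords, top_n=1000):
    """Slope of log(frequency) vs. log(rank) for a sequence of timbral
    code-words; a value near 1 matches the distribution reported above."""
    counts = sorted(Counter(codewords).values(), reverse=True)[:top_n]
    ranks = np.arange(1, len(counts) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(np.asarray(counts, float)), 1)
    return -slope
```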
Ultrasound Images of the Tongue: A Tutorial for Assessment and Remediation of Speech Sound Errors.
Preston, Jonathan L; McAllister Byun, Tara; Boyce, Suzanne E; Hamilton, Sarah; Tiede, Mark; Phillips, Emily; Rivera-Campos, Ahmed; Whalen, Douglas H
2017-01-03
Diagnostic ultrasound imaging has been a common tool in medical practice for several decades. It provides a safe and effective method for imaging structures internal to the body. There has been a recent increase in the use of ultrasound technology to visualize the shape and movements of the tongue during speech, both in typical speakers and in clinical populations. Ultrasound imaging of speech has greatly expanded our understanding of how sounds articulated with the tongue (lingual sounds) are produced. Such information can be particularly valuable for speech-language pathologists. Among other advantages, ultrasound images can be used during speech therapy to provide (1) illustrative models of typical (i.e., "correct") tongue configurations for speech sounds, and (2) a source of insight into the articulatory nature of deviant productions. The images can also be used as an additional source of feedback for clinical populations learning to distinguish their better productions from their incorrect productions, en route to establishing more effective articulatory habits. Ultrasound feedback is increasingly used by scientists and clinicians as the expertise of the users increases and the expense of the equipment declines. In this tutorial, procedures are presented for collecting ultrasound images of the tongue in a clinical context. We illustrate these procedures in an extended example featuring one common error sound, American English /r/. Images of correct and distorted /r/ are used to demonstrate (1) how to interpret ultrasound images, (2) how to assess tongue shape during production of speech sounds, (3) how to categorize tongue shape errors, and (4) how to provide visual feedback to elicit a more appropriate and functional tongue shape. We present a sample protocol for using real-time ultrasound images of the tongue as visual feedback to remediate speech sound errors. Additionally, example data are shown to illustrate outcomes with the procedure.
Kastelein, Ronald A; Helder-Hoek, Lean; Van de Voorde, Shirley
2017-10-01
Safety criteria for naval sonar sounds are needed to protect harbor porpoise hearing. Two porpoises were exposed to sequences of AN/SQS-53C sonar playback sounds (3.5-4.1 kHz, without significant harmonics), at a mean received sound pressure level of 142 dB re 1 μPa, with a duty cycle of 96% (almost continuous). Behavioral hearing thresholds at 4 and 5.7 kHz were determined before and after exposure to the fatiguing sound, in order to quantify temporary threshold shifts (TTSs) and hearing recovery. Control sessions were also conducted. A significant mean initial TTS1-4 of 5.2 dB at 4 kHz and 3.1 dB at 5.7 kHz occurred after 30-min exposures (mean received cumulative sound exposure level, SELcum: 175 dB re 1 μPa²s). Hearing thresholds returned to pre-exposure levels within 12 min. A significant mean initial TTS1-4 of 5.5 dB at 4 kHz occurred after 60-min exposures (SELcum: 178 dB re 1 μPa²s). Hearing recovered within 60 min. The SELcum for AN/SQS-53C sonar sounds required to induce 6 dB of TTS 4 min after exposure (the definition of TTS onset) is expected to be between 175 and 180 dB re 1 μPa²s.
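The reported SELcum values are consistent with the standard relation between a constant received level and exposure duration (30 min = 1800 s, 60 min = 3600 s):

```latex
\mathrm{SEL}_{\mathrm{cum}} = \mathrm{SPL} + 10\log_{10}\!\left(T / 1\,\mathrm{s}\right):
\qquad
142 + 10\log_{10}(1800) \approx 175\ \mathrm{dB},
\qquad
142 + 10\log_{10}(3600) \approx 178\ \mathrm{dB}\ \mathrm{re}\ 1\ \mu\mathrm{Pa^{2}s}
```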
NASA Astrophysics Data System (ADS)
West, Eva
2012-11-01
Researchers have highlighted the increasing problem of loud sounds among young people in leisure-time environments, recently even emphasizing portable music players, because of the risk of suffering from hearing impairments such as tinnitus. However, there is a lack of studies investigating compulsory-school students' standpoints and explanations in connection with teaching interventions integrating school subject content with auditory health. In addition, there are few health-related studies in the international science education literature. This paper explores students' standpoints on loud sounds including the use of hearing-protection devices in connection with a teaching intervention based on a teaching-learning sequence about sound, hearing and auditory health. Questionnaire data from 199 students, in grades 4, 7 and 8 (aged 10-14), from pre-, post- and delayed post-tests were analysed. Additionally, information on their experiences of tinnitus as well as their listening habits regarding portable music players was collected. The results show that more students make healthier choices in questions of loud sounds after the intervention, and especially among the older ones this result remains or is further improved one year later. There are also signs of positive behavioural change in relation to loud sounds. Significant gender differences are found; generally, the girls show more healthy standpoints and expressions than boys do. If this can be considered to be an outcome of students' improved and integrated knowledge about sound, hearing and health, then this emphasizes the importance of integrating health issues into regular school science.
Intensive treatment of speech disorders in robin sequence: a case report.
Pinto, Maria Daniela Borro; Pegoraro-Krook, Maria Inês; Andrade, Laura Katarine Félix de; Correa, Ana Paula Carvalho; Rosa-Lugo, Linda Iris; Dutka, Jeniffer de Cássia Rillo
2017-10-23
To describe the speech of a patient with Pierre Robin Sequence (PRS) and severe speech disorders before and after participating in an Intensive Speech Therapy Program (ISTP). The ISTP consisted of two daily sessions of therapy over a 36-week period, resulting in a total of 360 therapy sessions. The sessions included the phases of establishment, generalization, and maintenance. A combination of strategies, such as modified contrast therapy and speech sound perception training, were used to elicit adequate place of articulation. The ISTP addressed correction of place of production of oral consonants and maximization of movement of the pharyngeal walls with a speech bulb reduction program. Therapy targets were addressed at the phonetic level with a gradual increase in the complexity of the productions hierarchically (e.g., syllables, words, phrases, conversation) while simultaneously addressing the velopharyngeal hypodynamism with speech bulb reductions. Re-evaluation after the ISTP revealed normal speech resonance and articulation with the speech bulb. Nasoendoscopic assessment indicated consistent velopharyngeal closure for all oral sounds with the speech bulb in place. Intensive speech therapy, combined with the use of the speech bulb, yielded positive outcomes in the rehabilitation of a clinical case with severe speech disorders associated with velopharyngeal dysfunction in Pierre Robin Sequence.
Harmonic template neurons in primate auditory cortex underlying complex sound processing
Feng, Lei
2017-01-01
Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music. PMID:28096341
Participation of the Classical Speech Areas in Auditory Long-Term Memory
Karabanov, Anke Ninija; Paine, Rainer; Chao, Chi Chao; Schulze, Katrin; Scott, Brian; Hallett, Mark; Mishkin, Mortimer
2015-01-01
Accumulating evidence suggests that storing speech sounds requires transposing rapidly fluctuating sound waves into more easily encoded oromotor sequences. If so, then the classical speech areas in the caudalmost portion of the temporal gyrus (pSTG) and in the inferior frontal gyrus (IFG) may be critical for performing this acoustic-oromotor transposition. We tested this proposal by applying repetitive transcranial magnetic stimulation (rTMS) to each of these left-hemisphere loci, as well as to a nonspeech locus, while participants listened to pseudowords. After 5 minutes these stimuli were re-presented together with new ones in a recognition test. Compared to control-site stimulation, pSTG stimulation produced a highly significant increase in recognition error rate, without affecting reaction time. By contrast, IFG stimulation led only to a weak, non-significant, trend toward recognition memory impairment. Importantly, the impairment after pSTG stimulation was not due to interference with perception, since the same stimulation failed to affect pseudoword discrimination examined with short interstimulus intervals. Our findings suggest that pSTG is essential for transforming speech sounds into stored motor plans for reproducing the sound. Whether or not the IFG also plays a role in speech-sound recognition could not be determined from the present results. PMID:25815813
Listening to sound patterns as a dynamic activity
NASA Astrophysics Data System (ADS)
Jones, Mari Riess
2003-04-01
The act of listening to a series of sounds created by some natural event is described as involving an entrainmentlike process that transpires in real time. Some aspects of this dynamic process are suggested. In particular, real-time attending is described in terms of an adaptive synchronization activity that permits a listener to target attending energy to forthcoming elements within an acoustical pattern (e.g., music, speech, etc.). Also described are several experiments that illustrate features of this approach as it applies to attending to musiclike patterns. These involve listeners' responses to changes in either the timing or the pitch structure (or both) of various acoustical sequences.
Movement goals and feedback and feedforward control mechanisms in speech production
Perkell, Joseph S.
2010-01-01
Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences. PMID:22661828
Handel, Stephen; Todd, Sean K; Zoidis, Ann M
2009-06-01
The hierarchical organization of the male humpback whale song has been well documented. However, it is unknown how singers keep these intricate songs intact over multiple repetitions or how they learn variations that occur sequentially during each mating season. Rather than focus on the sequence of sounds within a song, results presented here demonstrate that the individual sounds are organized into rhythmic groups that make the production and perception of the lengthy songs tractable by yielding a set of simple groups that, although arranged in rigid order, can be repeated multiple times to generate the entire song.
Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae
Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.
2012-01-01
It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063
Pragmatics: Teaching Natural Conversation
ERIC Educational Resources Information Center
Houck, Noel R., Ed.; Tatsuki, Donna H., Ed.
2011-01-01
This volume offers teachers in the ESL/EFL classroom some of the first published materials for guiding learners past grammar into authentic-sounding (conventional) utterances and sequences, replacing the scripted unnatural or stilted dialogue provided in textbooks. Teachers will find a range of pedagogical activities to put to immediate use in the…
Projector Center: Slide-Tape Presentations on a Classroom Budget.
ERIC Educational Resources Information Center
Barman, Charles R., Ed.
1984-01-01
Presented is a recommended sequence for developing a slide-tape presentation. Steps include selecting a topic, determining objectives for the presentation, constructing a storyboard, writing the script, and recording the script. Comments on use of quotation, sound effects, built-in pauses, and use of student voices are included. (JN)
The Influence of Phonotactic Probability on Word Recognition in Toddlers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Shafer, Valerie L.; Schwartz, Richard G.; Marton, Klara
2014-01-01
This study examined the influence of phonotactic probability on word recognition in English-speaking toddlers. Typically developing toddlers completed a preferential looking paradigm using familiar words, which consisted of either high or low phonotactic probability sound sequences. The participants' looking behavior was recorded in response to…
An Experimental Analysis of Memory Processing
ERIC Educational Resources Information Center
Wright, Anthony A.
2007-01-01
Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory…
Listening Natively across Perceptual Domains?
ERIC Educational Resources Information Center
Langus, Alan; Seyed-Allaei, Shima; Uysal, Ertugrul; Pirmoradian, Sahar; Marino, Caterina; Asaadi, Sina; Eren, Ömer; Toro, Juan M.; Peña, Marcela; Bion, Ricardo A. H.; Nespor, Marina
2016-01-01
Our native tongue influences the way we perceive other languages. But does it also determine the way we perceive nonlinguistic sounds? The authors investigated how speakers of Italian, Turkish, and Persian group sequences of syllables, tones, or visual shapes alternating in either frequency or duration. We found strong native listening effects…
ERIC Educational Resources Information Center
Blake, Robert R.; Mouton, Jane Srygley
1979-01-01
The authors state that organizational development (OD) consultants are reluctant to rely upon instruments because this would diminish their sense of usefulness. They discuss 15 OD issues and conclude that OD instruments must be based on sound principles of behavior and sequenced in a planned way in order to implement organizational change and…
Music Learning in Your School Computer Lab.
ERIC Educational Resources Information Center
Reese, Sam
1998-01-01
States that a growing number of schools are installing general computer labs equipped to use notation, accompaniment, and sequencing software independent of MIDI keyboards. Discusses (1) how to configure the software without MIDI keyboards or external sound modules, (2) using the actual MIDI software, (3) inexpensive enhancements, and (4) the…
Phonological Stereotypes and Names in Temne.
ERIC Educational Resources Information Center
Nemer, Julie F.
1987-01-01
Many personal names in Temne (a Mel language spoken in Sierra Leone) are borrowed from other languages, containing foreign sounds and sequences which are unpronounceable for Temne speakers when they appear in other words. These exceptions are treated as instances of phonological stereotyping (cases remaining resistant to assimilation processes).…
Chen, Yi-Chuan; Huang, Pi-Chun; Woods, Andy; Spence, Charles
2016-05-27
It has been suggested that the Bouba/Kiki effect, in which meaningless speech sounds are systematically mapped onto rounded or angular shapes, reflects a universal crossmodal correspondence between audition and vision. Here, radial frequency (RF) patterns were adapted in order to compare the Bouba/Kiki effect in Eastern and Western participants demonstrating different perceptual styles. Three attributes of the RF patterns were manipulated: The frequency, amplitude, and spikiness of the sinusoidal modulations along the circumference of a circle. By testing participants in the US and Taiwan, both cultural commonalities and differences in sound-shape correspondence were revealed. RF patterns were more likely to be matched with "Kiki" than with "Bouba" when the frequency, amplitude, and spikiness increased. The responses from both groups of participants had a similar weighting on frequency; nevertheless, the North Americans had a higher weighting on amplitude, but a lower weighting on spikiness, than their Taiwanese counterparts. These novel results regarding cultural differences suggest that the Bouba/Kiki effect is partly tuned by differing perceptual experience. In addition, using the RF patterns in the Bouba/Kiki effect provides a "mid-level" linkage between visual and auditory processing, and a future understanding of sound-shape correspondences based on the mechanism of visual pattern processing.
Marine mammal audibility of selected shallow-water survey sources.
MacGillivray, Alexander O; Racca, Roberto; Li, Zizheng
2014-01-01
Most attention about the acoustic effects of marine survey sound sources on marine mammals has focused on airgun arrays, with other common sources receiving less scrutiny. Sound levels above hearing threshold (sensation levels) were modeled for six marine mammal species and seven different survey sources in shallow water. The model indicated that odontocetes were most likely to hear sounds from mid-frequency sources (fishery, communication, and hydrographic systems), mysticetes from low-frequency sources (sub-bottom profiler and airguns), and pinnipeds from both mid- and low-frequency sources. High-frequency sources (side-scan and multibeam) generated the lowest estimated sensation levels for all marine mammal species groups.
Chakalov, Ivan; Draganova, Rossitza; Wollbrink, Andreas; Preissl, Hubert; Pantev, Christo
2012-06-20
The aim of the present study was to identify a specific neuronal correlate underlying the pre-attentive auditory stream segregation of subsequent sound patterns alternating in spectral or temporal cues. Fifteen participants with normal hearing were presented with series of two consecutive ABA auditory tone-triplet sequences, the initial triplets being the Adaptation sequence and the subsequent triplets being the Test sequence. In the first experiment, the frequency separation (delta-f) between A and B tones in the sequences was varied by 2, 4 and 10 semitones. In the second experiment, a constant delta-f of 6 semitones was maintained but the inter-stimulus intervals (ISIs) between A and B tones were varied. Auditory evoked magnetic fields (AEFs) were recorded using magnetoencephalography (MEG). Participants watched a muted video of their choice and ignored the auditory stimuli. In a subsequent behavioral study both MEG experiments were replicated to provide information about the participants' perceptual state. MEG measurements showed a significant increase in the amplitude of the B-tone related P1 component of the AEFs as delta-f increased. This effect was seen predominantly in the left hemisphere. A significant increase in the amplitude of the N1 component was only obtained for a Test sequence delta-f of 10 semitones with a prior Adaptation sequence of 2 semitones. This effect was more pronounced in the right hemisphere. The additional behavioral data indicated an increased probability of two-stream perception for delta-f = 4 and delta-f = 10 semitones with a preceding Adaptation sequence of 2 semitones. However, neither the neural activity nor the perception of the successive streaming sequences was modulated when the ISIs were varied. Our MEG experiment demonstrated differences in the behavior of the P1 and N1 components during the automatic segregation of sounds when induced by an initial Adaptation sequence. The P1 component appeared enhanced in all Test conditions, demonstrating the effect of the preceding context, whereas N1 was specifically modulated only by large delta-f Test sequences induced by a preceding small delta-f Adaptation sequence. These results suggest that the P1 and N1 components represent at least partially different systems that underlie the neural representation of auditory streaming.
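For readers who want to experiment with this paradigm, a minimal NumPy sketch of an ABA- triplet generator follows. This is not the authors' stimulus code; the tone duration, ISI, sample rate, and base frequency are illustrative assumptions, but the semitone relation between A and B tones follows the standard definition (B = A · 2^(Δf/12)).

```python
import numpy as np

def tone(freq_hz, dur_s, sr=44100, ramp_s=0.005):
    """Pure tone with raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(dur_s * sr)) / sr
    y = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * sr)
    env = np.ones_like(y)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env[:n_ramp] = ramp
    env[-n_ramp:] = ramp[::-1]
    return y * env

def aba_sequence(a_hz=1000.0, delta_f_semitones=4, n_triplets=10,
                 tone_dur=0.05, isi=0.05, sr=44100):
    """Concatenate ABA- triplets; B lies delta_f semitones above A,
    and '-' is a silent slot equal to one tone-plus-ISI duration."""
    b_hz = a_hz * 2 ** (delta_f_semitones / 12.0)  # semitone spacing
    gap = np.zeros(int(isi * sr))
    slot = np.zeros(int((tone_dur + isi) * sr))    # silent fourth slot
    triplet = np.concatenate([tone(a_hz, tone_dur, sr), gap,
                              tone(b_hz, tone_dur, sr), gap,
                              tone(a_hz, tone_dur, sr), gap, slot])
    return np.tile(triplet, n_triplets)

seq = aba_sequence(delta_f_semitones=10)  # a large delta-f Test sequence
```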
Metagenomic Profiling of Microbial Composition and Antibiotic Resistance Determinants in Puget Sound
Port, Jesse A.; Wallace, James C.; Griffith, William C.; Faustman, Elaine M.
2012-01-01
Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single-species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected at six sites in Puget Sound, as well as from one wastewater treatment plant (WWTP) that discharges into the Sound, and pyrosequenced. A total of ∼550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. In addition, these genomic signals of potential human impact can be used to guide initial public health monitoring as well as more targeted and functionally-based investigations. PMID:23144718
Enamel-Caries Prevention Using Two Applications of Fluoride-Laser Sequence.
Noureldin, Amal; Quintanilla, Ines; Kontogiorgos, Elias; Jones, Daniel
2016-03-01
Studies have demonstrated a significant synergism between fluoride and laser in reducing enamel solubility. However, minimal research has focused on testing the sequence of their application, and no other research has investigated the preventive effect of repeated applications of a combined treatment. This study investigated the effect of two applications of a fluoride-laser sequence on the resistance of sound enamel to cariogenic challenge compared to a one-time application. Sixty enamel slabs were cut from 10 human incisors, ground flat, polished and coated with nail varnish except for a 2 x 2 mm window. Specimens were randomly assigned into five groups of 12 specimens: (CON-) negative control, received no treatment; (CON+) positive control, received pH challenge; (FV) treated with M fluoride varnish; (F-L1) one application of fluoride varnish followed by CO2 laser treatment (short-pulsed 10.6 µm, 2.4 J/cm², 10 Hz, 10 s); and (F-L2) two applications of the fluoride varnish-laser treatment. Specimens were left in distilled water for one day between applications. Except for CON-, all groups were submitted to pH cycling for 9 days (8 days demineralisation/remineralisation + 1 day remineralisation bath) at 37°C. Enamel demineralization was quantitatively evaluated by measuring Knoop surface microhardness (SMH) (50 g/10 s). Data were analyzed using one-way ANOVA (p ≤ 0.05) followed by Duncan's Multiple Range Test. Within the limitations of this study, it was found that one or two applications of the fluoride-laser sequence significantly improved the resistance of the sound enamel surface to acid attack compared to the FV-treated group. Although both fluoride-laser groups (F-L1 and F-L2) showed higher SMH values, significant resistance to demineralization was only obtained with repeated applications.
2014-08-08
ISS040-E-089959 (8 Aug. 2014) --- King Sound on the northwest coast of Australia is featured in this image photographed by an Expedition 40 crew member on the International Space Station. The Fitzroy River, one of Australia's largest, empties into the Sound, a large gulf in Western Australia (approximately 120 kilometers long). King Sound has the highest tides in Australia, in the range of 11-12 meters, the second highest in the world after the Bay of Fundy on the east coast of North America. The strong brown smudge at the head of the Sound contrasts with the clearer blue water along the rest of the coast. This is mud stirred up by the tides and also supplied by the Fitzroy River. The bright reflection point of the sun obscures the blue water of the Indian Ocean (top left). Just to the west of the Sound, thick plumes of wildfire smoke, driven by northeast winds, obscure the coastline. A wide field of “popcorn cumulus” clouds (right) is a common effect of daily heating of the ground surface.
Adults with Specific Language Impairment fail to consolidate speech sounds during sleep.
Earle, F Sayako; Landi, Nicole; Myers, Emily B
2018-02-14
Specific Language Impairment (SLI) is a common learning disability that is associated with poor speech sound representations. These differences in representational quality are thought to impose a burden on spoken language processing. The underlying mechanism to account for impoverished speech sound representations remains in debate. Previous findings that implicate sleep as important for building speech representations, combined with reports of atypical sleep in SLI, motivate the current investigation into a potential consolidation mechanism as a source of impoverished representations in SLI. In the current study, we trained individuals with SLI on a new (nonnative) set of speech sounds and tracked their perceptual accuracy and neural responses to these sounds over two days. Adults with SLI achieved performance comparable to typical controls during training, but demonstrated a distinct lack of overnight gains on the next day. We propose that those with SLI may be impaired in the consolidation of acoustic-phonetic information. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan
2017-10-01
It is common for a physical system to resonate at a particular frequency that depends on physical parameters, which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
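The demodulation step of such a lock-in can be illustrated digitally: multiply the measured signal by quadrature references at the tracked frequency and low-pass the products. The following is a minimal NumPy/SciPy sketch of the principle, not the analog instrument described; the reference frequency, cutoff, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lock_in(signal, f_ref, sr, f_cut=10.0):
    """Digital lock-in: mix with quadrature references at f_ref and
    low-pass the products to recover amplitude and phase."""
    t = np.arange(len(signal)) / sr
    i_mix = signal * np.cos(2 * np.pi * f_ref * t)
    q_mix = signal * np.sin(2 * np.pi * f_ref * t)
    b, a = butter(4, f_cut / (sr / 2))      # 4th-order low-pass
    i_out = 2 * filtfilt(b, a, i_mix)       # in-phase; factor 2 restores
    q_out = 2 * filtfilt(b, a, q_mix)       # the amplitude lost in mixing
    return np.hypot(i_out, q_out), np.arctan2(q_out, i_out)

# Example: a 2 kHz "second sound" tone buried in noise, sampled at 50 kHz
sr = 50000
t = np.arange(sr) / sr
sig = 0.5 * np.sin(2 * np.pi * 2000 * t) + 0.2 * np.random.randn(sr)
amp, phase = lock_in(sig, f_ref=2000, sr=sr)
print(amp.mean())  # recovers roughly 0.5 despite the noise
```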
Location, location, location: finding a suitable home among the noise
Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.
2012-01-01
While sound is a useful cue for guiding the onshore orientation of larvae because it travels long distances underwater, it also has the potential to convey valuable information about the quality and type of the habitat at the source. Here, we provide, to our knowledge, the first evidence that settlement-stage coastal crab species can interpret and show a strong settlement and metamorphosis response to habitat-related differences in natural underwater sound. Laboratory- and field-based experiments demonstrated that time to metamorphosis in the settlement-stage larvae of common coastal crab species varied in response to different underwater sound signatures produced by different habitat types. The megalopae of five species of both temperate and tropical crabs showed a significant decrease in time to metamorphosis, when exposed to sound from their optimal settlement habitat type compared with other habitat types. These results indicate that sounds emanating from specific underwater habitats may play a major role in determining spatial patterns of recruitment in coastal crab species. PMID:22673354
Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean
2018-05-01
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Kanjanapen, Manorth; Kunsombat, Cherdsak; Chiangga, Surasak
2017-09-01
The functional transformation method (FTM) is a powerful tool for detailed investigation of digital sound synthesis by physical modeling: the resulting sound, or the vibrational characteristics measured at discretized points on real instruments, is obtained by directly solving the underlying partial differential equation (PDE). In this paper, we present Higuchi's method for examining differences between the timbres of tones and for estimating the fractal dimension of musical signals synthesized by the FTM, which carries information about their geometrical structure. With Higuchi's method the whole process is uncomplicated and fast, and the analysis requires no expertise in physics or virtuoso musicianship, offering an accessible way for ordinary listeners to judge how similar the sounds are.
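Higuchi's algorithm itself is compact: for each lag k it forms k decimated sub-series, averages their normalized curve lengths L(k), and takes the fractal dimension as the slope of log L(k) against log(1/k). A generic implementation (not the authors' code) follows.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate Higuchi's fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean_lengths = []
    for k in range(1, k_max + 1):
        total = 0.0
        for m in range(k):                      # k decimated sub-series
            idx = np.arange(m, n, k)
            n_seg = len(idx) - 1
            if n_seg == 0:
                continue
            # curve length of the sub-series, with Higuchi's normalization
            total += np.abs(np.diff(x[idx])).sum() * (n - 1) / (n_seg * k) / k
        mean_lengths.append(total / k)          # average over the k offsets
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(mean_lengths), 1)
    return slope

# Sanity checks: white noise should approach 2, a smooth sine stays near 1
print(higuchi_fd(np.random.randn(2000)))
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000))))
```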
NASA Astrophysics Data System (ADS)
Hesse, C.; Papantoni, V.; Algermissen, S.; Monner, H. P.
2017-08-01
Active control of structural sound radiation is a promising technique to overcome the poor passive acoustic isolation performance of lightweight structures in the low-frequency region. Active structural acoustic control commonly aims at the suppression of the far-field radiated sound power. This paper is concerned with the active control of sound radiation into acoustic enclosures. Experimental results of a coupled rectangular plate-fluid system under stochastic excitation are presented. The amplitudes of the frequency-independent interior radiation modes are determined in real-time using a set of structural vibration sensors, for the purpose of estimating their contribution to the acoustic potential energy in the enclosure. This approach is validated by acoustic measurements inside the cavity. Utilizing a feedback control approach, a broadband reduction of the global acoustic response inside the enclosure is achieved.
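As a rough illustration of how radiation-mode amplitudes might be estimated in real time from an array of vibration sensors, here is a minimal least-squares sketch. The mode-shape matrix Phi, sensor count, and noise level are hypothetical placeholders, not values from the experiment.

```python
import numpy as np

# Hypothetical setup: n_s vibration sensors, n_m interior radiation modes.
# Phi[i, j] = contribution of radiation mode j at sensor location i
# (in practice obtained from a model of the coupled plate-fluid system).
rng = np.random.default_rng(2)
n_s, n_m = 12, 4
Phi = rng.standard_normal((n_s, n_m))

def mode_amplitudes(v_sensors, Phi):
    """Least-squares estimate of radiation-mode amplitudes from
    structural sensor signals; the acoustic potential energy in the
    enclosure is then approximated by the sum of squared amplitudes."""
    a, *_ = np.linalg.lstsq(Phi, v_sensors, rcond=None)
    return a

v = Phi @ np.array([1.0, 0.5, 0.0, 0.2]) + 0.01 * rng.standard_normal(n_s)
a_hat = mode_amplitudes(v, Phi)
energy = np.sum(a_hat ** 2)   # proxy for acoustic potential energy
```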
Sounds of silence: synonymous nucleotides as a key to biological regulation and complexity
Shabalina, Svetlana A.; Spiridonov, Nikolay A.; Kashina, Anna
2013-01-01
Messenger RNA is a key component of an intricate regulatory network of its own. It accommodates numerous nucleotide signals that overlap protein coding sequences and are responsible for multiple levels of regulation and generation of biological complexity. A wealth of structural and regulatory information, which mRNA carries in addition to the encoded amino acid sequence, raises the question of how these signals and overlapping codes are delineated along non-synonymous and synonymous positions in protein coding regions, especially in eukaryotes. Silent or synonymous codon positions, which do not determine amino acid sequences of the encoded proteins, define mRNA secondary structure and stability and affect the rate of translation, folding and post-translational modifications of nascent polypeptides. The RNA level selection is acting on synonymous sites in both prokaryotes and eukaryotes and is more common than previously thought. Selection pressure on the coding gene regions follows three-nucleotide periodic pattern of nucleotide base-pairing in mRNA, which is imposed by the genetic code. Synonymous positions of the coding regions have a higher level of hybridization potential relative to non-synonymous positions, and are multifunctional in their regulatory and structural roles. Recent experimental evidence and analysis of mRNA structure and interspecies conservation suggest that there is an evolutionary tradeoff between selective pressure acting at the RNA and protein levels. Here we provide a comprehensive overview of the studies that define the role of silent positions in regulating RNA structure and processing that exert downstream effects on proteins and their functions. PMID:23293005
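To make the distinction between synonymous and non-synonymous positions concrete, here is a small sketch assuming Biopython is installed; it checks whether a codon's third position is fourfold degenerate, i.e. whether every base substitution there is silent.

```python
from Bio.Data import CodonTable  # Biopython's standard genetic code

table = CodonTable.unambiguous_dna_by_name["Standard"]

def aa(codon):
    """Amino acid for a DNA codon; '*' for stop codons."""
    return table.forward_table.get(codon, "*")

def fourfold_degenerate(codon):
    """True if all four bases at the third position encode the same
    amino acid, making the site fully synonymous."""
    return len({aa(codon[:2] + b) for b in "ACGT"}) == 1

for c in ("CTA", "AAA", "TGG"):
    print(c, aa(c), fourfold_degenerate(c))
# CTA (Leu) -> True; AAA (Lys) -> False; TGG (Trp) -> False
```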
Transformation of temporal sequences in the zebra finch auditory system
Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J
2016-01-01
This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or ‘spatial codes', and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971
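A click sequence of the kind described can be synthesized in a few lines; the sketch below is a generic generator (the sample rate and click duration are illustrative assumptions, not the authors' stimulus parameters).

```python
import numpy as np

def click_train(intervals_ms, sr=44100, click_dur_ms=1.0):
    """Render a click sequence from a list of inter-click intervals.
    Each click is a brief rectangular pulse."""
    n_click = int(sr * click_dur_ms / 1000.0)
    onsets = np.cumsum([0] + list(intervals_ms)) * sr / 1000.0
    out = np.zeros(int(onsets[-1]) + n_click)
    for onset in onsets.astype(int):
        out[onset:onset + n_click] = 1.0
    return out

# A pattern drawn from the behaviorally discriminable 11-40 ms range
stim = click_train([11, 25, 40, 17, 33])
```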
Application of semi-supervised deep learning to lung sound analysis.
Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon
2016-08-01
The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients (typically N<20) and usually limited to a single type of lung sound. Larger research studies have also been impeded by the challenge of labeling large volumes of data, which is extremely labor-intensive. In this paper, we present the development of a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, a data set significantly larger than in previously published studies. Data were collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.
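The paper's actual model is a deep network trained on a large corpus; purely as a minimal stand-in for the semi-supervised idea, the sketch below wraps a simple classifier in scikit-learn's self-training loop over MFCC summary features. librosa is assumed to be available, and the file names and labels are hypothetical.

```python
import numpy as np
import librosa                      # assumed available for MFCC features
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

def mfcc_features(path, sr=8000):
    """Mean MFCC vector summarizing one lung-sound recording."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# Placeholder data: wheeze=1, no wheeze=0, and -1 marks unlabeled
# recordings, which is the convention scikit-learn expects.
paths = ["rec_001.wav", "rec_002.wav", "rec_003.wav"]  # hypothetical files
labels = np.array([1, 0, -1])

X = np.stack([mfcc_features(p) for p in paths])
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, labels)    # unlabeled rows join training via self-training
```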
Applying cybernetic technology to diagnose human pulmonary sounds.
Chen, Mei-Yung; Chou, Cheng-Han
2014-06-01
Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) lie largely below 120 Hz, where the human ear is not sensitive, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, decomposing the PS signals into frequency subbands. Using a statistical method, we extracted 17 features that served as the input vectors of a neural network. We propose a two-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy compared with a single (haploid) neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To extend traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds; various PS waves, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
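A sketch of the wavelet-subband feature extraction is given below, assuming PyWavelets and scikit-learn are available. The exact 17 features are not specified in the abstract, so three generic statistics per subband are used, and an MLP stands in for the BP+LVQ two-stage classifier.

```python
import numpy as np
import pywt                          # PyWavelets, assumed installed
from sklearn.neural_network import MLPClassifier

def subband_features(signal, wavelet="db4", level=4):
    """Statistical features (mean |x|, std, relative energy) of each
    wavelet subband, in the spirit of the feature vector described."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    feats = []
    for c, e in zip(coeffs, energies):
        feats += [np.mean(np.abs(c)), np.std(c), e / energies.sum()]
    return np.array(feats)

# Toy training run on random signals standing in for recorded PSs
X = np.stack([subband_features(np.random.randn(4000)) for _ in range(20)])
y = np.random.randint(0, 6, size=20)       # six pulmonary sound classes
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
```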
Pneumothorax effects on pulmonary acoustic transmission.
Mansy, Hansen A; Balk, Robert A; Warren, William H; Royston, Thomas J; Dai, Zoujun; Peng, Ying; Sandler, Richard H
2015-08-01
Pneumothorax (PTX) is an abnormal accumulation of air between the lung and the chest wall. It is a relatively common and potentially life-threatening condition encountered in patients who are critically ill or have experienced trauma. Auscultatory signs of PTX include decreased breath sounds during the physical examination. The objective of this exploratory study was to investigate the changes in sound transmission in the thorax due to PTX in humans. Nineteen human subjects who underwent video-assisted thoracic surgery, during which lung collapse is a normal part of the surgery, participated in the study. After subjects were intubated and mechanically ventilated, sounds were introduced into their airways via an endotracheal tube. Sounds were then measured over the chest surface before and after lung collapse. PTX caused small changes in acoustic transmission for frequencies below 400 Hz. A larger decrease in sound transmission was observed from 400 to 600 Hz, possibly due to the stronger acoustic transmission blocking of the pleural air. At frequencies above 1 kHz, the sound waves became weaker and so did their changes with PTX. The study elucidated some of the possible mechanisms of sound propagation changes with PTX. Sound transmission measurement was able to distinguish between baseline and PTX states in this small patient group. Future studies are needed to evaluate this technique in a wider population. Copyright © 2015 the American Physiological Society.
Decoding sound level in the marmoset primary auditory cortex.
Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L
2017-10-01
Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
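The population-decoding logic can be illustrated with a toy model: sigmoid rate-level functions for monotonic neurons, Gaussian (level-tuned) functions for nonmonotonic ones, and a simple nearest-neighbor decoder. This is a hedged sketch with made-up parameters, not the authors' A1 data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
levels = np.arange(0, 80, 5)                 # sound levels in dB SPL

def monotonic(level, thresh):                # sigmoid rate-level function
    return 1.0 / (1.0 + np.exp(-(level - thresh) / 8.0))

def nonmonotonic(level, best):               # level-tuned (Gaussian) neuron
    return np.exp(-0.5 * ((level - best) / 10.0) ** 2)

def population(levels, n_mono, n_non, noise=0.1):
    """Noisy population response matrix: levels x neurons."""
    rates = [monotonic(levels, t) for t in rng.uniform(10, 60, n_mono)]
    rates += [nonmonotonic(levels, b) for b in rng.uniform(10, 70, n_non)]
    r = np.stack(rates, axis=1)
    return r + noise * rng.standard_normal(r.shape)

# Decode level from a mixed population; vary the mixture to compare
X_train = np.vstack([population(levels, 10, 10) for _ in range(50)])
y_train = np.tile(levels, 50)
X_test = population(levels, 10, 10)
dec = KNeighborsClassifier(5).fit(X_train, y_train)
print((dec.predict(X_test) == levels).mean())  # decoding accuracy
```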
Graded behavioral responses and habituation to sound in the common cuttlefish Sepia officinalis.
Samson, Julia E; Mooney, T Aran; Gussekloo, Sander W S; Hanlon, Roger T
2014-12-15
Sound is a widely available and vital cue in aquatic environments, yet most bioacoustic research has focused on marine vertebrates, leaving sound detection in invertebrates poorly understood. Cephalopods are an ecologically key taxon that likely use sound and may be impacted by increasing anthropogenic ocean noise, but little is known regarding their behavioral responses or adaptations to sound stimuli. These experiments identify the acoustic range and levels that elicit a wide range of secondary defense behaviors such as inking, jetting and rapid coloration change. Secondarily, it was found that cuttlefish habituate to certain sound stimuli. The present study examined the behavioral responses of 22 cuttlefish (Sepia officinalis) to pure-tone pips ranging from 80 to 1000 Hz with sound pressure levels of 85-188 dB re. 1 μPa rms and particle accelerations of 0-17.1 m s(-2). Cuttlefish escape responses (inking, jetting) were observed between frequencies of 80 and 300 Hz and at sound levels above 140 dB re. 1 μPa rms and 0.01 m s(-2) (0.74 m s(-2) for inking responses). Body patterning changes and fin movements were observed at all frequencies and sound levels. Response intensity was dependent upon stimulus amplitude and frequency, suggesting that cuttlefish also possess loudness perception with a maximum sensitivity around 150 Hz. Cuttlefish habituated to repeated 200 Hz tone pips, at two sound intensities. Total response inhibition was not reached, however, and a basal response remained present in most animals. The graded responses provide a loudness sensitivity curve and suggest an ecological function for sound use in cephalopods. © 2014. Published by The Company of Biologists Ltd.
Auditory Demonstrations for Science, Technology, Engineering, and Mathematics (STEM) Outreach
2015-01-01
... were placed on a foam rubber pad. The bone vibrators were not attached to headbands, allowing students to freely experiment with the devices. ... This bookmark is a visual representation of various common sounds that range from soft to very loud, with the corresponding intensity level marked. ... The other pathway is called bone conduction. In bone-conducted hearing, sound waves in bone and soft tissue are transmitted directly to the internal ear.
Engineering For Ship Production: A Textbook
1986-06-01
content. (g) Bulbous Bow. Bulbous bows are wave-resistance-reducing devices. They incorporate displacement at the bow forefoot, which sets up a surface ... displacement from the fore body in way of the load waterline entrance to the bow forefoot in the form of a faired-in bulb. More recently, the ... install open-ended sounding tubes with striking plates welded to the tank bottom. Where the sounding tube slopes at the end, it is common to close the
NASA Astrophysics Data System (ADS)
Feltz, M.; Knuteson, R.; Ackerman, S.; Revercomb, H.
2014-05-01
Comparisons of satellite temperature profile products from GPS radio occultation (RO) and hyperspectral infrared (IR)/microwave (MW) sounders are made using a previously developed matchup technique. The profile matchup technique matches GPS RO and IR/MW sounder profiles temporally, within 1 h, and spatially, taking into account the unique RO profile geometry and theoretical spatial resolution by calculating a ray-path averaged sounder profile. The comparisons use the GPS RO dry temperature product. Sounder minus GPS RO differences are computed and used to calculate bias and RMS profile statistics, which are created for global and 30° latitude zones for selected time periods. These statistics are created from various combinations of temperature profile data from the Constellation Observing System for Meteorology, Ionosphere & Climate (COSMIC) network, Global Navigation Satellite System Receiver for Atmospheric Sounding (GRAS) instrument, and the Atmospheric Infrared Sounder (AIRS)/Advanced Microwave Sounding Unit (AMSU), Infrared Atmospheric Sounding Interferometer (IASI)/AMSU, and Crosstrack Infrared Sounder (CrIS)/Advanced Technology Microwave Sounder (ATMS) sounding systems. By overlaying combinations of these matchup statistics for similar time and space domains, comparisons of different sounders' products, sounder product versions, and GPS RO products can be made. The COSMIC GPS RO network has the spatial coverage, time continuity, and stability to provide a common reference for comparison of the sounder profile products. The results of this study demonstrate that GPS RO has potential to act as a common temperature reference and can help facilitate inter-comparison of sounding retrieval methods and also highlight differences among sensor product versions.
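Once sounder and GPS RO profiles have been matched onto a common vertical grid, the bias and RMS statistics reduce to per-level aggregates over matchups. A minimal sketch follows; the array shapes and toy values are assumptions, not the study's data.

```python
import numpy as np

def matchup_stats(sounder_t, ro_t):
    """Bias and RMS profiles of sounder-minus-GPS-RO temperature
    differences; inputs are (n_matchups, n_levels) arrays on a common
    vertical grid, with NaN wherever a level is missing."""
    diff = sounder_t - ro_t
    bias = np.nanmean(diff, axis=0)               # per-level mean difference
    rms = np.sqrt(np.nanmean(diff ** 2, axis=0))  # per-level RMS difference
    return bias, rms

# Toy example: 100 matchups on a 30-level grid with a 0.3 K sounder bias
rng = np.random.default_rng(1)
ro = 220 + rng.standard_normal((100, 30))
sounder = ro + 0.3 + 0.5 * rng.standard_normal((100, 30))
bias, rms = matchup_stats(sounder, ro)
```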
Gregg, Jacob L.; Grady, Courtney A.; Thompson, Rachel L.; Purcell, Maureen K.; Friedman, Carolyn S.; Hershberger, Paul K.
2014-01-01
A combination of field surveys, molecular typing, and laboratory experiments was used to improve our understanding of the distribution and transmission mechanisms of fish parasites in the genus Ichthyophonus. Ichthyophonus spp. infections were detected from the Bering Sea to the coast of Oregon in 10 of 13 host species surveyed. Sequences of rDNA extracted from these isolates indicate that a ubiquitous Ichthyophonus type occurs in the NE Pacific Ocean and Bering Sea and accounts for nearly all the infections encountered. Among NE Pacific isolates, only parasites from yellowtail rockfish and Puget Sound rockfish varied at the DNA locus examined. These data suggest that a single source population of these parasites is available to fishes in diverse niches across a wide geographic range. A direct life cycle within a common forage species could account for the relatively low parasite diversity we encountered. In the laboratory we tested the hypothesis that waterborne transmission occurs among Pacific herring, a common NE Pacific forage species. No horizontal transmission occurred during a four-month cohabitation experiment involving infected herring and conspecific sentinels. The complete life cycle of Ichthyophonus spp. is not known, but these results suggest that system-wide processes maintain a relatively homogenous parasite population.
The Last Seat in the House: The Story of Hanley Sound
NASA Astrophysics Data System (ADS)
Kane, John
Prior to the rush of live outdoor sound during the 1950s, a young, audio-savvy Bill Hanley recognized certain inadequacies within the widely used public address system marketplace. Hanley's techniques allowed him to construct systems of sound that changed what the audience heard during outdoor events. Through my research, I reveal new insight into how Hanley and those who worked at his business (Hanley Sound) had a direct, innovative influence on specific sound applications that are now widely used and often taken for granted. Hanley's innovations shifted an existing, oratory-based public address industry into a new area of technology rich with clarity and intelligibility. What makes his story so unique is that his relationship with sound was so intimate that it superseded his immediate economic, safety, and political concerns. He acted selflessly and with extreme focus. As Hanley's reputation grew, so did audience and performer demand for clear, audible concert sound. Over time, he would provide sound for some of the largest antiwar peace rallies and concerts in American history. Hanley worked in the thick of extreme civil unrest, which was not typical for the average soundman of the day. Conveniently, Hanley's passion for clarity in sound also happened to coincide with popular music's transition into an important conveyor of political messages through festivals. Since May 2011 I have been exploring the life of Bill Hanley, an innovative leader in sound. I use interdisciplinary approaches to uncover cultural, historical, social, political, and psychological occasions in Hanley's life that were imperative to his ongoing development. Filmed sequences, such as talking-head interviews (with friends, family members, and professional colleagues), historical archival 8 mm footage, family photos, and music ephemera help build this qualitative ethnographic analysis of both Bill's development and the world around him. Reflective, intimate interviews with Hanley reveal his charismatic, innovative leadership style, which had a direct influence on those who worked for and around him. Finally, his story contains additional conflicts that I felt obligated to address. For one, the lack of financial reward that Hanley Sound faced is intriguing to me. Being recognized as one of the true pioneers of the business while not reaping the financial benefits of such efforts needed to be examined. As the industry he influenced grew around him, those who borrowed his ideas moved forward, creating the infrastructure of contemporary sound reinforcement as we know it today. Hanley's pioneering efforts not only created the foundation of this unknown industry but also gave true definition to the term sound engineer.
MAC/FAC: A Model of Similarity-Based Retrieval. Technical Report #59.
ERIC Educational Resources Information Center
Forbus, Kenneth D.; And Others
A model of similarity-based retrieval is presented that attempts to capture the following seemingly contradictory psychological phenomena: (1) structural commonalities are weighed more heavily than surface commonalities in soundness or similarity judgments (when both members are present); (2) superficial similarity is more important in retrieval from…
Structurally Sound Statistics Instruction
ERIC Educational Resources Information Center
Casey, Stephanie A.; Bostic, Jonathan D.
2016-01-01
The Common Core's Standards for Mathematical Practice (SMP) call for all K-grade 12 students to develop expertise in the processes and proficiencies of doing mathematics. However, the Common Core State Standards for Mathematics (CCSSM) (CCSSI 2010) as a whole addresses students' learning of not only mathematics but also statistics. This situation…
Relationships between early literacy and nonlinguistic rhythmic processes in kindergarteners.
Ozernov-Palchik, Ola; Wolf, Maryanne; Patel, Aniruddh D
2018-03-01
A growing number of studies report links between nonlinguistic rhythmic abilities and certain linguistic abilities, particularly phonological skills. The current study investigated the relationship between nonlinguistic rhythmic processing, phonological abilities, and early literacy abilities in kindergarteners. A distinctive aspect of the current work was the exploration of whether processing of different types of rhythmic patterns is differentially related to kindergarteners' phonological and reading-related abilities. Specifically, we examined the processing of metrical versus nonmetrical rhythmic patterns, that is, patterns capable of being subdivided into equal temporal intervals or not (Povel & Essens, 1985). This is an important comparison because most music involves metrical sequences, in which rhythm often has an underlying temporal grid of isochronous units. In contrast, nonmetrical sequences are arguably more typical to speech rhythm, which is temporally structured but does not involve an underlying grid of equal temporal units. A rhythm discrimination app with metrical and nonmetrical patterns was administered to 74 kindergarteners in conjunction with cognitive and preliteracy measures. Findings support a relationship among rhythm perception, phonological awareness, and letter-sound knowledge (an essential precursor of reading). A mediation analysis revealed that the association between rhythm perception and letter-sound knowledge is mediated through phonological awareness. Furthermore, metrical perception accounted for unique variance in letter-sound knowledge above all other language and cognitive measures. These results point to a unique role for temporal regularity processing in the association between musical rhythm and literacy in young children. Copyright © 2017 Elsevier Inc. All rights reserved.
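As a deliberately simplified illustration of the metrical/nonmetrical distinction, the sketch below only tests whether all inter-onset intervals share a usable common grid unit; it is not the full Povel & Essens (1985) clock model, and the threshold is an arbitrary assumption.

```python
from math import gcd
from functools import reduce

def is_metrical(intervals_ms, min_unit_ms=100):
    """Crude grid test: a pattern counts as metrical here if all
    inter-onset intervals are multiples of a common unit at least
    min_unit_ms long (a simplification of Povel & Essens, 1985)."""
    unit = reduce(gcd, intervals_ms)
    return unit >= min_unit_ms

print(is_metrical([200, 400, 200, 600]))   # True: fits a 200 ms grid
print(is_metrical([210, 390, 170, 580]))   # False: no usable common unit
```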
Interjections in the EFL Classroom: Teaching Sounds and Sequences
ERIC Educational Resources Information Center
Reber, Elisabeth
2011-01-01
In line with a communicative curriculum for English, it is claimed that communicative competence involves knowledge about when and how to display affectivity in talk-in-interaction. Typically, interjections have been described as a lexical means for expressions of emotion. A survey of textbooks canonical of EFL at German elementary and secondary…
ERIC Educational Resources Information Center
May Bernhardt, B.; Hanson, R.; Perez, D.; Ávila, C.; Lleó, C.; Stemberger, J. P.; Carballo, G.; Mendoza, E.; Fresneda, D.; Chávez-Peón, M.
2015-01-01
Background: Research on children's word structure development is limited. Yet, phonological intervention aims to accelerate the acquisition of both speech-sounds and word structure, such as word length, stress or shapes in CV sequences. Until normative studies and meta-analyses provide in-depth information on this topic, smaller investigations can…
IMPLICATIONS OF RESEARCH FOR TEACHING TYPEWRITING. DELTA PI EPSILON RESEARCH BULLETIN 2.
ERIC Educational Resources Information Center
WEST, LEONARD J.
Interpretations and recommendations believed to be soundly supported by research are presented in this nontechnical report. The material should be especially useful in methods courses and should be a valuable reference for teachers of typewriting. Students should not practice on isolated letter sequences or on a limited vocabulary. With respect…
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Graphemic Cohesion Effect in Reading and Writing Complex Graphemes
ERIC Educational Resources Information Center
Spinelli, Elsa; Kandel, Sonia; Guerassimovitch, Helena; Ferrand, Ludovic
2012-01-01
"AU" /o/ and "AN" /a/ in French are both complex graphemes, but they vary in their strength of association to their respective sounds. The letter sequence "AU" is systematically associated to the phoneme /o/, and as such is always parsed as a complex grapheme. However, "AN" can be associated with either one…
ERIC Educational Resources Information Center
Abla, Dilshat; Okanoya, Kazuo
2008-01-01
Word segmentation, that is, discovering the boundaries between words that are embedded in a continuous speech stream, is an important faculty for language learners; humans solve this task partly by calculating transitional probabilities between sounds. Behavioral and ERP studies suggest that detection of sequential probabilities (statistical…
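The transitional-probability computation invoked here is compact enough to sketch. Below is a minimal Python illustration, assuming a syllable stream built from trisyllabic nonsense words of the kind used in this literature; the stream and the word inventory are illustrative, not the stimuli of the study above.

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next | current) over adjacent syllables. In artificial-
    language studies, within-word transitions are high and transitions
    across word boundaries are low, which is the cue learners exploit."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Illustrative stream built from two nonsense words, "tupiro" and "golabu".
stream = "tu pi ro go la bu tu pi ro tu pi ro go la bu".split()
for pair, p in sorted(transitional_probabilities(stream).items()):
    print(pair, round(p, 2))  # e.g. tu->pi is 1.0; ro->go only 0.67
```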
Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram
2011-01-01
Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g., visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, ranging from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.
Blue whale vocalizations recorded around New Zealand: 1964-2013.
Miller, Brian S; Collins, Kym; Barlow, Jay; Calderan, Susannah; Leaper, Russell; McDonald, Mark; Ensor, Paul; Olson, Paula A; Olavarria, Carlos; Double, Michael C
2014-03-01
Previous underwater recordings made in New Zealand have identified a complex sequence of low frequency sounds that have been attributed to blue whales based on similarity to blue whale songs in other areas. Recordings of sounds with these characteristics were made opportunistically during the Southern Ocean Research Partnership's recent Antarctic Blue Whale Voyage. Detections of these sounds occurred all around the South Island of New Zealand during the voyage transits from Nelson, New Zealand to the Antarctic and return. By following acoustic bearings from directional sonobuoys, blue whales were visually detected and confirmed as the source of these sounds. These recordings, together with the historical recordings made northeast of New Zealand, indicate song types that persist over several decades and are indicative of the year-round presence of a population of blue whales that inhabits the waters around New Zealand. Measurements of the four-part vocalizations reveal that blue whale song in this region has changed slowly, but consistently over the past 50 years. The most intense units of these calls were detected as far south as 53°S, which represents a considerable range extension compared to the limited prior data on the spatial distribution of this population.
Underwater temporary threshold shift in pinnipeds: effects of noise level and duration.
Kastak, David; Southall, Brandon L; Schusterman, Ronald J; Kastak, Colleen Reichmuth
2005-11-01
Behavioral psychophysical techniques were used to evaluate the residual effects of underwater noise on the hearing sensitivity of three pinnipeds: a California sea lion (Zalophus californianus), a harbor seal (Phoca vitulina), and a northern elephant seal (Mirounga angustirostris). Temporary threshold shift (TTS), defined as the difference between auditory thresholds obtained before and after noise exposure, was assessed. The subjects were exposed to octave-band noise centered at 2500 Hz at two sound pressure levels: 80 and 95 dB SL (re: auditory threshold at 2500 Hz). Noise exposure durations were 22, 25, and 50 min. Threshold shifts were assessed at 2500 and 3530 Hz. Mean threshold shifts ranged from 2.9-12.2 dB. Full recovery of auditory sensitivity occurred within 24 h of noise exposure. Control sequences, comprising sham noise exposures, did not result in significant mean threshold shifts for any subject. Threshold shift magnitudes increased with increasing noise sound exposure level (SEL) for two of the three subjects. The results underscore the importance of including sound exposure metrics (incorporating sound pressure level and exposure duration) in order to fully assess the effects of noise on marine mammal hearing.
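The sound exposure metric the authors highlight combines level and duration on an equal-energy basis; the textbook relation is SEL = SPL + 10*log10(T/1 s). A minimal sketch under that assumption (the study itself reports exposures in sensation level, dB SL re each animal's threshold, so the absolute numbers below are illustrative only):

```python
import math

def sound_exposure_level(spl_db, duration_s):
    """SEL = SPL + 10*log10(T / 1 s): equal-energy trade-off between
    level and duration, so doubling the duration adds ~3 dB."""
    return spl_db + 10 * math.log10(duration_s)

# Illustrative values using the study's exposure durations.
for minutes in (22, 25, 50):
    print(minutes, "min:", round(sound_exposure_level(95, minutes * 60), 1), "dB")
```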
Assessment of sound quality perception in cochlear implant users during music listening.
Roy, Alexis T; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J
2012-04-01
Although cochlear implant (CI) users frequently report deterioration of sound quality when listening to music, few methods exist to quantify these subjective claims. Our aims were (1) to design a novel research method for quantifying sound quality perception in CI users during music listening, and (2) to validate this method by assessing one attribute of music perception, bass frequency perception, which is hypothesized to be relevant to overall musical sound quality perception. We hypothesized that limitations in bass frequency perception contribute to CI-mediated sound quality deteriorations, and that the proposed method would quantify this deterioration by measuring CI users' impaired ability to make sound quality discriminations among musical stimuli with variable amounts of bass frequency removal. A method commonly used in the audio industry (multiple stimulus with hidden reference and anchor [MUSHRA]) was adapted for CI users, referred to as CI-MUSHRA. CI users and normal-hearing controls were presented with 7 sound quality versions of a musical segment: 5 high-pass filter cutoff versions (200, 400, 600, 800, and 1000 Hz) with decreasing amounts of bass information, an unaltered version ("hidden reference"), and a highly altered version (1000-1200 Hz band-pass filter; "anchor"). Participants provided sound quality ratings between 0 (very poor) and 100 (excellent) for each version; ratings reflected differences in perceived sound quality among stimuli. CI users had greater difficulty making overall sound quality discriminations as a function of bass frequency loss than normal-hearing controls, as demonstrated by a significantly weaker correlation between bass frequency content and sound quality ratings. In particular, CI users could not perceive sound quality differences among stimuli missing up to 400 Hz of bass frequency information. Bass frequency impairments thus contribute to sound quality deterioration during music listening for CI users, and CI-MUSHRA provided a systematic and quantitative assessment of this reduced sound quality. Although the effects of bass frequency removal were studied here, we advocate CI-MUSHRA as a user-friendly and versatile research tool to measure the effects of a wide range of acoustic manipulations on sound quality perception in CI users.
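The stimulus construction lends itself to a short sketch: each version is the same music segment passed through a high-pass filter at one of the five cutoffs, plus the unaltered hidden reference and the band-pass anchor. The Butterworth design and filter order below are assumptions; the paper specifies only the cutoff frequencies.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def make_versions(x, fs):
    """Build CI-MUSHRA-style stimulus versions from one music segment:
    five high-pass cutoffs removing increasing amounts of bass, the
    unaltered hidden reference, and a narrow band-pass anchor."""
    versions = {"hidden_reference": x}
    for fc in (200, 400, 600, 800, 1000):
        sos = butter(4, fc, btype="highpass", fs=fs, output="sos")
        versions[f"hp_{fc}Hz"] = sosfiltfilt(sos, x)
    sos = butter(4, (1000, 1200), btype="bandpass", fs=fs, output="sos")
    versions["anchor"] = sosfiltfilt(sos, x)
    return versions

# Example with white noise standing in for a music clip.
fs = 44100
versions = make_versions(np.random.randn(fs), fs)
print(sorted(versions))
```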
Frey, Johannes Daniel; Wendt, Mike; Löw, Andreas; Möller, Stephan; Zölzer, Udo; Jacobsen, Thomas
2017-02-15
Changes in room acoustics provide important clues about the environment of sound source-perceiver systems, for example, by indicating changes in the reflecting characteristics of surrounding objects. To study the detection of auditory irregularities brought about by a change in room acoustics, a passive oddball protocol with participants watching a movie was applied in this study. Acoustic stimuli were presented via headphones. Standards and deviants were created by modelling rooms of different sizes, keeping the values of the basic acoustic dimensions (e.g., frequency, duration, sound pressure, and sound source location) as constant as possible. In the first experiment, each standard and deviant stimulus consisted of sequences of three short sounds derived from sinusoidal tones, resulting in three onsets during each stimulus. Deviant stimuli elicited a Mismatch Negativity (MMN) as well as two additional negative deflections corresponding to the three onset peaks. In the second experiment, only one sound was used; the stimuli were otherwise identical to the ones used in the first experiment. Again, an MMN was observed, followed by an additional negative deflection. These results provide further support for the hypothesis of automatic detection of unattended changes in room acoustics, extending previous work by demonstrating the elicitation of an MMN by changes in room acoustics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Hughes, Robert W; Vachon, François; Jones, Dylan M
2005-07-01
A novel attentional capture effect is reported in which visual-verbal serial recall was disrupted if a single deviation in the interstimulus interval occurred within otherwise regularly presented task-irrelevant spoken items. The degree of disruption was the same whether the temporal deviant was embedded in a sequence made up of a repeating item or a sequence of changing items. Moreover, the effect was evident during the presentation of the to-be-remembered sequence but not during rehearsal just prior to recall, suggesting that the encoding of sequences is particularly susceptible. The results suggest that attentional capture is due to a violation of an algorithm rather than an aggregate-based neural model and further undermine an attentional capture-based account of the classical changing-state irrelevant sound effect. ((c) 2005 APA, all rights reserved).
Representations of Pitch and Timbre Variation in Human Auditory Cortex
2017-01-01
Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255
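The two stimulus dimensions can be separated in a few lines of synthesis: pitch follows the fundamental frequency of a harmonic complex, while brightness follows the centroid of its spectral envelope. A hedged sketch of that manipulation (the envelope shape and parameter values are illustrative, not the study's stimuli):

```python
import numpy as np

fs = 16000
t = np.arange(int(0.5 * fs)) / fs  # 500 ms tones

def harmonic_complex(f0, centroid, n_harm=20):
    """Harmonics of f0 shaped by a Gaussian spectral envelope centred on
    `centroid`. Raising f0 shifts pitch; moving the envelope shifts the
    spectral centroid (brightness) while pitch stays at f0."""
    freqs = f0 * np.arange(1, n_harm + 1)
    amps = np.exp(-0.5 * ((freqs - centroid) / 500.0) ** 2)
    return sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))

pitch_varies = [harmonic_complex(f0, 1500) for f0 in (200, 220, 250)]
brightness_varies = [harmonic_complex(200, c) for c in (1000, 1500, 2000)]
```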
2011-01-01
Background Common carp is one of the most important aquaculture teleost fish in the world; together with other closely related Cyprinidae species, it provides over 30% of world aquaculture production. However, common carp genomic resources are still relatively underdeveloped. BAC end sequences (BES) are important resources for genome research on BAC-anchored genetic marker development, linkage map and physical map integration, and whole-genome sequence assembly and scaffolding. Results To develop such valuable resources in common carp (Cyprinus carpio), a total of 40,224 BAC clones were sequenced on both ends, generating 65,720 clean BES with an average read length of 647 bp after sequence processing, representing 42,522,168 bp or 2.5% of the common carp genome. A first genome-wide survey of common carp was conducted with various bioinformatics tools. The common carp genome contains over 17.3% repetitive elements, has a GC content of 36.8%, and yielded 518 transposon ORFs. To identify and develop BAC-anchored microsatellite markers, a total of 13,581 microsatellites were detected from 10,355 BES. Coding regions of 7,127 genes were recognized from 9,443 BES on 7,453 BACs, with 1,990 BACs having genes on both ends. To evaluate similarity to the genome of the closely related zebrafish, common carp BES were aligned against the zebrafish genome. A total of 39,335 common carp BES have conserved homologs in the zebrafish genome, demonstrating the high similarity between the two genomes and indicating the feasibility of comparative mapping between zebrafish and common carp once a physical map of common carp is available. Conclusion BAC end sequences are a great resource for this first genome-wide survey of common carp. Repetitive DNA was estimated at approximately 28% of the common carp genome, indicating the genome's relatively high complexity. Comparative analysis mapped around 40,000 BES to the zebrafish genome and established over 3,100 microsyntenies, covering over 50% of the zebrafish genome. The BES of common carp are tremendous tools for comparative mapping between zebrafish and common carp, which should facilitate both structural and functional genome analysis in common carp. PMID:21492448
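The microsatellite screen mentioned above reduces, at its core, to finding short tandem repeats in sequence strings. A toy Python sketch of such a screen (the thresholds and the example sequence are illustrative; production SSR pipelines apply per-motif-length repeat rules):

```python
import re

def find_ssrs(seq, min_repeats=6, max_motif=6):
    """Scan a DNA string for simple sequence repeats (SSRs): motifs of
    1 to max_motif bp tandemly repeated at least min_repeats times."""
    hits = []
    for k in range(1, max_motif + 1):
        pattern = r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1)
        for m in re.finditer(pattern, seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // k))
    return hits  # (position, motif, repeat count)

print(find_ssrs("TTACACACACACACACGGATCGCAGCAGCAGCAGCAGCAGTT"))
# -> [(2, 'AC', 7), (21, 'GCA', 6)]  ('GCA' is a rotation of the CAG repeat)
```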
Soldier Perceptions of the Rapid Decision Trainer
2005-05-01
utility. "* Integrated 3D spatialized sound, supporting SCORM Integration the most common sound formats including Wav and Midi . A major objective of this...34low" and "very low" ratings in a similar manner for the lowest ratings categories. Pre-LFX Questionnaire Overall training value of the RDT. Lieutenants...on school computers and has issues are similar to ActiveX, however applets issued the RDT on CD-ROM to each IOBC student installed on the client
Schlieren imaging of loud sounds and weak shock waves in air near the limit of visibility
NASA Astrophysics Data System (ADS)
Hargather, Michael John; Settles, Gary S.; Madalis, Matthew J.
2010-02-01
A large schlieren system with exceptional sensitivity and a high-speed digital camera are used to visualize loud sounds and a variety of common phenomena that produce weak shock waves in the atmosphere. Frame rates varied from 10,000 to 30,000 frames/s with microsecond frame exposures. Sound waves become visible to this instrumentation at frequencies above 10 kHz and sound pressure levels in the 110 dB (6.3 Pa) range and above. The density gradient produced by a weak shock wave is examined and found to depend upon the profile and thickness of the shock as well as the density difference across it. Schlieren visualizations of weak shock waves from common phenomena include loud trumpet notes, various impact phenomena that compress a bubble of air, bursting a toy balloon, popping a champagne cork, snapping a wooden stick, and snapping a wet towel. The balloon burst, snapping a ruler on a table, and snapping the towel and a leather belt all produced readily visible shock-wave phenomena. In contrast, clapping the hands, snapping the stick, and the champagne cork all produced wave trains that were near the weak limit of visibility. Overall, with sensitive optics and a modern high-speed camera, many nonlinear acoustic phenomena in the air can be observed and studied.
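The quoted sound pressure is just the standard dB-to-pascal conversion for airborne sound, p = p_ref * 10^(SPL/20) with p_ref = 20 µPa, which is easy to verify:

```python
def spl_to_pascals(spl_db, p_ref=20e-6):
    """RMS pressure (Pa) for a sound pressure level in dB re 20 uPa."""
    return p_ref * 10 ** (spl_db / 20)

print(round(spl_to_pascals(110), 2))  # 6.32 Pa, matching the ~6.3 Pa quoted above
```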
NASA Astrophysics Data System (ADS)
Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.
2015-01-01
This work addresses the problem of lung sound classification, in particular, the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually related to tuberculosis or chronic obstructive pulmonary disease (COPD). The paper presents an approach for automatic lung sound classification which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that performs better on unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content of the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform (WPT) bank of filters, and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used, which, in each fold, chooses as the validation set a couple of cases, one including normal sounds and the other including wheezing sounds. Experimental results are reported in terms of traditional classification performance measures: sensitivity, specificity, and balanced accuracy. Our best results with the suggested approach, a C-weighted SVM with MFCC features, achieve 82.1% balanced accuracy, the best result reported for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem, even using the same feature extraction methods.
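A minimal sketch of the paper's best-performing configuration, MFCC features feeding a class-weighted SVM, is shown below. The file names, the MFCC pooling, and the exact class weights are assumptions; only the feature type and the weighted-SVM idea come from the abstract.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Summarise a lung-sound recording by the mean and standard deviation
    of its MFCCs. Pooling frame-level coefficients this way is an
    assumption; the paper does not specify the aggregation."""
    y, sr = librosa.load(path, sr=None)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.hstack([m.mean(axis=1), m.std(axis=1)])

# Hypothetical file lists; label 1 = wheeze, 0 = normal.
wheeze_files = ["wheeze_01.wav", "wheeze_02.wav"]
normal_files = ["normal_01.wav", "normal_02.wav"]
X = np.vstack([mfcc_features(p) for p in wheeze_files + normal_files])
y = np.array([1] * len(wheeze_files) + [0] * len(normal_files))

# The "C-weighted" SVM of the paper penalises errors on the rare class more
# heavily; scikit-learn's class_weight="balanced" plays that role here.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
```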
An open access database for the evaluation of heart sound algorithms.
Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D
2016-12-01
In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and reported to have potential value for accurate detection of pathology in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected in a variety of clinical and nonclinical (such as in-home visit) environments and with a variety of equipment; recording length varied from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state, and length), associated synchronously recorded signals, sampling frequency, and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand-corrected annotations for different heart sound states, the scoring mechanism, and associated open source code, is provided. In addition, several potential benefits from the public heart sound database are discussed.
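As a concrete point of entry for working with such recordings, the sketch below computes an average Shannon energy envelope, a preprocessing step common in the heart sound segmentation literature; it is not one of the Challenge's benchmarked algorithms. It assumes a PCG waveform sampled well above 800 Hz so the band-pass stage is valid.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def shannon_energy_envelope(pcg, fs, frame_ms=20):
    """Average Shannon energy envelope, a common first step in PCG
    segmentation: it emphasises the mid-intensity S1/S2 lobes over both
    faint noise and sharp spikes. Band edges and frame length are
    conventional choices, not the Challenge algorithms themselves."""
    sos = butter(4, (25, 400), btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, pcg)
    x = x / (np.max(np.abs(x)) + 1e-12)        # normalise to [-1, 1]
    se = -x ** 2 * np.log(x ** 2 + 1e-12)      # Shannon energy per sample
    frame = int(fs * frame_ms / 1000)
    env = np.convolve(se, np.ones(frame) / frame, mode="same")
    return (env - env.mean()) / (env.std() + 1e-12)
```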
Detecting regular sound changes in linguistics as events of concerted evolution.
Hruschka, Daniel J; Branford, Simon; Smith, Eric D; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy
2015-01-05
Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
McCollum, Eric D; Park, Daniel E; Watson, Nora L; Buck, W Chris; Bunthi, Charatdao; Devendra, Akash; Ebruke, Bernard E; Elhilali, Mounya; Emmanouilidou, Dimitra; Garcia-Prats, Anthony J; Githinji, Leah; Hossain, Lokman; Madhi, Shabir A; Moore, David P; Mulindwa, Justin; Olson, Dan; Awori, Juliet O; Vandepitte, Warunee P; Verwey, Charl; West, James E; Knoll, Maria D; O'Brien, Katherine L; Feikin, Daniel R; Hammit, Laura L
2017-01-01
Paediatric lung sound recordings can be systematically assessed, but methodological feasibility and validity are unknown, especially in developing countries. We examined the performance of acoustically interpreting recorded paediatric lung sounds and compared sound characteristics between cases and controls. Pneumonia Etiology Research for Child Health staff in six African and Asian sites recorded lung sounds with a digital stethoscope in cases and controls. Cases aged 1-59 months had WHO severe or very severe pneumonia; age-matched community controls did not. A listening panel assigned examination results of normal, crackle, wheeze, crackle and wheeze, or uninterpretable, with adjudication of discordant interpretations. Classifications were recategorised into any crackle, any wheeze or abnormal (any crackle or wheeze), and primary listener agreement (first two listeners) was analysed among interpretable examinations using the prevalence-adjusted, bias-adjusted kappa (PABAK). We examined predictors of disagreement with logistic regression and compared case and control lung sounds with descriptive statistics. Primary listeners considered 89.5% of 792 case and 92.4% of 301 control recordings interpretable. Among interpretable recordings, listeners agreed on the presence or absence of any abnormality in 74.9% (PABAK 0.50) of cases and 69.8% (PABAK 0.40) of controls, presence/absence of crackles in 70.6% (PABAK 0.41) of cases and 82.4% (PABAK 0.65) of controls, and presence/absence of wheeze in 72.6% (PABAK 0.45) of cases and 73.8% (PABAK 0.48) of controls. Controls, tachypnoea, >3 uninterpretable chest positions, crying, upper airway noises, and study site predicted listener disagreement. Among all interpretable examinations, 38.0% of cases and 84.9% of controls were normal (p<0.0001); wheezing was the most common sound (49.9%) in cases. The listening panel and case-control data suggest that our methodology is feasible and likely valid, and that small airway inflammation is common in WHO pneumonia. Digital auscultation may be an important future pneumonia diagnostic in developing countries.
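The agreement statistic used here has a simple closed form, PABAK = 2*Po - 1, where Po is the observed proportion of agreement, which makes the quoted values easy to reproduce:

```python
def pabak(observed_agreement):
    """Prevalence-adjusted, bias-adjusted kappa: PABAK = 2 * Po - 1,
    where Po is the proportion of examinations the two primary
    listeners agreed on."""
    return 2 * observed_agreement - 1

# Reproduces the case-recording values quoted above:
print(round(pabak(0.749), 2))  # 0.50  any abnormality
print(round(pabak(0.706), 2))  # 0.41  crackles
print(round(pabak(0.726), 2))  # 0.45  wheeze
```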
How does experience modulate auditory spatial processing in individuals with blindness?
Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C
2015-05-01
Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, the accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among the early-onset counterpart. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.
NASA Astrophysics Data System (ADS)
O'Donnell, Michael J.; Bisnovatyi, Ilia
2000-11-01
Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for a characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer scientists with that of numerical mathematicians studying sonification, psychologists, linguists, bioacousticians, and musicians to illuminate the structure of sound from different angles. Each of these disciplines deals with the use of sound to carry a different sort of information, under different requirements and constraints. By combining their insights, we can learn to understand the structure of sound in general.
Analysis of different techniques to improve sound transmission loss in cylindrical shells
NASA Astrophysics Data System (ADS)
Oliazadeh, Pouria; Farshidianfar, Anooshiravan
2017-02-01
In this study, sound transmission through double- and triple-walled shells is investigated. The structure-acoustic equations based on Donnell's shell theory are presented and transmission losses calculated by this approach are compared with the transmission losses obtained according to Love's theory. An experimental set-up is also constructed to compare natural frequencies obtained from Donnell and Love's theories with experimental results in the high frequency region. Both comparisons show that Donnell's theory predicts the sound transmission characteristics and vibrational behavior better than Love's theory in the high frequency region. The transmission losses of the double- and triple-walled construction are then presented for various radii and thicknesses. Then the effects of air gap size as an important design parameter are studied. Sound transmission characteristics through a circular cylindrical shell are also computed along with consideration of the effects of material damping. Modest absorption is shown to greatly reduce the sound transmission at ring frequency and coincidence frequency. Also the effects of five common gases that are used for filling the gap are investigated.
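For orientation, the simplest transmission loss estimate for any single partition is the mass law; a hedged baseline sketch is below. It is emphatically not the Donnell or Love shell model the paper develops, since it omits the ring-frequency and coincidence effects that motivate the study.

```python
import math

def mass_law_tl(surface_density, freq):
    """Field-incidence mass law for a single flat panel:
    TL ~ 20*log10(m * f) - 47 dB (m in kg/m^2, f in Hz). A baseline
    only: it shows the 6 dB/octave trend but none of the ring-frequency
    or coincidence dips that shell-theory models predict."""
    return 20 * math.log10(surface_density * freq) - 47

for f in (250, 500, 1000, 2000):
    print(f, "Hz:", round(mass_law_tl(7.8, f), 1), "dB")  # ~1 mm steel panel
```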
Dual-Pitch Processing Mechanisms in Primate Auditory Cortex
Bendor, Daniel; Osmanski, Michael S.
2012-01-01
Pitch, our perception of how high or low a sound is on a musical scale, is a fundamental perceptual attribute of sounds and is important for both music and speech. After more than a century of research, the exact mechanisms used by the auditory system to extract pitch are still being debated. Theoretically, pitch can be computed using either spectral or temporal acoustic features of a sound. We have investigated how cues derived from the temporal envelope and spectrum of an acoustic signal are used for pitch extraction in the common marmoset (Callithrix jacchus), a vocal primate species, by measuring pitch discrimination behaviorally and examining pitch-selective neuronal responses in auditory cortex. We find that pitch is extracted by marmosets using temporal envelope cues for lower pitch sounds composed of higher-order harmonics, whereas spectral cues are used for higher pitch sounds with lower-order harmonics. Our data support dual-pitch processing mechanisms, originally proposed by psychophysicists based on human studies, whereby pitch is extracted using a combination of temporal envelope and spectral cues. PMID:23152599
Detection and Classification of Motor Vehicle Noise in a Forested Landscape
NASA Astrophysics Data System (ADS)
Brown, Casey L.; Reed, Sarah E.; Dietz, Matthew S.; Fristrup, Kurt M.
2013-11-01
Noise emanating from human activity has become a common addition to natural soundscapes and has the potential to harm wildlife and erode human enjoyment of nature. In particular, motor vehicles traveling along roads and trails produce high levels of both chronic and intermittent noise, eliciting varied responses from a wide range of animal species. Anthropogenic noise is especially conspicuous in natural areas where ambient background sound levels are low. In this article, we present an acoustic method to detect and analyze motor vehicle noise. Our approach uses inexpensive consumer products to record sound, sound analysis software to automatically detect sound events within continuous recordings and measure their acoustic properties, and statistical classification methods to categorize sound events. We describe an application of this approach to detect motor vehicle noise on paved, gravel, and natural-surface roads, and off-road vehicle trails in 36 sites distributed throughout a national forest in the Sierra Nevada, CA, USA. These low-cost, unobtrusive methods can be used by scientists and managers to detect anthropogenic noise events for many potential applications, including ecological research, transportation and recreation planning, and natural resource management.
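A minimal sketch of the detection stage described above: frame the continuous recording, estimate a background level, and flag frames that rise well above it. The median-plus-offset threshold is an assumption; the study used sound analysis software followed by statistical classification of the detected events.

```python
import numpy as np

def detect_events(x, fs, frame_s=1.0, threshold_db=10.0):
    """Flag frames whose RMS level rises threshold_db above the median
    background level; runs of flagged frames become candidate vehicle
    pass-by events."""
    frame = int(fs * frame_s)
    n = len(x) // frame
    frames = np.reshape(x[: n * frame], (n, frame))
    levels = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    flags = levels > np.median(levels) + threshold_db
    events, start = [], None
    for i, flagged in enumerate(flags):
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            events.append((start * frame_s, i * frame_s))
            start = None
    if start is not None:
        events.append((start * frame_s, n * frame_s))
    return events  # list of (onset_s, offset_s) pairs
```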
In situ analysis of measurements of auroral dynamics and structure
NASA Astrophysics Data System (ADS)
Mella, Meghan R.
Two auroral sounding rocket case studies, one in the dayside and one in the nightside, explore aspects of poleward boundary aurora. The nightside sounding rocket, Cascades-2, was launched on 20 March 2009 at 11:04:00 UT from the Poker Flat Research Range in Alaska and flew across a series of poleward boundary intensifications (PBIs). Each of the crossings has fundamentally different in situ electron energy and pitch angle structure, and different ground optics images of visible aurora. The different particle distributions show signatures of both a quasistatic acceleration mechanism and an Alfvenic acceleration mechanism, as well as combinations of both. The Cascades-2 experiment is the first sounding rocket observation of a PBI sequence, enabling a detailed investigation of the electron signatures and optical aurora associated with various stages of a PBI sequence as it evolves from an Alfvenic to a more quasistatic structure. The dayside sounding rocket, Scifer-2, was launched on 18 January 2008 at 7:30 UT from the Andoya Rocket Range in Andenes, Norway. It flew northward through the cleft region during a Poleward Moving Auroral Form (PMAF) event. Both the dayside and nightside flights observed dispersed, precipitating ions, each of a different nature. The dispersion signatures depend on, among other things, the MLT sector, altitude, source region, and precipitation mechanism. It is found that small changes in the shape of the dispersion have a large influence on whether the precipitation was localized or extended over a range of altitudes. It is also found that a single Maxwellian source will not replicate the data; rather, a sum of Maxwellians of different temperatures, similar to a Kappa distribution, most closely reproduces the data. The various particle signatures are used to argue that both events have similar magnetospheric drivers, that is, Bursty Bulk Flows in the magnetotail.
Multi-Hamiltonian structure of equations of hydrodynamic type
NASA Astrophysics Data System (ADS)
Gümral, H.; Nutku, Y.
1990-11-01
The discussion of the Hamiltonian structure of two-component equations of hydrodynamic type is completed by presenting the Hamiltonian operators for Euler's equation governing the motion of plane sound waves of finite amplitude and another quasilinear second-order wave equation. There exists a doubly infinite family of conserved Hamiltonians for the equations of gas dynamics that degenerate into one, namely, the Benney sequence, for shallow-water waves. Infinite sequences of conserved quantities for these equations are also presented. In the case of multicomponent equations of hydrodynamic type, it is shown, that Kodama's generalization of the shallow-water equations admits bi-Hamiltonian structure.
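For readers new to the notation, systems of hydrodynamic type and the local first-order Hamiltonian operators in question have a standard (Dubrovin-Novikov) form, sketched below in general terms rather than as the paper's specific operators.

```latex
% A two-component system of hydrodynamic type and a local first-order
% Hamiltonian (Dubrovin-Novikov) operator; the paper constructs such
% structures for gas dynamics, shallow-water waves, and plane sound waves.
u^i_t = v^i_j(u)\, u^j_x, \qquad i, j = 1, 2,
\qquad
u^i_t = A^{ij}\,\frac{\delta H}{\delta u^j},
\qquad
A^{ij} = g^{ij}(u)\,\frac{d}{dx} + b^{ij}_k(u)\, u^k_x .
```

A bi-Hamiltonian system admits two compatible operators of this form generating the same flow from different Hamiltonians, and the resulting recursion is what produces infinite sequences of conserved quantities like those mentioned above.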
ERIC Educational Resources Information Center
Leiva, Alicia; Andrés, Pilar; Servera, Mateu; Verbruggen, Frederick; Parmentier, Fabrice B. R.
2016-01-01
Sounds deviating from an otherwise repeated or structured sequence capture attention and affect performance in an ongoing visual task negatively, testament to the balance between selective attention and change detection. Although deviance distraction has been the object of much research, its modulation across the life span has been more scarcely…
ERIC Educational Resources Information Center
Heath, Steve M.; Hogben, John H.
2004-01-01
Background: Claims that children with reading and oral language deficits have impaired perception of sequential sounds are usually based on psychophysical measures of auditory temporal processing (ATP) designed to characterise group performance. If we are to use these measures (e.g., the Tallal, 1980, Repetition Test) as the basis for intervention…
USDA-ARS?s Scientific Manuscript database
Genotyping-by-sequencing allows for large-scale genetic analyses in plant species with no reference genome, creating the challenge of sound inference in the presence of uncertain genotypes. Here we report an imputation-based genome-wide association study (GWAS) in reed canarygrass (Phalaris arundina...
Nonsymbolic, Approximate Arithmetic in Children: Abstract Addition Prior to Instruction
ERIC Educational Resources Information Center
Barth, Hilary; Beckmann, Lacey; Spelke, Elizabeth S.
2008-01-01
Do children draw upon abstract representations of number when they perform approximate arithmetic operations? In this study, kindergarten children viewed animations suggesting addition of a sequence of sounds to an array of dots, and they compared the sum to a second dot array that differed from the sum by 1 of 3 ratios. Children performed this…
Enhanced Passive and Active Processing of Syllables in Musician Children
ERIC Educational Resources Information Center
Chobert, Julie; Marie, Celine; Francois, Clement; Schon, Daniele; Besson, Mireille
2011-01-01
The aim of this study was to examine the influence of musical expertise in 9-year-old children on passive (as reflected by MMN) and active (as reflected by discrimination accuracy) processing of speech sounds. Musician and nonmusician children were presented with a sequence of syllables that included standards and deviants in vowel frequency,…
Congenital Amusia: A Short-Term Memory Deficit for Non-Verbal, but Not Verbal Sounds
ERIC Educational Resources Information Center
Tillmann, Barbara; Schulze, Katrin; Foxton, Jessica M.
2009-01-01
Congenital amusia refers to a lifelong disorder of music processing and is linked to pitch-processing deficits. The present study investigated congenital amusics' short-term memory for tones, musical timbres and words. Sequences of five events (tones, timbres or words) were presented in pairs and participants had to indicate whether the sequences…
Preschoolers Use Partial Letter Names to Select Spellings: Evidence from Portuguese
ERIC Educational Resources Information Center
Pollo, Tatiana Cury; Treiman, Rebecca; Kessler, Brett
2008-01-01
Two studies examined children's use of letter-name spelling strategies when target phoneme sequences match letter names with different degrees of precision. We examined Portuguese-speaking preschoolers' use of "h" (which is named /a'ga/ but which never represents those sounds) when spelling words beginning with /ga/ or variants of /ga/. We also…
Auditory short-term memory in the primate auditory cortex
Scott, Brian H.; Mishkin, Mortimer
2015-01-01
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ‘working memory’ bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ‘match’ stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. PMID:26541581
A description of externally recorded womb sounds in human subjects during gestation.
Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M
2018-01-01
Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting, and lying supine. Maternal and gestational age, body mass index (BMI), and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra-abdominal space. Compared to commercially-marketed sounds, womb signals were dominated by bowel sounds, were of lower frequency, and showed more variation in intensity. High-fidelity intra-abdominal or womb sounds during pregnancy can be recorded non-invasively. Recordings vary with gestational age, and show a predominance of low frequency noise and bowel sounds which are distinct from popular commercial products. Such recordings may be utilized to determine whether sounds influence preterm infant development in the NICU.
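The band summary reported above falls out of a long-term average spectrum. A minimal sketch using Welch's method (the window length is an illustrative choice, and it assumes recordings sampled at 10 kHz or more so the 500-5,000 Hz band is resolved):

```python
import numpy as np
from scipy.signal import welch

def band_levels(x, fs):
    """Long-term average spectrum (Welch) summarised as power (dB) in
    the three bands reported above."""
    f, pxx = welch(x, fs=fs, nperseg=4096)
    levels = {}
    for name, (lo, hi) in {"low 10-100 Hz": (10, 100),
                           "mid 100-500 Hz": (100, 500),
                           "high 500-5000 Hz": (500, 5000)}.items():
        m = (f >= lo) & (f < hi)
        levels[name] = 10 * np.log10(np.trapz(pxx[m], f[m]) + 1e-20)
    return levels
```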
NASA Astrophysics Data System (ADS)
Pekar, S. F.; Hauptvogel, D.; Florindo, F.
2012-12-01
Litho- and sequence stratigraphic results from the ANDRILL Southern McMurdo Sound AND-2A Project indicate large variations in glacial conditions in the western Ross Sea between the two isotopic Mi events (i.e., inferred glacioeustasy), Mi1b (17.8 Ma) and Mi2 (16.1 Ma). Most of this interval had not previously been recovered from the Antarctic continental margin, providing the first opportunity to develop direct evidence on the evolution of the ice sheet during this time. During the 2007 austral spring/summer, the ANtarctic Geological DRILLing Program (ANDRILL) Southern McMurdo Sound (SMS) AND-2A drill hole cored 1138 meters of sediments, with ~98% recovery. The interval between 780 and 390 mbsf has high sedimentation rates (133-477 m/m.y.) and excellent age control, based on radiometric ages and magnetostratigraphy, providing an exceptional record of glacial advances and retreats deposited in a shallow water environment in Antarctica between 18 and 16 Ma. Approximately 34 sequences were identified, which contain bounding surfaces characterized by a pronounced shift in lithofacies, with typically more ice-distal facies below and more proximal facies above. Lithofacies and grain size analysis suggest that these cycles are controlled by a combination of ice proximity and water depth. The timing of the sequence boundaries in the upper 300 meters is controlled by the obliquity cycle, with sequences in the lower 100 meters controlled by the precessional and eccentricity cycles. A surface at 774.94 mbsf contains a hiatus spanning 17.8-18.7 Ma, which encompasses the isotopic events Mi1b (17.8 Ma) and Mi1ab (18.3 Ma). This surface separates a prolonged interval of glacial advance over this site, indicated by lithofacies and sediment deformation above, from more ice-distal environments below. A sharp surface at 398.25 mbsf (~16.2±0.2 Ma), interpreted to represent glacial advance to near or perhaps over the site, contains a possible short hiatus and is correlated with the Mi2 event. In contrast, between 400 and 645 mbsf, little evidence exists for subglacial grounding over the site, with sequence boundary formation generally controlled by local sea-level changes and glacial processes subdominant. This interval correlates with the early Miocene Climatic Optimum (17.3-16.3 Ma).
Code of Federal Regulations, 2010-2014 CFR
2010-10-01 through 2014-10-01
... images, icons, labels, sounds, or incidental operating cues, comply with each of the following, assessed... premise equipment which is commonly used by individuals with disabilities to achieve access. (j) The term...
Silberzahn, Raphael; Uhlmann, Eric Luis
2013-12-01
In the field study reported here (N = 222,924), we found that Germans with noble-sounding surnames, such as Kaiser ("emperor"), König ("king"), and Fürst ("prince"), more frequently hold managerial positions than Germans with last names that either refer to common everyday occupations, such as Koch ("cook"), Bauer ("farmer"), and Becker/Bäcker ("baker"), or do not refer to any social role. This phenomenon occurs despite the fact that noble-sounding surnames never indicated that the person actually held a noble title. Because of basic properties of associative cognition, the status linked to a name may spill over to its bearer and influence his or her occupational outcomes.
[Tinnitus and psychiatric comorbidities].
Goebel, G
2015-04-01
Tinnitus is an auditory phantom phenomenon characterized by the sensation of sounds without objectively identifiable sound sources. To date, its causes are not well understood. The perceived severity of tinnitus correlates more closely to psychological and general health factors than to audiometric parameters. Together with limbic structures in the ventral striatum, the prefrontal cortex forms an internal "noise cancelling system", which normally helps to block out unpleasant sounds, including the tinnitus signal. If this pathway is compromised, chronic tinnitus results. Patients with chronic tinnitus show increased functional connectivity in corticolimbic pathways. Psychiatric comorbidities are common in patients who seek help for tinnitus or hyperacusis. Clinicians need valid screening tools in order to identify patients with psychiatric disorders and to tailor treatment in a multidisciplinary setting.
Lung sound intensity in patients with emphysema and in normal subjects at standardised airflows.
Schreur, H J; Sterk, P J; Vanderschoot, J; van Klink, H C; van Vollenhoven, E; Dijkman, J H
1992-01-01
BACKGROUND: A common auscultatory finding in pulmonary emphysema is a reduction of lung sounds. This might be due to a reduction in the generation of sounds due to the accompanying airflow limitation or to poor transmission of sounds due to destruction of parenchyma. Lung sound intensity was investigated in normal and emphysematous subjects in relation to airflow. METHODS: Eight normal men (45-63 years, FEV1 79-126% predicted) and nine men with severe emphysema (50-70 years, FEV1 14-63% predicted) participated in the study. Emphysema was diagnosed according to pulmonary history, results of lung function tests, and radiographic criteria. All subjects underwent phonopneumography during standardised breathing manoeuvres between 0.5 and 2 l below total lung capacity, with inspiratory and expiratory target airflows of 2 and 1 l/s respectively, over 50 seconds. The synchronous measurements included airflow at the mouth and lung volume changes, and lung sounds at four locations on the right chest wall. For each microphone, airflow-dependent power spectra were computed using fast Fourier transformation. Lung sound intensity was expressed as log power (in dB) at 200 Hz at inspiratory flow rates of 1 and 2 l/s and at an expiratory flow rate of 1 l/s. RESULTS: Lung sound intensity was well repeatable on two separate days, the intraclass correlation coefficient ranging from 0.77 to 0.94 between the four microphones. The intensity was strongly influenced by microphone location and airflow. There was, however, no significant difference in lung sound intensity at any flow rate between the normal and the emphysema group. CONCLUSION: Airflow standardised lung sound intensity does not differ between normal and emphysematous subjects. This suggests that the auscultatory finding of diminished breath sounds during the regular physical examination in patients with emphysema is due predominantly to airflow limitation. PMID:1440459
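As a rough illustration of the intensity measure used here, the sketch below computes log power (in dB) in the spectral bin nearest 200 Hz for a flow-gated sound segment; the sampling rate, window settings, and placeholder data are assumptions, not the study's parameters.

```python
# Minimal sketch: log power (dB) at 200 Hz for flow-gated lung sound segments.
# `segments` would hold sound snippets captured at the target airflow (1 or 2 l/s).
import numpy as np
from scipy.signal import welch

def log_power_at_200hz(segment, fs):
    f, pxx = welch(segment, fs=fs, nperseg=1024)
    idx = np.argmin(np.abs(f - 200.0))       # spectral bin nearest 200 Hz
    return 10 * np.log10(pxx[idx])

fs = 5000                                     # assumed sampling rate, Hz
segments = [np.random.randn(fs)]              # placeholder for real flow-gated data
print([round(log_power_at_200hz(s, fs), 1) for s in segments])
```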
Beetz, M Jerome; Hechavarría, Julio C; Kössl, Manfred
2016-06-30
Precise temporal coding is necessary for proper acoustic analysis. However, at the cortical level, forward suppression appears to limit the ability of neurons to extract temporal information from natural sound sequences. Here we studied how temporal processing can be maintained in the bats' cortex in the presence of suppression evoked by natural echolocation streams that are relevant to the bats' behavior. We show that cortical neurons tuned to target distance actually profit from forward suppression induced by natural echolocation sequences. These neurons can more precisely extract target distance information when they are stimulated with natural echolocation sequences than during stimulation with isolated call-echo pairs. We conclude that forward suppression does for time-domain tuning what lateral inhibition does for selectivity forms such as auditory frequency tuning and visual orientation tuning. When talking about cortical processing, suppression should be seen as a mechanistic tool rather than a limiting element.
Investigation of the sound generation mechanisms for in-duct orifice plates.
Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning
2017-08-01
Sound generation due to an orifice plate in a hard-walled flow duct, a configuration commonly used in air distribution systems (ADS) and flow meters, is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate, from which the surface pressure distribution and correlation length are deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plates are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law, and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.
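Scaling exponents like U^3.9 are typically estimated by a least-squares fit on log-log axes. The sketch below shows that fitting step on made-up data; it illustrates the procedure only, not the paper's measurement chain.

```python
# Sketch: estimating a velocity scaling exponent n (p_rms ~ U^n) by a
# log-log least-squares fit, the usual way exponents like 3.9 are obtained.
import numpy as np

U = np.array([10.0, 15.0, 20.0, 30.0])        # duct velocities, m/s (illustrative)
p_rms = np.array([0.8, 3.9, 11.9, 58.3])      # surface pressures (made-up data)

n, logk = np.polyfit(np.log(U), np.log(p_rms), 1)
print(f"fitted exponent n = {n:.2f}")         # ~3.9 for data following U^3.9
```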
Sound Symbolism in the Languages of Australia
Haynie, Hannah; Bowern, Claire; LaPalombara, Hannah
2014-01-01
The notion that linguistic forms and meanings are related only by convention and not by any direct relationship between sounds and semantic concepts is a foundational principle of modern linguistics. Though the principle generally holds across the lexicon, systematic exceptions have been identified. These “sound symbolic” forms have been identified in lexical items and linguistic processes in many individual languages. This paper examines sound symbolism in the languages of Australia. We conduct a statistical investigation of the evidence for several common patterns of sound symbolism, using data from a sample of 120 languages. The patterns examined here include the association of meanings denoting “smallness” or “nearness” with front vowels or palatal consonants, and the association of meanings denoting “largeness” or “distance” with back vowels or velar consonants. Our results provide evidence for the expected associations of vowels and consonants with meanings of “smallness” and “proximity” in Australian languages. However, the patterns uncovered in this region are more complicated than predicted. Several sound-meaning relationships are only significant for segments in prominent positions in the word, and the prevailing mapping between vowel quality and magnitude meaning cannot be characterized by a simple link between gradients of magnitude and vowel F2, contrary to the claims of previous studies. PMID:24752356
Using listening difficulty ratings of conditions for speech communication in rooms
NASA Astrophysics Data System (ADS)
Sato, Hiroshi; Bradley, John S.; Morimoto, Masayuki
2005-03-01
The use of listening difficulty ratings of speech communication in rooms is explored because, in common situations, word recognition scores do not discriminate well among conditions that are near to acceptable. In particular, the benefits of early reflections of speech sounds on listening difficulty were investigated and compared to the known benefits to word intelligibility scores. Listening tests were used to assess word intelligibility and perceived listening difficulty of speech in simulated sound fields. The experiments were conducted in three types of sound fields with constant levels of ambient noise: only direct sound, direct sound with early reflections, and direct sound with early reflections and reverberation. The results demonstrate that (1) listening difficulty can better discriminate among these conditions than can word recognition scores; (2) added early reflections increase the effective signal-to-noise ratio equivalent to the added energy in the conditions without reverberation; (3) the benefit of early reflections on difficulty scores is greater than expected from the simple increase in early arriving speech energy with reverberation; and (4) word intelligibility tests are most appropriate for conditions with signal-to-noise (S/N) ratios less than 0 dBA, while where S/N is between 0 and 15 dBA, listening difficulty is a more appropriate evaluation tool.
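Finding (2) suggests a simple calculation: the gain in effective signal-to-noise ratio equals the ratio of direct-plus-early speech energy to direct energy alone. A hedged sketch, assuming a 50 ms early window and a toy impulse response:

```python
# Sketch: how added early reflections raise the effective signal-to-noise
# ratio, computed from a room impulse response (50 ms early window assumed).
import numpy as np

def effective_snr_gain_db(ir, fs, early_ms=50):
    n_early = int(fs * early_ms / 1000)
    e_direct = np.sum(ir[:32] ** 2)           # direct-sound energy (assumed window)
    e_early = np.sum(ir[:n_early] ** 2)       # direct + early reflection energy
    return 10 * np.log10(e_early / e_direct)

fs = 16000
ir = np.zeros(fs // 2)
ir[0], ir[300], ir[500] = 1.0, 0.7, 0.5       # toy IR: direct + two early reflections
print(f"S/N gain from early reflections: {effective_snr_gain_db(ir, fs):.1f} dB")
```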
Abbas, Ali
2012-06-01
Accurate diagnosis of lung disease depends on understanding the sounds emanating from the lungs and their locations. Lung sounds are significant because they supply precise and important information on the health of the respiratory system. In addition, correct interpretation of breath sounds depends on a systematic approach to auscultation; it also requires the ability to describe the location of an abnormal finding in relation to bony structures and anatomic landmark lines. The lungs consist of a number of lobes; each lobe is further subdivided into smaller segments, which are attached to each other. Knowledge of the position of the lung segments is useful and important during auscultation and diagnosis of lung diseases. Physicians usually describe the location of an infection with reference to a segmental position. Breath sounds are auscultated over the anterior chest wall surface, the lateral chest wall surfaces, and the posterior chest wall surface, and adventitious sounds from different locations can be detected. It is common to seek confirmation of a detected sound and its location using invasive and potentially harmful imaging techniques such as x-rays. To overcome this limitation, a technique is developed in this research for fast, reliable, accurate, and inexpensive identification of the location of infection through a computerized auscultation system.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
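The "smooth path of maximum probability" step resembles a Viterbi-style dynamic program. The following toy sketch illustrates that general idea over discretized map positions; the observation densities and smoothness penalty are stand-ins and do not reproduce the patented method.

```python
# Toy sketch of a maximum-probability smooth path: a Viterbi-style dynamic
# program over discretized continuity-map positions (stand-in densities).
import numpy as np

def smooth_path(log_obs, log_trans):
    # log_obs[t, s]: log P(code_t | map position s); log_trans[s, s2]: smoothness
    T, S = log_obs.shape
    score = log_obs[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # all previous-to-current moves
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(S)] + log_obs[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                  # trace the best path backwards
        path.append(int(back[t][path[-1]]))
    return path[::-1]

S = 5
log_trans = -np.abs(np.subtract.outer(np.arange(S), np.arange(S))).astype(float)
log_obs = np.log(np.random.rand(8, S))             # placeholder observation scores
print(smooth_path(log_obs, log_trans))
```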
Developing a Reference of Normal Lung Sounds in Healthy Peruvian Children
Ellington, Laura E.; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H.; Tielsch, James M.; Chavez, Miguel A.; Marin-Concha, Julio; Figueroa, Dante; West, James
2018-01-01
Purpose Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings, where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. Methods 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Results Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; and, 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences with genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Conclusions Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments. PMID:24943262
Developing a reference of normal lung sounds in healthy Peruvian children.
Ellington, Laura E; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H; Tielsch, James M; Chavez, Miguel A; Marin-Concha, Julio; Figueroa, Dante; West, James; Checkley, William
2014-10-01
Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings, where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively; and, 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences with genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments.
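Several of the features named above, such as the lower-order Mel-frequency cepstral coefficients, are straightforward to extract with standard tools. A minimal sketch, assuming librosa is available and using a hypothetical recording name:

```python
# Sketch of one feature family named above (MFCCs); librosa is assumed
# installed, and the recording name and coefficient count are illustrative.
import numpy as np
import librosa

y, sr = librosa.load("lung_site_3.wav", sr=None)   # hypothetical recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print("lower-order MFCC means:", np.round(mfcc[:4].mean(axis=1), 2))
```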
Lewis, Ralph S.; DiGiacomo-Cohen, Mary
2000-01-01
Most of the papers in this thematic section present regional perspectives that build on more than 100 years of geologic investigation in Long Island Sound. When viewed collectively, a common theme emerges in these works. The major geologic components of the Long Island Sound basin (bedrock, buried coastal-plain strata, recessional moraines, glacial-lake deposits, and the remains of a large marine delta) interact with the water body to affect the way the modern sedimentary system functions. Previous work, along with our present knowledge of the geologic framework of the Long Island Sound basin, is comprehensively reviewed with this theme in mind. Aspects of the crystalline bedrock, and the deltaic deposits associated with glacial Lake Connecticut, are examined with respect to their influence on sedimentation along the Connecticut coast and in the northern and western Sound. We also discuss the influence of the glacial drift that mantles the coastal-plain remnant along the north shore of Long Island and in the southern Sound. A total of approximately 22.7 billion m³ of marine sediment has accumulated in the Long Island Sound basin. A significant portion (44%) of the fine-grained marine section in the central and western basins was redistributed there from the eastern Sound, as tidal scour removed slightly over 5 billion m³ (5.3 × 10^12 kg) of fine material from glacial lake and early-marine deposits east of the Connecticut River. The remainder of the estimated 1.2 × 10^13 kg of fine-grained marine sediment that now resides in the central and western Sound can be accounted for by riverine input over the past 13.5 ka.
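The quoted budget can be checked with back-of-envelope arithmetic: removing the ~5.3 × 10^12 kg eastern contribution from the 1.2 × 10^13 kg total leaves ~6.7 × 10^12 kg to be supplied by rivers over 13.5 ka, roughly 5 × 10^8 kg per year:

```python
# Back-of-envelope check of the sediment budget quoted above.
total_fine = 1.2e13   # kg, fine-grained sediment now in the central/western Sound
from_east = 5.3e12    # kg, redistributed from the eastern Sound by tidal scour
years = 13.5e3        # yr, post-glacial period of riverine input (13.5 ka)

print(f"eastern share: {from_east / total_fine:.0%}")               # ~44%
print(f"implied riverine input: {(total_fine - from_east) / years:.1e} kg/yr")
```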
Minimizing noise in fiberglass aquaculture tanks: Noise reduction potential of various retrofits
Davidson, J.; Frankel, A.S.; Ellison, W.T.; Summerfelt, S.; Popper, A.N.; Mazik, P.; Bebak, J.
2007-01-01
Equipment used in intensive aquaculture systems, such as pumps and blowers can produce underwater sound levels and frequencies within the range of fish hearing. The impacts of underwater noise on fish are not well known, but limited research suggests that subjecting fish to noise could result in impairment of the auditory system, reduced growth rates, and increased stress. Consequently, reducing sound in fish tanks could result in advantages for cultured species and increased productivity for the aquaculture industry. The objective of this study was to evaluate the noise reduction potential of various retrofits to fiberglass fish culture tanks. The following structural changes were applied to tanks to reduce underwater noise: (1) inlet piping was suspended to avoid contact with the tank, (2) effluent piping was disconnected from a common drain line, (3) effluent piping was insulated beneath tanks, and (4) tanks were elevated on cement blocks and seated on insulated padding. Four combinations of the aforementioned structural changes were evaluated in duplicate and two tanks were left unchanged as controls. Control tanks had sound levels of 120.6 dB re 1 μPa. Each retrofit contributed to a reduction of underwater sound. As structural changes were combined, a cumulative reduction in sound level was observed. Tanks designed with a combination of retrofits had sound levels of 108.6 dB re 1 μPa, a four-fold reduction in sound pressure level. Sound frequency spectra indicated that the greatest sound reductions occurred between 2 and 100 Hz and demonstrated that nearby pumps and blowers created tonal frequencies that were transmitted into the tanks. The tank modifications used during this study were simple and inexpensive and could be applied to existing systems or considered when designing aquaculture facilities. © 2007 Elsevier B.V. All rights reserved.
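The "four-fold reduction in sound pressure" follows directly from the 12 dB difference between control and retrofitted tanks, since sound pressure scales as 10^(ΔdB/20):

```python
# Quick check of the "four-fold reduction in sound pressure" claim:
# a 12 dB drop in SPL corresponds to a pressure ratio of 10**(12/20) ~ 4.
control_db, retrofit_db = 120.6, 108.6
ratio = 10 ** ((control_db - retrofit_db) / 20)
print(f"pressure ratio: {ratio:.2f}x")   # ~3.98, i.e. about four-fold
```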
Sub-Audible Speech Recognition Based upon Electromyographic Signals
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C. (Inventor); Agabon, Shane T. (Inventor); Lee, Diana D. (Inventor)
2012-01-01
Method and system for processing and identifying a sub-audible signal formed by a source of sub-audible sounds. Sequences of samples of sub-audible sound patterns ("SASPs") for known words/phrases in a selected database are received for overlapping time intervals, and Signal Processing Transforms ("SPTs") are formed for each sample, as part of a matrix of entry values. The matrix is decomposed into contiguous, non-overlapping two-dimensional cells of entries, and neural net analysis is applied to estimate reference sets of weight coefficients that provide sums with optimal matches to reference sets of values. The reference sets of weight coefficients are used to determine a correspondence between a new (unknown) word/phrase and a word/phrase in the database.
ERIC Educational Resources Information Center
Wuang, Y-P.; Su, C-Y.; Huang, M-H.
2012-01-01
Background: Deficit in motor performance is common in children with intellectual disabilities (ID). A motor function measure with sound psychometric properties is indispensable for clinical and research use. The purpose of this study was to compare the psychometric properties of three commonly used clinical measures for assessing motor function in…
ERIC Educational Resources Information Center
Sáez, Leilani; Irvin, P. Shawn; Alonzo, Julie; Tindal, Gerald
2012-01-01
In 2006, the easyCBM reading assessment system was developed to support the progress monitoring of phoneme segmenting, letter names and sounds recognition, word reading, passage reading fluency, and comprehension skill development in elementary schools. More recently, the Common Core Standards in English Language Arts have been introduced as a…
Music Learning in Schools: Perspectives of a New Foundation for Music Teaching and Learning
ERIC Educational Resources Information Center
Gruhn, Wilfried; Regelski, Thomas A., Ed.
2006-01-01
Does music education need a new philosophy that is scientifically grounded on common agreements with educational and musical standards? If such standards are commonly accepted, why do people reflect philosophically about music teaching and learning? At first glance, these questions sound very abstract and theoretical because people love music, and…
Efficacy of passive acoustic screening: implications for the design of imager and MR-suite.
Moelker, Adriaan; Vogel, Mika W; Pattynama, Peter M T
2003-02-01
To investigate the efficacy of passive acoustic screening in the magnetic resonance (MR) environment by reducing direct and indirect MR-related acoustic noise, both from the patient's and health worker's perspective. Direct acoustic noise refers to sound originating from the inner and outer shrouds of the MR imager, and indirect noise to acoustic reflections from the walls of the MR suite. Sound measurements were obtained inside the magnet bore (patient position) and at the entrance of the MR imager (health worker position). Inner and outer shrouds and walls were lined with thick layers of sound insulation to eliminate the direct and indirect acoustic pathways. Sound pressure levels (SPLs) and octave band frequencies were acquired during various MR imaging sequences at 1.5 T. Inside the magnet bore, direct acoustic noise radiating from the inner shroud was most relevant, with substantial reductions of up to 18.8 dB when using passive screening of the magnetic bore. At the magnet bore entrance, blocking acoustic noise from the outer shroud and reflections showed significant reductions of 4.5 and 2.8 dB, respectively, and 9.4 dB when simultaneously applied. Inner shroud coverage contributed minimally to the overall SPL reduction. Maximum noise reduction by passive acoustic screening can be achieved by reducing direct sound conduction through the inner and outer shrouds. Additional measures to optimize the acoustic properties of the MR suite have only little effect. Copyright 2003 Wiley-Liss, Inc.
Carlson, Matthew T
2018-04-01
Language-specific restrictions on sound sequences in words can lead to automatic perceptual repair of illicit sound sequences. As an example, no Spanish words begin with /s/-consonant sequences ([#sC]), and where necessary, as in foreign loanwords, [#sC] is repaired by inserting an initial [e] (cf. esnob, from English snob). As a result, Spanish speakers tend to perceive an illusory [e] before [#sC] sequences. Interestingly, this perceptual illusion is weaker in early Spanish-English bilinguals, whose other language, English, allows [#sC]. The present study explored whether this apparent influence of the English language on Spanish is restricted to early bilinguals, whose early language experience includes a mixture of both languages, or whether later learning of a second language (L2), English, can also induce a weakening of the first language (L1) perceptual illusion. Two groups of late Spanish-English bilinguals, immersed in Spanish or English, were tested on the same Spanish AX (same-different) discrimination task used by Carlson et al. (2016), and their results were compared with those of the Spanish monolinguals from that study. Like early bilinguals, late bilinguals exhibited a reduced impact of perceptual prothesis on discrimination accuracy. Additionally, late bilinguals, particularly in English immersion, were slowest when responding against the Spanish perceptual illusion. Robust L1 perceptual illusions thus appear to be malleable in the face of later L2 learning. It is argued that these results are consonant with the need for late bilinguals to navigate alternative, conflicting representations of the same acoustic material, even in unilingual L1 speech perception tasks.
Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.
2017-01-01
Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283
Contaminant distribution and accumulation in the surface sediments of Long Island Sound
Mecray, E.L.; Buchholtz ten Brink, Marilyn R.
2000-01-01
The distribution of contaminants in surface sediments has been measured and mapped as part of a U.S. Geological Survey study of the sediment quality and dynamics of Long Island Sound. Surface samples from 219 stations were analyzed for trace (Ag, Ba, Cd, Cr, Cu, Hg, Ni, Pb, V, Zn and Zr) and major (Al, Fe, Mn, Ca, and Ti) elements, grain size, and Clostridium perfringens spores. Principal Components Analysis was used to identify metals that may covary as a function of common sources or geochemistry. The metallic elements generally have higher concentrations in fine-grained deposits, and their transport and depositional patterns mimic those of small particles. Fine-grained particles are remobilized and transported from areas of high bottom energy and deposited in less dynamic regions of the Sound. Metal concentrations in bottom sediments are high in the western part of the Sound and low in the bottom-scoured regions of the eastern Sound. The sediment chemistry was compared to model results (Signell et al., 1998) and maps of sedimentary environments (Knebel et al., 1999) to better understand the processes responsible for contaminant distribution across the Sound. Metal concentrations were normalized to grain-size and the resulting ratios are uniform in the depositional basins of the Sound and show residual signals in the eastern end as well as in some local areas. The preferential transport of fine-grained material from regions of high bottom stress is probably the dominant factor controlling the metal concentrations in different regions of Long Island Sound. This physical redistribution has implications for environmental management in the region.
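A sketch of the covariance step described above: Principal Components Analysis over standardized metal concentrations. The column names match the abstract; the data matrix here is a random stand-in for the 219 station measurements.

```python
# Sketch of the covariance analysis described: PCA over standardized metal
# concentrations (the data matrix is a placeholder, not the survey data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

metals = ["Ag", "Ba", "Cd", "Cr", "Cu", "Hg", "Ni", "Pb", "V", "Zn", "Zr"]
X = np.random.rand(219, len(metals))          # stand-in for the 219 stations
pca = PCA(n_components=3).fit(StandardScaler().fit_transform(X))
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
print("PC1 loadings:", dict(zip(metals, np.round(pca.components_[0], 2))))
```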
... my hearing? A ringing in the ears, called tinnitus, commonly occurs after noise exposure, and often becomes ... a ringing or other sound in your ear (tinnitus), which could be the result of long-term ...
2011-01-01
Background Common bean is an important legume crop with only a moderate number of short expressed sequence tags (ESTs) made with traditional methods. The goal of this research was to use full-length cDNA technology to develop ESTs that would overlap with the beginning of open reading frames and therefore be useful for gene annotation of genomic sequences. The library was also constructed to represent genes expressed under drought, low soil phosphorus and high soil aluminum toxicity. We also undertook comparisons of the full-length cDNA library to two previous non-full clone EST sets for common bean. Results Two full-length cDNA libraries were constructed: one for the drought tolerant Mesoamerican genotype BAT477 and the other one for the acid-soil tolerant Andean genotype G19833 which has been selected for genome sequencing. Plants were grown in three soil types using deep rooting cylinders subjected to drought and non-drought stress and tissues were collected from both roots and above ground parts. A total of 20,000 clones were selected robotically, half from each library. Then, nearly 10,000 clones from the G19833 library were sequenced with an average read length of 850 nucleotides. A total of 4,219 unigenes were identified consisting of 2,981 contigs and 1,238 singletons. These were functionally annotated with gene ontology terms and placed into KEGG pathways. Compared to other EST sequencing efforts in common bean, about half of the sequences were novel or represented the 5' ends of known genes. Conclusions The present full-length cDNA libraries add to the technological toolbox available for common bean and our sequencing of these clones substantially increases the number of unique EST sequences available for the common bean genome. All of this should be useful for both functional gene annotation, analysis of splice site variants and intron/exon boundary determination by comparison to soybean genes or with common bean whole-genome sequences. In addition the library has a large number of transcription factors and will be interesting for discovery and validation of drought or abiotic stress related genes in common bean. PMID:22118559
The Sound of the Microwave Background
NASA Astrophysics Data System (ADS)
Whittle, M.
2004-05-01
One of the most impressive developments in modern cosmology has been the measurement and analysis of the tiny fluctuations seen in the cosmic microwave background (CMB) radiation. When discussing these fluctuations, cosmologists frequently refer to their acoustic nature -- sound waves moving through the hot gas appear as peaks and troughs when they cross the surface of last scattering. As is now well known, recent observations quantify the amplitudes of these waves over several octaves, revealing a fundamental tone with several harmonics, whose relative strengths and pitches reveal important cosmological parameters, including global curvature. Not surprisingly, these results have wonderful pedagogical value in educating and inspiring both students and the general public. To further enhance this educational experience, I have attempted what might seem rather obvious, namely converting the CMB power spectrum into an audible sound. By raising the pitch some 50 octaves so that the fundamental falls at 200 Hz (matching its harmonic "l" value), we hear the resulting sound as a loud hissing roar. Matching the progress in observational results has been an equally impressive development of the theoretical treatment of CMB fluctuations. Using available computer simulations (e.g. CMBFAST) it is possible to recreate the subtly different sounds generated by different kinds of universe (e.g. different curvature or baryon content). Pushing further, one can generate the "true" sound, characterized by P(k), rather than the "observed" sound, characterized by C(l). From P(k), we learn that the fundamental and harmonics are offset, yielding a chord somewhere between a major and minor third. A sequence of models also allows one to follow the growth of sound during the first megayear: a descending scream, changing into a deepening roar, with subsequent growing hiss; matching the increase in wavelength caused by universal expansion, followed by the post recombination flow of gas into the small scale potential wells created by dark matter. This final sound, of course, sets the stage for all subsequent growth of cosmic structure, from stars (hiss), through galaxies (mid-range tones), to large scale structure (bass notes). Although popular presentations of CMB studies already make use of many visual and conceptual aids, introducing sound into the pedagogical mix can significantly enhance both the intellectual and the emotional impact of the subject on its audience.
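One simple way to realize the sonification described: treat each multipole l as a sinusoid with amplitude proportional to sqrt(C_l), and map l directly to frequency in Hz so the fundamental near l = 200 sounds at about 200 Hz. The sketch below uses a toy two-peak spectrum in place of a real CMBFAST/CAMB output.

```python
# Sketch of the sonification idea: sum sinusoids with amplitudes sqrt(C_l),
# mapping multipole l directly to frequency in Hz. Toy spectrum, not real data.
import numpy as np
from scipy.io import wavfile

fs, dur = 44100, 3.0
t = np.arange(int(fs * dur)) / fs
ls = np.arange(50, 1200, 10)                  # multipoles to sonify
cl = (np.exp(-((ls - 220) / 120.0) ** 2)      # toy fundamental near l ~ 220
      + 0.4 * np.exp(-((ls - 540) / 120.0) ** 2))  # toy first harmonic

audio = sum(np.sqrt(c) * np.sin(2 * np.pi * l * t) for l, c in zip(ls, cl))
audio /= np.abs(audio).max()                  # normalize to full scale
wavfile.write("cmb.wav", fs, (audio * 32767).astype(np.int16))
```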
Edwards, Jan; Beckman, Mary E; Munson, Benjamin
2004-04-01
Adults' performance on a variety of tasks suggests that phonological processing of nonwords is grounded in generalizations about sublexical patterns over all known words. A small body of research suggests that children's phonological acquisition is similarly based on generalizations over the lexicon. To test this account, production accuracy and fluency were examined in nonword repetitions by 104 children and 22 adults. Stimuli were 22 pairs of nonwords, in which one nonword contained a low-frequency or unattested two-phoneme sequence and the other contained a high-frequency sequence. For a subset of these nonword pairs, segment durations were measured. The same sound was produced with a longer duration (less fluently) when it appeared in a low-frequency sequence, as compared to a high-frequency sequence. Low-frequency sequences were also repeated with lower accuracy than high-frequency sequences. Moreover, children with smaller vocabularies showed a larger influence of frequency on accuracy than children with larger vocabularies. Taken together, these results provide support for a model of phonological acquisition in which knowledge of sublexical units emerges from generalizations made over lexical items.
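The sublexical statistic manipulated here, two-phoneme (biphone) sequence frequency, can be estimated from any phonemically transcribed lexicon. A toy sketch:

```python
# Sketch: estimating two-phoneme (biphone) sequence frequencies from a toy
# lexicon, the kind of sublexical statistic the study manipulates.
from collections import Counter

lexicon = ["k ae t", "k ae p", "t ae p", "m ae p"]   # toy phonemic forms
biphones = Counter()
for word in lexicon:
    phones = word.split()
    biphones.update(zip(phones, phones[1:]))          # adjacent phoneme pairs

total = sum(biphones.values())
for pair, n in biphones.most_common(3):
    print(pair, f"{n / total:.2f}")                   # relative biphone frequency
```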
Basting, R T; Rodrigues Júnior, A L; Serra, M C
2001-01-01
This in situ study evaluated the microhardness of sound and demineralized enamel and dentin submitted to treatment with 10% carbamide peroxide for three weeks. A 10% carbamide peroxide bleaching agent--Opalescence/Ultradent (OPA)--was evaluated against a placebo agent (PLA). Two hundred and forty dental fragments--60 sound enamel fragments (SE), 60 demineralized enamel fragments (DE), 60 sound dentin fragments (SD) and 60 demineralized dentin fragments (DD)--were randomly fixed on the vestibular surface of the first superior molars and second superior premolars of 30 volunteers. The volunteers were divided into two groups that received the bleaching or the placebo agent in different sequences and periods, in a double-blind 2 x 2 crossover study with a wash-out period of two weeks. Microhardness tests were performed on the enamel and dentin surfaces. The SE and DE submitted to treatment with OPA showed lower microhardness values than the SE and DE submitted to treatment with PLA. There were no statistical differences in microhardness values for SD and DD submitted to treatment with OPA and PLA. The results suggest that treatment with 10% carbamide peroxide bleaching material for three weeks alters the enamel microhardness, although it does not seem to alter the dentin microhardness.
Enhanced auditory spatial localization in blind echolocators.
Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A
2015-01-01
Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mellow, Tim; Kärkkäinen, Leo
2014-03-01
An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.
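For reference, the exterior Kirchhoff-Helmholtz integral being solved has the standard form below (sign and time conventions vary between texts); the term weighted by G corresponds to a monopole curtain and the term weighted by the normal derivative of G to a dipole curtain:

```latex
% Standard Kirchhoff-Helmholtz integral over a surface S enclosing the
% field point (conventions vary); the paper solves it for a plane circle.
\[
  p(\mathbf{x}) \;=\; \oint_{S}\!\left[
      p(\mathbf{y})\,\frac{\partial G(\mathbf{x}\,|\,\mathbf{y})}{\partial n}
      \;-\; G(\mathbf{x}\,|\,\mathbf{y})\,\frac{\partial p(\mathbf{y})}{\partial n}
  \right] dS(\mathbf{y}),
  \qquad
  G(\mathbf{x}\,|\,\mathbf{y}) \;=\;
  \frac{e^{-ik\,|\mathbf{x}-\mathbf{y}|}}{4\pi\,|\mathbf{x}-\mathbf{y}|}.
\]
```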
Selective attention in normal and impaired hearing.
Shinn-Cunningham, Barbara G; Best, Virginia
2008-12-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
Selective Attention in Normal and Impaired Hearing
Shinn-Cunningham, Barbara G.; Best, Virginia
2008-01-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202
Finite element modelling of sound transmission from outer to inner ear.
Areias, Bruno; Santos, Carla; Natal Jorge, Renato M; Gentil, Fernanda; Parente, Marco Pl
2016-11-01
The ear is one of the most complex organs in the human body. Sound is a sequence of pressure waves, which propagates through a compressible media such as air. The pinna concentrates the sound waves into the external auditory meatus. In this canal, the sound is conducted to the tympanic membrane. The tympanic membrane transforms the pressure variations into mechanical displacements, which are then transmitted to the ossicles. The vibration of the stapes footplate creates pressure waves in the fluid inside the cochlea; these pressure waves stimulate the hair cells, generating electrical signals which are sent to the brain through the cochlear nerve, where they are decoded. In this work, a three-dimensional finite element model of the human ear is developed. The model incorporates the tympanic membrane, ossicular bones, part of temporal bone (external auditory meatus and tympanic cavity), middle ear ligaments and tendons, cochlear fluid, skin, ear cartilage, jaw and the air in external auditory meatus and tympanic cavity. Using the finite element method, the magnitude and the phase angle of the umbo and stapes footplate displacement are calculated. Two slightly different models are used: one model takes into consideration the presence of air in the external auditory meatus while the other does not. The middle ear sound transfer function is determined for a stimulus of 60 dB SPL, applied to the outer surface of the air in the external auditory meatus. The obtained results are compared with previously published data in the literature. This study highlights the importance of external auditory meatus in the sound transmission. The pressure gain is calculated for the external auditory meatus.
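For concreteness, the 60 dB SPL stimulus applied at the meatus corresponds to a pressure amplitude of 20 mPa relative to the standard 20 μPa reference:

```python
# Quick conversion of the 60 dB SPL stimulus into pascals (re 20 uPa).
p_ref = 20e-6                          # Pa, standard reference pressure
spl_db = 60.0
p = p_ref * 10 ** (spl_db / 20)
print(f"{p * 1e3:.0f} mPa")            # 20 mPa applied at the meatus entrance
```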
Comparison of muzzle suppression and ear-level hearing protection in firearm use.
Branch, Matthew Parker
2011-06-01
To compare noise reduction of commercially available ear-level hearing protection (muffs/inserts) to that of firearm muzzle suppressors. Experimental sound measurements under consistent environmental conditions. None. Muzzle suppressors for 2 pistol and 2 rifle calibers were tested using the Bruel & Kjaer 2209 sound meter and Bruel & Kjaer 4136 microphone calibrated with the Bruel & Kjaer Pistonphone using Military-Standard 1474D placement protocol. Five shots were recorded unsuppressed and 10 shots suppressed under consistent environmental conditions. Sound reduction was then compared with the real-world noise reduction rate of the best available ear-level protectors. All suppressors offered significantly greater noise reduction than ear-level protection, usually greater than 50% better. Noise reduction of all ear-level protectors is unable to reduce the impulse pressure below 140 dB for certain common firearms, an international standard for prevention of sensorineural hearing loss. Modern muzzle-level suppression is vastly superior to ear-level protection and the only available form of suppression capable of making certain sporting arms safe for hearing. The inadequacy of standard hearing protectors with certain common firearms is not recognized by most hearing professionals or their patients and should affect the way hearing professionals counsel patients and the public.
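The 140 dB criterion makes the comparison a matter of subtracting attenuation from the muzzle peak level. The sketch below uses made-up but plausible numbers purely to illustrate the arithmetic, not the paper's measured values:

```python
# Illustrative arithmetic for the 140 dB impulse criterion; all numbers
# here are assumptions, not the study's measurements.
peak_db = 161.0                       # unsuppressed rifle peak, illustrative
protector_db = 20.0                   # assumed real-world ear-muff attenuation
suppressor_db = 28.0                  # assumed muzzle-suppressor reduction

print("at ear with muffs:     ", peak_db - protector_db, "dB")   # still > 140
print("at ear with suppressor:", peak_db - suppressor_db, "dB")  # below 140
```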
ERIC Educational Resources Information Center
West, Eva
2011-01-01
As a result of young people frequently exposing themselves to loud sounds, researchers are advocating education about the risks of contracting tinnitus. However, how pupils conceive of and learn about the biological aspects of hearing has not been extensively investigated. Consequently, the aim of the present study is to explore pupils' learning…
ERIC Educational Resources Information Center
Hughes, Robert W.; Vachon, Francois; Jones, Dylan M.
2007-01-01
The disruption of short-term memory by to-be-ignored auditory sequences (the changing-state effect) has often been characterized as attentional capture by deviant events (deviation effect). However, the present study demonstrates that changing-state and deviation effects are functionally distinct forms of auditory distraction: The disruption of…
1987-11-17
associated with stimulus intensities, sensory processes, encoding processes, perceptual mechanisms, memory systems, or response processes. Each possibility...has been proposed in the literature and the answer is not known. If SEs are due to a single mechanism, it is not stimulus intensity, a sensory ...on neural activities in the ear. Since the stimuli and the stimulus sequences were identical the ME and ME-with-feedback studies, sensory
The Influences of Progression Type and Distortion on the Perception of Terminal Power Chords
ERIC Educational Resources Information Center
Juchniewicz, Jay; Silverman, Michael J.
2013-01-01
The purpose of this study was to investigate the tonal perception and restoration of thirds within power chords with the instruments and sounds idiosyncratic to the Western rock/pop genre. Four separate chord sequences were performed on electric guitar in four versions; as full chord and power chord versions as well as under both clean-tone and…
ERIC Educational Resources Information Center
Motz, Benjamin A.; Erickson, Molly A.; Hetrick, William P.
2013-01-01
Humans perceive a wide range of temporal patterns, including those rhythms that occur in music, speech, and movement; however, there are constraints on the rhythmic patterns that we can represent. Past research has shown that sequences in which sounds occur regularly at non-metrical locations in a repeating beat period (non-integer ratio…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-05
... Group are seeking public input regarding possible approaches GSA may take in fulfilling its requirement... comprehensive and environmentally- sound approach to the certification of green Federal buildings. GSA is using... comments by one of the methods shown below on or before 60 days after publication in the Federal Register...
ERIC Educational Resources Information Center
MOAKLEY, FRANCIS X.
Effects of periodic variations in an instructional film's normal loudness level for relevant and irrelevant film sequences were measured by a multiple choice test. Rigorous pilot studies, random grouping of seventh graders for treatments, and ratings of relevant and irrelevant portions of the film by an unspecified number of judges preceded the…
ERIC Educational Resources Information Center
HAWLEY, JANE STOUDER; JENKINSON, EDWARD B.
The Indiana University English Curriculum Study Center created a sequential course of study in literature for grades seven through nine. A basic poetry sequence, focusing on student response to poetry, emphasizes sound and story in grade seven, image or picture in grade eight, and metaphor and tone in grade nine. A comparative study of the drama…
Music playing and memory trace: evidence from event-related potentials.
Kamiyama, Keiko; Katahira, Kentaro; Abla, Dilshat; Hori, Koji; Okanoya, Kazuo
2010-08-01
We examined the relationship between motor practice and auditory memory for sound sequences to evaluate the hypothesis that practice involving physical performance might enhance auditory memory. Participants learned two unfamiliar sound sequences using different training methods. Under the key-press condition, they learned a melody while pressing a key during auditory input. Under the no-key-press condition, they listened to another melody without any key pressing. The two melodies were presented alternately, and all participants were trained in both methods. Participants were instructed to pay attention under both conditions. After training, they listened to the two melodies again without pressing keys, and ERPs were recorded. During the ERP recordings, 10% of the tones in these melodies deviated from the originals. The grand-average ERPs showed that the amplitude of mismatch negativity (MMN) elicited by deviant stimuli was larger under the key-press condition than under the no-key-press condition. This effect appeared only in the high absolute pitch group, which included those with a pronounced ability to identify a note without external reference. This result suggests that the effect of training with key pressing was mediated by individual musical skills. Copyright 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
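The MMN amplitude referred to here is conventionally computed as a deviant-minus-standard difference wave averaged over a post-stimulus window. A minimal sketch with placeholder epochs (the sampling rate, window, and deviant rate are assumptions):

```python
# Sketch of the MMN measure: average deviant and standard epochs and take
# the difference wave (the epochs array and window are placeholders).
import numpy as np

fs = 500                                    # Hz, assumed EEG sampling rate
epochs = np.random.randn(200, fs)           # trials x samples, stand-in data
is_deviant = np.random.rand(200) < 0.10     # ~10% deviants, as in the study

mmn = epochs[is_deviant].mean(0) - epochs[~is_deviant].mean(0)
win = slice(int(0.10 * fs), int(0.20 * fs)) # 100-200 ms, typical MMN window
print(f"mean MMN amplitude: {mmn[win].mean():.3f} (arbitrary units)")
```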
Linking sounds to meanings: infant statistical learning in a natural language.
Hay, Jessica F; Pelucchi, Bruna; Graf Estes, Katharine; Saffran, Jenny R
2011-09-01
The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants' subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants' prior experience with the distribution of sounds that make up words in natural languages. Copyright © 2011 Elsevier Inc. All rights reserved.
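Forward and backward transitional probabilities, the statistics at issue here, are simple conditional frequencies over adjacent syllables. A toy sketch with an invented syllable stream:

```python
# Sketch of forward and backward transitional probabilities (TPs) over a
# syllable stream, the statistic infants are thought to track. Toy data.
from collections import Counter

stream = "ca-si-no bi-du-po ca-si-no du-po-bi ca-si-no".replace("-", " ").split()
pairs = Counter(zip(stream, stream[1:]))
first = Counter(a for a, _ in pairs.elements())    # counts of a as pair-initial
second = Counter(b for _, b in pairs.elements())   # counts of b as pair-final

def forward_tp(a, b):  return pairs[(a, b)] / first[a]    # P(b | a)
def backward_tp(a, b): return pairs[(a, b)] / second[b]   # P(a | b)

print(forward_tp("ca", "si"), backward_tp("ca", "si"))
```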
Noise detection in heart sound recordings.
Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L
2011-01-01
Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is sensitivity to noises commonly encountered in the clinical setting. Because the CAD sounds are faint, these noises can easily obscure the CAD sounds and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.
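One common way to implement reference-channel screening is to flag segments whose primary channel is highly coherent with a room-noise reference microphone. The sketch below uses scipy's magnitude-squared coherence on toy data; the band and threshold are assumptions, not the authors' settings:

```python
# Sketch of reference-channel noise screening: flag segments where the chest
# channel is highly coherent with a room-noise reference microphone.
import numpy as np
from scipy.signal import coherence

fs = 2000
noise = np.random.randn(10 * fs)                       # reference-channel noise
chest = 0.8 * noise + 0.6 * np.random.randn(10 * fs)   # toy contaminated signal

f, cxy = coherence(chest, noise, fs=fs, nperseg=512)
band = (f > 20) & (f < 600)                            # band of interest (assumed)
print("noisy segment" if cxy[band].mean() > 0.5 else "clean segment")
```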
Midbrain adaptation may set the stage for the perception of musical beat
2017-01-01
The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the set of beat interpretations most commonly perceived by human listeners over others. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat. PMID:29118141
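The "basic firing rate adaptation" account can be captured in a few lines: each event adds adaptation that decays between events, so sounds following longer gaps recover more and evoke stronger responses. A minimal sketch with illustrative parameters:

```python
# Minimal firing-rate adaptation sketch: responses to a rhythmic pulse train
# decay with recent activity, so positions following longer gaps fire harder.
# All parameters are illustrative, not fitted to the study's data.
import numpy as np

onsets = [0.0, 0.25, 0.375, 0.75, 1.0, 1.25]    # s, toy rhythm
tau, rate0 = 0.3, 1.0                            # recovery time constant, max rate

adapt, last, rates = 0.0, None, []
for t in onsets:
    if last is not None:
        adapt *= np.exp(-(t - last) / tau)       # adaptation recovers between events
    rates.append(rate0 * (1 - adapt))            # response scaled by remaining gain
    adapt = min(1.0, adapt + 0.5)                # each event adds adaptation
    last = t
print(np.round(rates, 2))                        # longer gaps -> stronger responses
```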
A differentially amplified motion in the ear for near-threshold sound detection
Chen, Fangyi; Zha, Dingjun; Fridberger, Anders; Zheng, Jiefu; Choudhury, Niloy; Jacques, Steven L.; Wang, Ruikang K.; Shi, Xiaorui; Nuttall, Alfred L.
2011-01-01
The ear is a remarkably sensitive pressure fluctuation detector. In guinea pigs, behavioral measurements indicate a minimum detectable sound pressure of ~20 μPa at 16 kHz. Such faint sounds produce 0.1 nm basilar membrane displacements, a distance smaller than conformational transitions in ion channels. It seems that noise within the auditory system would swamp such tiny motions, making weak sounds imperceptible. Here, a new mechanism contributing to a resolution of this problem is proposed and validated through direct measurement. We hypothesize that vibration at the apical end of hair cells is enhanced compared to the commonly measured basilar membrane side. Using in vivo optical coherence tomography, we demonstrated that apical-side vibrations peaked at a higher frequency, had different timing, and were enhanced compared to the basilar membrane. These effects depend nonlinearly on the stimulus level. The timing difference and enhancement are important for explaining how the noise problem is circumvented. PMID:21602821
NASA Technical Reports Server (NTRS)
Zuev, V. E.; Kostin, B. S.; Naats, I. E.
1986-01-01
The methods of multifrequency laser sounding (MLS) are the most effective remote methods for investigating atmospheric aerosols, since complete information on aerosol microstructure can be obtained and effective methods for estimating the aerosol optical constants can be developed. MLS data interpretation consists in solving a set of equations containing the laser sounding equations and the equations for polydispersed optical characteristics. As a rule, the laser sounding equation is written in the single-scattering approximation, and the equations for optical characteristics are written assuming that the atmospheric aerosol is formed by spherical and homogeneous particles. To remove the indeterminacy of the equations, a method of optical sounding of atmospheric aerosol was suggested, consisting in the joint use of a multifrequency lidar and a spectral photometer in a common geometrical scheme of the optical experiment. The method is used for investigating aerosols in cases when absorption by particles is small, and it indicates the minimum series of measurements necessary for interpretation.
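For reference, the single-scattering laser sounding (lidar) equation invoked in this abstract is commonly written in the following textbook form (a standard expression, not quoted from this paper):

```latex
P(\lambda, r) = P_0(\lambda)\,\frac{c\,\tau}{2}\,A\,
\frac{\beta_\pi(\lambda, r)}{r^{2}}\,
\exp\!\left(-2\int_{0}^{r}\alpha(\lambda, r')\,dr'\right)
```

where P is the power received from range r at wavelength λ, P_0 the transmitted power, cτ/2 the effective pulse length, A the receiver aperture, β_π the backscatter coefficient, and α the extinction coefficient.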
Lenggenhager, Bigna; Azevedo, Ruben T; Mancini, Alessandra; Aglioti, Salvatore Maria
2013-10-01
The ultimatum game (UG) is commonly used to study the tension between financial self-interest and social equity motives. Here, we investigated whether experimental exposure to interoceptive signals influences participants' behavior in the UG. Participants were presented with various bodily sounds--i.e., their own heart, another person's heart, or the sound of footsteps--while acting both in the role of responder and in that of proposer. We found that listening to one's own heart sound, compared to the other bodily sounds: (1) increased subjective feelings of unfairness, but not rejection behavior, in response to unfair offers and (2) increased unfair offers when playing the proposer role. These findings suggest that heightened feedback of one's own visceral processes may increase a self-centered perspective and drive socioeconomic exchanges accordingly. In addition, this study introduces a valuable procedure for manipulating online access to interoceptive signals and for exploring the interplay between viscero-sensory information and cognition.
Reilly, Jamie; Garcia, Amanda; Binney, Richard J.
2016-01-01
Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210
The effect on the transmission loss of a double wall panel of using helium gas in the gap
NASA Astrophysics Data System (ADS)
Atwal, M. S.; Crocker, M. J.
The possibility of increasing the sound-power transmission loss of a double panel by using helium gas in the gap is investigated. The transmission loss of a panel is defined as ten times the common logarithm of the ratio of the sound power incident on the panel to the sound power transmitted to the space on the other side of the panel. The work is associated with extensive research being done to develop new techniques for predicting the interior noise levels on board high-speed advanced turboprop aircraft and reducing the noise levels with a minimum weight penalty. Helium gas was chosen for its inert properties and its low impedance compared with air. With helium in the gap, the impedance mismatch experienced by the sound wave will be greater than that with air in the gap. It is seen that helium gas in the gap increases the transmission loss of the double panel over a wide range of frequencies.
The effect on the transmission loss of a double wall panel of using helium gas in the gap
NASA Technical Reports Server (NTRS)
Atwal, M. S.; Crocker, M. J.
1985-01-01
The possibility of increasing the sound-power transmission loss of a double panel by using helium gas in the gap is investigated. The transmission loss of a panel is defined as ten times the common logarithm of the ratio of the sound power incident on the panel to the sound power transmitted to the space on the other side of the panel. The work is associated with extensive research being done to develop new techniques for predicting the interior noise levels on board high-speed advanced turboprop aircraft and reducing the noise levels with a minimum weight penalty. Helium gas was chosen for its inert properties and its low impedance compared with air. With helium in the gap, the impedance mismatch experienced by the sound wave will be greater than that with air in the gap. It is seen that helium gas in the gap increases the transmission loss of the double panel over a wide range of frequencies.
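The definition given in both records above reduces to a one-line formula; a minimal numeric check (values invented):

```python
import math

def transmission_loss_db(incident_power_w, transmitted_power_w):
    """TL = 10 * log10(W_incident / W_transmitted), per the definition above."""
    return 10 * math.log10(incident_power_w / transmitted_power_w)

# If only 1/1000 of the incident sound power is transmitted, TL = 30 dB.
print(transmission_loss_db(1.0, 0.001))  # 30.0
```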
NASA Astrophysics Data System (ADS)
Zhang, Zhiwang; Wei, Qi; Cheng, Ying; Zhang, Ting; Wu, Dajian; Liu, Xiaojun
2017-02-01
The discovery of topological acoustics has revolutionized fundamental concepts of sound propagation, giving rise to strikingly unconventional acoustic edge modes immune to scattering. Because of the spinless nature of sound, the "spinlike" degree of freedom crucial to topological states in acoustic systems is commonly realized with circulating background flow or preset coupled resonator ring waveguides, which drastically increases the engineering complexity. Here we realize the acoustic pseudospin multipolar states in a simple flow-free symmetry-broken metamaterial lattice, where the clockwise (anticlockwise) sound propagation within each metamolecule emulates pseudospin down (pseudospin up). We demonstrate that tuning the strength of intermolecular coupling by simply contracting or expanding the metamolecule can induce the band inversion effect between the pseudospin dipole and quadrupole, which leads to a topological phase transition. Topologically protected edge states and reconfigurable topological one-way transmission for sound are further demonstrated. These results provide diverse routes to construct novel acoustic topological insulators with versatile applications.
Midbrain adaptation may set the stage for the perception of musical beat.
Rajendran, Vani G; Harper, Nicol S; Garcia-Lazaro, Jose A; Lesica, Nicholas A; Schnupp, Jan W H
2017-11-15
The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the set of beat interpretations most commonly perceived by human listeners over others. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat. © 2017 The Authors.
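To make the phrase "basic firing rate adaptation" concrete, here is a toy model (not the authors'; every parameter is invented) in which an adaptation variable builds with each response and decays between events, so a sound preceded by a long gap, like a beat onset after a rest, evokes a larger response than one following closely on another sound:

```python
import numpy as np

def adapted_rates(event_times, tau=0.2, gain=1.0):
    """Toy firing-rate adaptation: each event's response is the unit drive
    minus an adaptation variable that accumulates with activity and decays
    with time constant tau (seconds). Illustrative only."""
    rates, a, t_prev = [], 0.0, None
    for t in event_times:
        if t_prev is not None:
            a *= np.exp(-(t - t_prev) / tau)  # adaptation decays between events
        r = max(0.0, 1.0 - a)                 # drive minus adaptation
        rates.append(r)
        a += gain * r                         # each response builds adaptation
        t_prev = t
    return rates

# The isolated fourth event (at 0.8 s) evokes a much larger response than the
# closely spaced second and third events.
print(adapted_rates([0.0, 0.1, 0.2, 0.8]))
```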
Auditory event perception: the source-perception loop for posture in human gait.
Pastore, Richard E; Flint, Jesse D; Gaston, Jeremy R; Solomon, Matthew J
2008-01-01
There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright-stooped) of the walker who generated acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies employed by listeners in making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.
Energy Flux in the Cochlea: Evidence Against Power Amplification of the Traveling Wave.
van der Heijden, Marcel; Versteegh, Corstiaen P C
2015-10-01
Traveling waves in the inner ear exhibit an amplitude peak that shifts with frequency. The peaking is commonly believed to rely on motile processes that amplify the wave by inserting energy. We recorded the vibrations at adjacent positions on the basilar membrane in sensitive gerbil cochleae and tested the putative power amplification in two ways. First, we determined the energy flux of the traveling wave at its peak and compared it to the acoustic power entering the ear, thereby obtaining the net cochlear power gain. For soft sounds, the energy flux at the peak was 1 ± 0.6 dB less than the middle ear input power. For more intense sounds, increasingly smaller fractions of the acoustic power actually reached the peak region. Thus, we found no net power amplification of soft sounds and a strong net attenuation of intense sounds. Second, we analyzed local wave propagation on the basilar membrane. We found that the waves slowed down abruptly when approaching their peak, causing an energy densification that quantitatively matched the amplitude peaking, similar to the growth of sea waves approaching the beach. Thus, we found no local power amplification of soft sounds and strong local attenuation of intense sounds. The most parsimonious interpretation of these findings is that cochlear sensitivity is not realized by amplifying acoustic energy, but by spatially focusing it, and that dynamic compression is realized by adjusting the amount of dissipation to sound intensity.
Kuriki, Shinya; Yokosawa, Koichi; Takahashi, Makoto
2013-01-01
The auditory illusion known as the “scale illusion” occurs when a tone of an ascending scale is presented to one ear while a tone of a descending scale is presented simultaneously to the other ear, with the assignment of the two scales alternating between the ears from tone to tone. Most listeners hear illusory percepts of smooth pitch contours, with the higher half of the scale in the right ear and the lower half in the left ear. Little is known about the neural processes underlying the scale illusion. In this magnetoencephalographic study, we recorded steady-state responses to amplitude-modulated short tones having illusion-inducing pitch sequences, where the sound level of the modulated tones was manipulated to decrease monotonically with increasing pitch. The steady-state responses were decomposed into right- and left-sound components by means of separate modulation frequencies. We found that the time course of the magnitude of the response components of illusion-perceiving listeners was significantly correlated with the smooth pitch contour of the illusory percepts, and that the time course of the response components of stimulus-perceiving listeners was significantly correlated with the discontinuous pitch contour of the stimulus percepts in addition to the contour of the illusory percepts. The results suggest that the percept of the illusory pitch sequence was represented in neural activity in or near the primary auditory cortex, i.e., the site of generation of the auditory steady-state response, and that perception of the scale illusion is maintained by automatic low-level processing. PMID:24086676
Alejo, E A
1994-03-01
The Institute for Social Studies and Action showed four Filipino women and three Filipino men aged 20-28 years an animated film about a newly married couple with economic problems and their unpreparedness for modern life. The film dealt with how to improve partnerships through communication and compromise, touching upon the division of labor, women's development, and family planning. Viewers subsequently participated in a focus group discussion. The male participants were two third-year engineering students and one graduating accounting student. Among the females, there was a government employee, a factory worker, a nurse, and an elementary school teacher. Participants generally understood the central themes of the film and enjoyed the viewing. Two people were, however, confused by the sequencing of scenes and the graphical representation of characters. Moreover, most disliked the distorted physical features of the characters, the dark and dull background, and unrecognizable sound effects. The group expressed concern that the implications of the film would not be understood by the primary target audience, common people, and recommended it only for people over age 16 years. They noted, however, that the film could be modified to suit younger audiences.
Incorporating evolution of transcription factor binding sites into annotated alignments.
Bais, Abha S; Grossmann, Stefen; Vingron, Martin
2007-08-01
Identifying transcription factor binding sites (TFBSs) is essential to elucidate putative regulatory mechanisms. A common strategy is to combine cross-species conservation with single-sequence TFBS annotation to yield "conserved TFBSs". Most current methods in this field adopt a multi-step approach that segregates the two aspects. Moreover, it is widely accepted that the evolutionary dynamics of binding sites differ from those of the surrounding sequence, so it is desirable to have an approach that explicitly takes this factor into account. Although a plethora of approaches have been proposed for the prediction of conserved TFBSs, very few explicitly model TFBS evolutionary properties, and most remain multi-step. Recently, we introduced a novel approach to simultaneously align and annotate conserved TFBSs in a pair of sequences. Building upon the standard Smith-Waterman algorithm for local alignments, SimAnn introduces additional states for profiles to output extended alignments, or annotated alignments. That is, alignments with parts annotated as gaplessly aligned TFBSs (pair-profile hits) are generated. Moreover, the pair-profile-related parameters are derived in a sound statistical framework. In this article, we extend this approach to explicitly incorporate the evolution of binding sites in the SimAnn framework. We demonstrate the extension in the theoretical derivations through two position-specific evolutionary models previously used for modelling TFBS evolution. In a simulated setting, we provide a proof of concept that the approach works given the underlying assumptions, as compared to the original work. Finally, using a real dataset of experimentally verified binding sites in human-mouse sequence pairs, we compare the new approach (eSimAnn) to an existing multi-step tool that also considers TFBS evolution. Although it is widely accepted that binding sites evolve differently from the surrounding sequences, most comparative TFBS identification methods do not explicitly consider this. Additionally, prediction of conserved binding sites is usually carried out in a multi-step approach that segregates alignment from TFBS annotation. In this paper, we demonstrate how the simultaneous alignment and annotation approach of SimAnn can be further extended to incorporate TFBS evolutionary relationships. We study how alignments and binding site predictions interplay at varying evolutionary distances and for various profile qualities.
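The abstract names the standard Smith-Waterman algorithm as its starting point; here is a minimal score-only sketch of that base recursion (SimAnn's additional profile states and annotation, its actual contribution, are not shown):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Textbook Smith-Waterman local alignment score (no traceback)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                    # local alignment floor
                          H[i - 1][j - 1] + s,  # (mis)match
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))  # best local alignment score
```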
Use of Acoustic Emission and Pattern Recognition for Crack Detection of a Large Carbide Anvil
Chen, Bin; Wang, Yanan; Yan, Zhaoli
2018-01-01
Large-volume cubic high-pressure apparatus is commonly used to produce synthetic diamond. Due to the high pressure, high temperature and alternating stresses in practical production, cracks often occur in the carbide anvil, thereby resulting in significant economic losses or even casualties. Conventional methods are unsuitable for crack detection of the carbide anvil. This paper is concerned with acoustic emission-based crack detection of carbide anvils, regarded as a pattern recognition problem; this is achieved using a microphone, with methods including sound pulse detection, feature extraction, feature optimization and classifier design. Through analyzing the characteristics of the background noise, the cracked sound pulses are separated accurately from the originally continuous signal. Subsequently, three different kinds of features, including the zero-crossing rate, sound pressure levels, and linear prediction cepstrum coefficients, are presented for characterizing the cracked sound pulses. The original high-dimensional features are adaptively optimized using principal component analysis. A hybrid framework of a support vector machine with k nearest neighbors is designed to recognize the cracked sound pulses. Finally, experiments are conducted in a practical diamond workshop to validate the feasibility and efficiency of the proposed method. PMID:29382144
Use of Acoustic Emission and Pattern Recognition for Crack Detection of a Large Carbide Anvil.
Chen, Bin; Wang, Yanan; Yan, Zhaoli
2018-01-29
Large-volume cubic high-pressure apparatus is commonly used to produce synthetic diamond. Due to the high pressure, high temperature and alternating stresses in practical production, cracks often occur in the carbide anvil, thereby resulting in significant economic losses or even casualties. Conventional methods are unsuitable for crack detection of the carbide anvil. This paper is concerned with acoustic emission-based crack detection of carbide anvils, regarded as a pattern recognition problem; this is achieved using a microphone, with methods including sound pulse detection, feature extraction, feature optimization and classifier design. Through analyzing the characteristics of the background noise, the cracked sound pulses are separated accurately from the originally continuous signal. Subsequently, three different kinds of features, including the zero-crossing rate, sound pressure levels, and linear prediction cepstrum coefficients, are presented for characterizing the cracked sound pulses. The original high-dimensional features are adaptively optimized using principal component analysis. A hybrid framework of a support vector machine with k nearest neighbors is designed to recognize the cracked sound pulses. Finally, experiments are conducted in a practical diamond workshop to validate the feasibility and efficiency of the proposed method.
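A hedged sketch of the kind of pipeline both records describe: features such as the zero-crossing rate reduced by principal component analysis, then a hybrid support-vector-machine/k-nearest-neighbors classifier. The synthetic data, feature dimensions, and the specific hybrid rule below are placeholders, not the paper's:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def zero_crossing_rate(frame):
    """One of the named features: fraction of sign changes in a sound pulse."""
    return np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))

# Placeholder feature vectors (e.g., ZCR, band SPLs, cepstral coefficients)
# for detected pulses; y = 1 marks a cracked-anvil pulse.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

Z = PCA(n_components=8).fit_transform(X)      # adaptive dimensionality reduction
svm = SVC(kernel="rbf").fit(Z, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(Z, y)

def classify(z, margin=0.3):
    """One simple hybrid rule: trust the SVM when its decision value is
    confident, otherwise fall back to the k nearest neighbors."""
    d = svm.decision_function(z.reshape(1, -1))[0]
    return int(d > 0) if abs(d) > margin else int(knn.predict(z.reshape(1, -1))[0])

print(classify(Z[0]), y[0])
```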
Potential uses of vacuum bubbles in noise and vibration control
NASA Technical Reports Server (NTRS)
Ver, Istvan L.
1989-01-01
Vacuum bubbles are new acoustic elements which are dynamically more compliant than the gas volume they replace, but which are statically robust. They are made of a thin metallic shell with vacuum in their cavity. Consequently, they pose no danger in terms of contamination or fire hazard. The potential of the vacuum bubble concept for noise and vibration control was assessed with special emphasis on spacecraft and aircraft applications. The following potential uses were identified: (1) as a cladding, to reduce sound radiation of vibrating surfaces and the sound excitation of structures; (2) as a screen, to reflect or absorb an incident sound wave; and (3) as a liner, to increase low frequency sound transmission loss of double walls and to increase the low frequency sound attenuation of muffler baffles. It was found that geometric and material parameters must be controlled to a very high accuracy to obtain optimal performance and that performance is highly sensitive to variations in static pressure. Consequently, it was concluded that vacuum bubbles have more potential in spacecraft applications, where static pressure is controlled, than in aircraft applications, where large fluctuations in static pressure are common.
Impact of low-frequency sound on historic structures
NASA Astrophysics Data System (ADS)
Sutherland, Louis C.; Horonjeff, Richard D.
2005-09-01
In common usage, the term soundscape usually refers to portions of the sound spectrum audible to human observers, and perhaps more broadly to other members of the animal kingdom. There is, however, a soundscape regime at the low end of the frequency spectrum (e.g., 10-25 Hz), inaudible to humans, where nonindigenous sound energy may cause noise-induced vibrations in structures. Such low frequency components may be of sufficient magnitude to pose damage risk potential to historic structures and cultural resources. Examples include Anasazi cliff and cave dwellings, and pueblo structures of viga-type roof construction. Both are susceptible to noise-induced vibration from low-frequency sound pressures that excite resonant frequencies in these structures. The initial damage mechanism is usually fatigue cracking. Many mechanisms are subtle, temporally multiphased, and not initially evident to the naked eye. This paper reviews the types of sources posing the greatest potential threat, their low-frequency spectral characteristics, typical structural responses, and the damage risk mechanisms involved. Measured sound and vibration levels, case history studies, and conditions favorable to damage risk are presented. The paper concludes with recommendations for increasing the damage risk knowledge base to better protect these resources.
Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments
NASA Astrophysics Data System (ADS)
Horowitz, Seth S.; Simmons, Andrea M.; Blue, China
2005-09-01
Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.
The meaning of city noises: Investigating sound quality in Paris (France)
NASA Astrophysics Data System (ADS)
Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie
2004-05-01
The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This suggests that both quantitative and qualitative descriptions are needed to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.
Jambrošić, Kristian; Horvat, Marko; Domitrović, Hrvoje
2013-07-01
Urban soundscapes at five locations in the city of Zadar were perceptually assessed by on-site surveys and objectively evaluated based on monaural and binaural recordings. All locations were chosen to display as much auditory and visual diversity as possible. The unique sound installation known as the Sea Organ was included as an atypical music-like environment. Typical objective parameters related to the amount of acoustic energy, the spectral properties of sound, the amount of fluctuations, and tonal properties were calculated from the recordings. The subjective assessment was done on-site using a common survey for evaluating the properties of the sound and visual environment. The results revealed the importance of introducing context into soundscape research, because the objective parameters did not show significant correlation with the responses obtained from interviewees. Excessive values of certain objective parameters can indicate that a sound environment will be perceived as unpleasant or annoying, but its overall perception depends on how well it agrees with people's expectations. This was clearly seen for the Sea Organ, for which the highest values of the objective parameters were obtained but which was, at the same time, evaluated as the most positive sound environment in every aspect.
Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System
Anderson, Lucy A.
2016-01-01
High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
How do tympanic-membrane perforations affect human middle-ear sound transmission?
Voss, S E; Rosowski, J J; Merchant, S N; Peake, W T
2001-01-01
Although tympanic-membrane (TM) perforations are common sequelae of middle-ear disease, the hearing losses they cause have not been accurately determined, largely because additional pathological conditions occur in these ears. Our measurements of acoustic transmission before and after making controlled perforations in cadaver ears show that perforations cause frequency-dependent loss that: (1) is largest at low frequencies; (2) increases as perforation size increases; and (3) does not depend on perforation location. The dominant loss mechanism is the reduction in sound-pressure difference across the TM. Measurements of middle-ear air-space sound pressures show that transmission via direct acoustic stimulation of the oval and round windows is generally negligible. A quantitative model predicts the influence of middle-ear air-space volume on loss; with larger volumes, loss is smaller.
Basic physics of ultrasound imaging.
Aldrich, John E
2007-05-01
The appearance of ultrasound images depends critically on the physical interactions of sound with the tissues in the body. The basic principles of ultrasound imaging and the physical reasons for many common artifacts are described.
International E-Waste Management Network (IEMN)
EPA and the Environmental Protection Administration Taiwan (EPAT) have collaborated since 2011 to build global capacity for the environmentally sound management of waste electrical and electronic equipment (WEEE), which is commonly called e-waste.
Flight Performance Evaluation of Three GPS Receivers for Sounding Rocket Tracking
NASA Technical Reports Server (NTRS)
Bull, Barton; Diehl, James; Montenbruck, Oliver; Markgraf, Markus; Bauer, Frank (Technical Monitor)
2001-01-01
In preparation for the European Space Agency Maxus-4 mission, a sounding rocket test flight was carried out at Esrange, near Kiruna, Sweden on February 19, 2001 to validate existing ground facilities and range safety installations. Due to the absence of a dedicated scientific payload, the flight offered the opportunity to test multiple GPS receivers and assess their performance for the tracking of sounding rockets. The receivers included an Ashtech G12 HDMA receiver, a BAE (Canadian Marconi) Allstar receiver and a Mitel Orion receiver. All of them provide C/A code tracking on the L1 frequency to determine the user position and make use of Doppler measurements to derive the instantaneous velocity. Among the receivers, the G12 has been optimized for use under highly dynamic conditions and has earlier been flown successfully on NASA sounding rockets [Bull, ION-GPS-2000]. The Allstar is representative of common single frequency receivers for terrestrial applications and received no particular modification, except for the disabling of the common altitude and velocity constraints that would otherwise inhibit its use for space application. The Orion receiver, finally, employs the same Mitel chipset as the Allstar, but has received various firmware modifications by DLR to safeguard it against signal losses and improve its tracking performance [Montenbruck et al., ION-GPS-2000]. While the two NASA receivers were driven by a common wrap-around antenna, the DLR experiment made use of a switchable antenna system comprising a helical antenna in the tip of the rocket and two blade antennas attached to the body of the vehicle. During the boost a peak acceleration of roughly 17 g's was achieved, which resulted in a velocity of about 1100 m/s at the end of the burn. At apogee, the rocket reached a maximum altitude of over 80 km. A detailed analysis of the attained flight data is given in the paper together with an evaluation of different receiver designs and antenna concepts.
Flight Performance Evaluation of Three GPS Receivers for Sounding Rocket Tracking
NASA Technical Reports Server (NTRS)
Bull, Barton; Diehl, James; Montenbruck, Oliver; Markgraf, Markus; Bauer, Frank (Technical Monitor)
2002-01-01
In preparation for the European Space Agency Maxus-4 mission, a sounding rocket test flight was carried out at Esrange, near Kiruna, Sweden on February 19, 2001 to validate existing ground facilities and range safety installations. Due to the absence of a dedicated scientific payload, the flight offered the opportunity to test multiple GPS receivers and assess their performance for the tracking of sounding rockets. The receivers included an Ashtech G12 HDMA receiver, a BAE (Canadian Marconi) Allstar receiver and a Mitel Orion receiver. All of them provide C/A code tracking on the L1 frequency to determine the user position and make use of Doppler measurements to derive the instantaneous velocity. Among the receivers, the G12 has been optimized for use under highly dynamic conditions and has earlier been flown successfully on NASA sounding rockets. The Allstar is representative of common single frequency receivers for terrestrial applications and received no particular modification, except for the disabling of the common altitude and velocity constraints that would otherwise inhibit its use for space application. The Orion receiver, finally, employs the same Mitel chipset as the Allstar, but has received various firmware modifications by DLR to safeguard it against signal losses and improve its tracking performance. While the two NASA receivers were driven by a common wrap-around antenna, the DLR experiment made use of a switchable antenna system comprising a helical antenna in the tip of the rocket and two blade antennas attached to the body of the vehicle. During the boost a peak acceleration of roughly 17 g's was achieved, which resulted in a velocity of about 1100 m/s at the end of the burn. At apogee, the rocket reached an altitude of over 80 km. A detailed analysis of the attained flight data is given together with an evaluation of different receiver designs and antenna concepts.
NASA Astrophysics Data System (ADS)
Aying, K. P.; Otadoy, R. E.; Violanda, R.
2015-06-01
This study investigates the sound pressure level (SPL) of insert-type earphones commonly used for music listening by the general populace. The SPL from each respondent's earphones was measured by plugging the earphone into a physical ear canal model. Durations of earphone use for music listening were also gathered through short interviews. Results show that 21% of the respondents exceed the loudness/duration limits recommended by the World Health Organization (WHO).
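The abstract does not state which WHO criterion was applied; a common equal-energy rule (85 dB(A) for 8 h, with permitted time halving per 3 dB, assumed here for illustration) can be sketched as:

```python
def permissible_hours(spl_dba, ref_level=85.0, ref_hours=8.0, exchange_db=3.0):
    """Permitted daily listening time under an equal-energy rule: duration
    halves for every `exchange_db` decibels above `ref_level`."""
    return ref_hours / 2 ** ((spl_dba - ref_level) / exchange_db)

for level in (85, 94, 100):
    print(level, "dBA ->", round(permissible_hours(level), 2), "h/day")
# 85 -> 8.0, 94 -> 1.0, 100 -> 0.25
```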
Noise from Two-Blade Propellers
NASA Technical Reports Server (NTRS)
Stowell, E Z; Deming, A F
1936-01-01
The two-blade propeller, one of the most powerful sources of sound known, has been studied with the view of obtaining fundamental information concerning the noise emission. In order to eliminate engine noise, the propeller was mounted on an electric motor. A microphone was used to pick up the sound whose characteristics were studied electrically. The distribution of noise throughout the frequency range, as well as the spatial distribution about the propeller, was studied. The results are given in the form of polar diagrams. An appendix of common acoustical terms is included.
Deconvolution of magnetic acoustic change complex (mACC).
Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W
2014-11-01
The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal hearing young adults. Responses were measured to: (i) the onset of the speech train, (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons to rapidly presented sounds. This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds and offers a potential new biomarker for the discrimination of rapid sound transitions. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
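The least squares deconvolution named above can be sketched generically: shifted copies of the unknown response form the columns of a design matrix, and jittered SOAs make the least-squares solution well posed. This is a minimal toy version, not the authors' implementation:

```python
import numpy as np

def ls_deconvolve(recording, onsets, resp_len):
    """Least-squares estimate of one evoked response that, shifted to each
    stimulus onset and summed, best reproduces the recording."""
    n = len(recording)
    A = np.zeros((n, resp_len))
    for o in onsets:                      # build the overlapping design matrix
        rows = np.arange(o, min(o + resp_len, n))
        A[rows, rows - o] = 1.0
    x, *_ = np.linalg.lstsq(A, recording, rcond=None)
    return x

# Toy check: overlapping copies of a known waveform are recovered exactly.
true = np.sin(np.linspace(0, np.pi, 50))
onsets = [0, 30, 65, 95, 140]             # jittered SOAs shorter than resp_len
y = np.zeros(220)
for o in onsets:
    y[o:o + 50] += true
print(np.allclose(ls_deconvolve(y, onsets, 50), true, atol=1e-8))  # True
```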
Hawkmoths produce anti-bat ultrasound
Barber, Jesse R.; Kawahara, Akito Y.
2013-01-01
Bats and moths have been engaged in aerial warfare for nearly 65 Myr. This arms race has produced a suite of counter-adaptations in moths, including bat-detecting ears. One set of defensive strategies involves the active production of sound; tiger moths' ultrasonic replies to bat attack have been shown to startle bats, warn the predators of bad taste and jam their biosonar. Here, we report that hawkmoths in the Choerocampina produce entirely ultrasonic sounds in response to tactile stimulation and the playback of biosonar attack sequences. Males do so by grating modified scraper scales on the outer surface of the genital valves against the inner margin of the last abdominal tergum. Preliminary data indicate that females also produce ultrasound to touch and playback of echolocation attack, but they do so with an entirely different mechanism. The anti-bat function of these sounds is unknown but might include startling, cross-family acoustic mimicry, warning of unprofitability or physical defence and/or jamming of echolocation. Hawkmoths present a novel and tractable system to study both the function and evolution of anti-bat defences. PMID:23825084
Miller, Patrick J O
2006-05-01
Signal source intensity and detection range, which integrates source intensity with propagation loss, background noise and receiver hearing abilities, are important characteristics of communication signals. Apparent source levels were calculated for 819 pulsed calls and 24 whistles produced by free-ranging resident killer whales by triangulating the angles-of-arrival of sounds on two beamforming arrays towed in series. Levels in the 1-20 kHz band ranged from 131 to 168 dB re 1 microPa at 1 m, with differences in the means of different sound classes (whistles: 140.2+/-4.1 dB; variable calls: 146.6+/-6.6 dB; stereotyped calls: 152.6+/-5.9 dB), and among stereotyped call types. Repertoire diversity carried through to estimates of active space, with "long-range" stereotyped calls all containing overlapping, independently-modulated high-frequency components (mean estimated active space of 10-16 km in sea state zero) and "short-range" sounds (5-9 km) included all stereotyped calls without a high-frequency component, whistles, and variable calls. Short-range sounds are reported to be more common during social and resting behaviors, while long-range stereotyped calls predominate in dispersed travel and foraging behaviors. These results suggest that variability in sound pressure levels may reflect diverse social and ecological functions of the acoustic repertoire of killer whales.
Structure of the Bacterial Community in Different Stages of Early Childhood Caries.
Ximenes, Marcos; Armas, Rafael Dutra de; Triches, Thaisa Cezária; Cardoso, Mariane; Vieira, Ricardo de Souza
2018-01-15
To characterise in vivo the structure of bacterial communities in decayed and sound primary teeth. Samples of biofilms were collected from three groups of patients with complete and exclusively primary dentition (n = 45): G1: sound teeth (n = 15); G2: enamel lesion (n = 15); G3: dentin lesion (n = 15). DNA was extracted (CTAB 2%) from the biofilm, the partial 16S rRNA gene was amplified with Bacteria Universal Primers (BA338fGC - UN518r) and subjected to DGGE (denaturing gradient gel electrophoresis). Multidimensional scaling and ANOSIM (analysis of similarity) were employed to determine the structure of the bacterial communities. The amplicon richness was determined by averaging amplicons, with the differences between treatments determined with ANOVA, while means were compared using Tukey's test (p < 0.05). Compared to sound teeth, a greater variety of bacterial communities was found in decayed teeth. Despite the differences between the bacterial communities of sound teeth and decayed teeth, the Venn diagram showed that the samples had 38 amplicons in common. Greater amplicon richness was observed in samples of decayed teeth (enamel: 20.5 ± 2.7; dentin: 20.1 ± 2.8) compared with the sound samples (12.0 ± 4.3) (p < 0.05), indicating enhanced growth for specific groups of bacteria on decayed teeth. Although there is less bacterial diversity on sound than ECC-decayed teeth, the bacterial communities are very similar.
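A hedged sketch of the reported statistics (one-way ANOVA on amplicon richness followed by Tukey's test); the numbers are synthetic draws matched to the reported group means and standard deviations, not the study's data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic richness counts matched to the reported means/SDs
# (sound 12.0 +/- 4.3; enamel 20.5 +/- 2.7; dentin 20.1 +/- 2.8), n = 15 each.
rng = np.random.default_rng(1)
sound = rng.normal(12.0, 4.3, 15)
enamel = rng.normal(20.5, 2.7, 15)
dentin = rng.normal(20.1, 2.8, 15)

f, p = stats.f_oneway(sound, enamel, dentin)   # one-way ANOVA across groups
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

values = np.concatenate([sound, enamel, dentin])
groups = ["sound"] * 15 + ["enamel"] * 15 + ["dentin"] * 15
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```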
Lewis, James W.; Frum, Chris; Brefczynski-Lewis, Julie A.; Talkington, William J.; Walker, Nathan A.; Rapuano, Kristina M.; Kovach, Amanda L.
2012-01-01
Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, while the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when attempting to recognize action sounds. PMID:21305666
Steinschneider, Mitchell; Micheyl, Christophe
2014-01-01
The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate “auditory objects” with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the “object-related negativity” recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch. PMID:25209282
Adachi, Satoshi; Nakano, Hiroshi; Odajima, Hiroshi; Motomura, Chikako; Yoshioka, Yukiko
2016-01-01
Background: Chest auscultation is commonly performed during respiratory physical therapy (RPT). However, the changes in breath sounds in children with atelectasis have not been previously reported. The aim of this study was to clarify the characteristics of breath sounds in children with atelectasis using acoustic measurements. Method: The subjects of this study were 13 children with right middle lobe atelectasis (3–7 years) and 14 healthy children (3–7 years). Lung sounds at the bilateral fifth intercostal spaces on the midclavicular line were recorded. The right-to-left ratio (R/L ratio) and the expiration-to-inspiration ratio (E/I ratio) of the breath sound pressure were calculated separately for three octave bands (100–200 Hz, 200–400 Hz, and 400–800 Hz). These data were then compared between the atelectasis and control groups. In addition, the same measurements were repeated after treatment, including RPT, in the atelectasis group. Results: Before treatment, the inspiratory R/L ratios for all the frequency bands were significantly lower in the atelectasis group than in the control group, and the E/I ratios for all the frequency bands were significantly higher in the atelectasis group than in the control group. After treatment, the inspiratory R/L ratios of the atelectasis group did not increase significantly, but the E/I ratios decreased for all the frequency bands and became similar to those of the control group. Conclusion: Breath sound attenuation in the atelectatic area remained unchanged even after radiographical resolution, suggesting a continued decrease in local ventilation. On the other hand, the elevated E/I ratio for the atelectatic area was normalized after treatment. Therefore, the differences between inspiratory and expiratory sound intensities may be an important marker of atelectatic improvement in children. PMID:27611433
Adachi, Satoshi; Nakano, Hiroshi; Odajima, Hiroshi; Motomura, Chikako; Yoshioka, Yukiko
2016-01-01
Chest auscultation is commonly performed during respiratory physical therapy (RPT). However, the changes in breath sounds in children with atelectasis have not been previously reported. The aim of this study was to clarify the characteristics of breath sounds in children with atelectasis using acoustic measurements. The subjects of this study were 13 children with right middle lobe atelectasis (3-7 years) and 14 healthy children (3-7 years). Lung sounds at the bilateral fifth intercostal spaces on the midclavicular line were recorded. The right-to-left ratio (R/L ratio) and the expiration-to-inspiration ratio (E/I ratio) of the breath sound pressure were calculated separately for three octave bands (100-200 Hz, 200-400 Hz, and 400-800 Hz). These data were then compared between the atelectasis and control groups. In addition, the same measurements were repeated after treatment, including RPT, in the atelectasis group. Before treatment, the inspiratory R/L ratios for all the frequency bands were significantly lower in the atelectasis group than in the control group, and the E/I ratios for all the frequency bands were significantly higher in the atelectasis group than in the control group. After treatment, the inspiratory R/L ratios of the atelectasis group did not increase significantly, but the E/I ratios decreased for all the frequency bands and became similar to those of the control group. Breath sound attenuation in the atelectatic area remained unchanged even after radiographical resolution, suggesting a continued decrease in local ventilation. On the other hand, the elevated E/I ratio for the atelectatic area was normalized after treatment. Therefore, the differences between inspiratory and expiratory sound intensities may be an important marker of atelectatic improvement in children.
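The two band ratios defined in these records are straightforward to compute; a minimal sketch, with the sampling rate and the noise stand-ins for recorded lung sounds assumed here:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_rms(x, fs, lo, hi):
    """RMS sound pressure of x in one octave band [lo, hi] Hz."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.sqrt(np.mean(sosfiltfilt(sos, x) ** 2))

def rl_and_ei_ratios(right_insp, left_insp, insp, expi, fs):
    """R/L ratio from right vs. left inspiratory recordings; E/I ratio from
    expiratory vs. inspiratory segments, per octave band as in the study."""
    bands = [(100, 200), (200, 400), (400, 800)]
    rl = {b: band_rms(right_insp, fs, *b) / band_rms(left_insp, fs, *b) for b in bands}
    ei = {b: band_rms(expi, fs, *b) / band_rms(insp, fs, *b) for b in bands}
    return rl, ei

fs = 4000  # assumed sampling rate
rng = np.random.default_rng(2)
r, l, i, e = (rng.normal(size=fs) for _ in range(4))
print(rl_and_ei_ratios(r, l, i, e, fs))
```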
On Sound Footing: The Health of Your Feet
Poorly fitting shoes and other footwear are common causes of foot problems. Foot health tips: use appropriate, well-fitting footwear; wear clean socks; keep your feet clean; exercise.
ERIC Educational Resources Information Center
Enciso Bernal, Ana Maria
2014-01-01
This study investigated the effects of concurrent audio and equivalent onscreen text on the ability of learners of English as a foreign language (EFL) to form associations between textual and aural forms of target vocabulary words. The study also looked at the effects of learner control over an audio sequence on the association of textual and…
Acquisition of initial /s/-stop and stop-/s/ sequences in Greek.
Syrika, Asimina; Nicolaidis, Katerina; Edwards, Jan; Beckman, Mary E
2011-09-01
Previous work on children's acquisition of complex sequences points to a tendency for affricates to be acquired before clusters, but there is no clear evidence of a difference in order of acquisition between clusters with /s/ that violate the Sonority Sequencing Principle (SSP), such as /s/ followed by stop in onset position, and other clusters that obey the SSP. One problem with studies that have compared the acquisition of SSP-obeying and SSP-violating clusters is that the component sounds in the two types of sequences were different. This paper examines the acquisition of initial /s/-stop and stop-/s/ sequences by sixty Greek children aged 2 through 5 years. Results showed greater accuracy for the /s/-stop relative to the stop-/s/ sequences, but no difference in accuracy between /ts/, which is usually analyzed as an affricate in Greek, and the other stop-/s/ sequences. Moreover, errors for the /s/-stop sequences and /ts/ primarily involved stop substitutions, whereas errors for /ps/ and /ks/ were more variable and often involved fricative substitutions, a pattern which may have a perceptual explanation. Finally, /ts/ showed a distinct temporal pattern relative to the stop-/s/ clusters /ps/ and /ks/, similar to what has been reported for productions of Greek adults.
Acquisition of initial /s/-stop and stop-/s/ sequences in Greek
Syrika, Asimina; Nicolaidis, Katerina; Edwards, Jan; Beckman, Mary E.
2010-01-01
Previous work on children’s acquisition of complex sequences points to a tendency for affricates to be acquired before clusters, but there is no clear evidence of a difference in order of acquisition between clusters with /s/ that violate the Sonority Sequencing Principle (SSP), such as /s/ followed by stop in onset position, and other clusters that obey the SSP. One problem with studies that have compared the acquisition of SSP-obeying and SSP-violating clusters is that the component sounds in the two types of sequences were different. This paper examines the acquisition of initial /s/-stop and stop-/s/ sequences by sixty Greek children aged 2 through 5 years. Results showed greater accuracy for the /s/-stop relative to the stop-/s/ sequences, but no difference in accuracy between /ts/, which is usually analyzed as an affricate in Greek, and the other stop-/s/ sequences. Moreover, errors for the /s/-stop sequences and /ts/ primarily involved stop substitutions, whereas errors for /ps/ and /ks/ were more variable and often involved fricative substitutions, a pattern which may have a perceptual explanation. Finally, /ts/ showed a distinct temporal pattern relative to the stop-/s/ clusters /ps/ and /ks/, similarly to what has been reported for productions of Greek adults. PMID:22070044
Farris, Hamilton E; Ryan, Michael J
2017-03-01
Perceptually, grouping sounds based on their sources is critical for communication. This is especially true in túngara frog breeding aggregations, where multiple males produce overlapping calls that consist of an FM 'whine' followed by harmonic bursts called 'chucks'. Phonotactic females use at least two cues to group whines and chucks: whine-chuck spatial separation and sequence. Spatial separation is a primitive cue, whereas sequence is schema-based, as chuck production is morphologically constrained to follow whines, meaning that males cannot produce the components simultaneously. When one cue is available, females perceptually group whines and chucks using relative comparisons: components with the smallest spatial separation or those closest to the natural sequence are more likely grouped. By simultaneously varying the temporal sequence and spatial separation of a single whine and two chucks, this study measured between-cue perceptual weighting during a specific grouping task. Results show that whine-chuck spatial separation is a stronger grouping cue than temporal sequence, as grouping is more likely for stimuli with smaller spatial separation and non-natural sequence than those with larger spatial separation and natural sequence. Compared to the schema-based whine-chuck sequence, we propose that spatial cues have less variance, potentially explaining their preferred use when grouping during directional behavioral responses.
Molecular Phylogenetics: Concepts for a Newcomer.
Ajawatanawong, Pravech
Molecular phylogenetics is the study of evolutionary relationships among organisms using molecular sequence data. The aim of this review is to introduce the important terminology and general concepts of tree reconstruction to biologists who lack a strong background in the field of molecular evolution. Some modern phylogenetic programs are easy to use because of their user-friendly interfaces, but understanding the phylogenetic algorithms and substitution models, which are based on advanced statistics, is still important for carrying out and interpreting an analysis without expert guidance. Briefly, there are five general steps in carrying out a phylogenetic analysis: (1) sequence data preparation, (2) sequence alignment, (3) choosing a phylogenetic reconstruction method, (4) identification of the best tree, and (5) evaluating the tree. The concepts in this review enable biologists to grasp the basic ideas behind phylogenetic analysis and also provide a sound basis for discussions with expert phylogeneticists.
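As a concrete illustration of steps (2) through (5), the following minimal Python sketch uses Biopython to build and display a distance-based neighbor-joining tree from an existing alignment. The input file name and the 'identity' distance model are assumptions for illustration, not choices made in the review.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Step 2 (assumed already done): read a multiple sequence alignment
alignment = AlignIO.read("aligned_sequences.fasta", "fasta")  # hypothetical input file

# Steps 3-4: compute pairwise distances and build a neighbor-joining tree
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)

# Step 5 (informal): inspect the resulting topology
Phylo.draw_ascii(tree)

Distance methods such as NJ are only one of the reconstruction choices the review discusses; likelihood and Bayesian methods require dedicated programs.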
Test of a motor theory of long-term auditory memory
Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer
2012-01-01
Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75–80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve. PMID:22511719
Detecting regular sound changes in linguistics as events of concerted evolution
Hruschka, Daniel J.; Branford, Simon; Smith, Eric D.; ...
2014-12-18
Background: Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Results: Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. Conclusions: We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group.
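To make the notion of concerted (regular) change concrete, the toy Python sketch below, which is entirely separate from the authors' phylogenetic model, tallies aligned phoneme correspondences between two hypothetical cognate lists; a regular sound change shows up as the same substitution recurring across the lexicon. The word pairs are illustrative Turkic-like forms, and real cognates would first need proper alignment.

from collections import Counter

# hypothetical pre-aligned cognate pairs from two related languages
pairs = [("adak", "ajak"), ("kudruk", "kujruk"), ("tod-", "toj-")]

changes = Counter()
for w1, w2 in pairs:
    for a, b in zip(w1, w2):
        if a != b:
            changes[(a, b)] += 1

print(changes.most_common())  # [(('d', 'j'), 3)] -- one concerted d -> j change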
The biomechanics of one-footed vertical jump performance in unilateral trans-tibial amputees.
Strike, S C; Diss, C
2005-04-01
This study investigated single-support vertical jumps from a standing position in two trans-tibial amputees. The mechanisms used to achieve flight and the compensatory mechanisms used in the production of force in the absence of plantarflexors are detailed. Two participants completed countermovement maximum vertical jumps from the prosthetic and the sound limbs. The jumps were recorded by a 7-camera 512 VICON motion analysis system integrated with a Kistler forceplate. Flight height was 5 cm jumping from the prosthetic side and 18-19 cm from the sound side. The countermovement was shallower and its duration was shorter on the prosthetic side compared to the sound side. The reduced and passive range of motion at the prosthesis resulted in an asymmetrical countermovement for both participants, with the knee and ankle joints most affected. The duration of the push-off phase was not consistently affected. At take-off the joints on the sound side reached close to full extension, while on the prosthetic side they remained more flexed. Joint extension velocity in the push-off phase was similar for both participants on the sound side, though participant 2 showed earlier peaks. The pattern of joint extension velocity was not a smooth proximal-to-distal sequence on the prosthetic side. The magnitude and timing of the inter-segment extensor moments were asymmetrical for both participants. The power pattern was asymmetrical in both the countermovement and push-off phases; the lack of power generation at the ankle affected that produced at the remaining joints.
Ding, Nai; Pan, Xunyi; Luo, Cheng; Su, Naifei; Zhang, Wen; Zhang, Jianfeng
2018-01-31
How the brain groups sequential sensory events into chunks is a fundamental question in cognitive neuroscience. This study investigates whether top-down attention or specific tasks are required for the brain to apply lexical knowledge to group syllables into words. Neural responses tracking the syllabic and word rhythms of a rhythmic speech sequence were concurrently monitored using electroencephalography (EEG). The participants performed different tasks, attending to either the rhythmic speech sequence or a distractor, which was another speech stream or a nonlinguistic auditory/visual stimulus. Attention to speech, but not a lexical-meaning-related task, was required for reliable neural tracking of words, even when the distractor was a nonlinguistic stimulus presented cross-modally. Neural tracking of syllables, however, was reliably observed in all tested conditions. These results strongly suggest that neural encoding of individual auditory events (i.e., syllables) is automatic, while knowledge-based construction of temporal chunks (i.e., words) crucially relies on top-down attention. SIGNIFICANCE STATEMENT Why we cannot understand speech when not paying attention is an old question in psychology and cognitive neuroscience. Speech processing is a complex process that involves multiple stages, e.g., hearing and analyzing the speech sound, recognizing words, and combining words into phrases and sentences. The current study investigates which speech-processing stage is blocked when we do not listen carefully. We show that the brain can reliably encode syllables, basic units of speech sounds, even when we do not pay attention. Nevertheless, when distracted, the brain cannot group syllables into multisyllabic words, which are basic units for speech meaning. Therefore, the process of converting speech sound into meaning crucially relies on attention. Copyright © 2018 the authors.
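The frequency-tagging logic behind such measurements can be sketched in a few lines of Python. The 4 Hz syllable rate, 2 Hz word rate, sampling rate, and synthetic signal below are illustrative assumptions, not the study's actual parameters or data.

import numpy as np

fs = 250                      # assumed EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)  # one minute of synthetic data

# toy signal: a strong syllable-rate component (4 Hz), a weaker
# word-rate component (2 Hz), and additive noise
eeg = (np.sin(2 * np.pi * 4 * t)
       + 0.4 * np.sin(2 * np.pi * 2 * t)
       + np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for f0 in (2.0, 4.0):  # word and syllable rates
    idx = np.argmin(np.abs(freqs - f0))
    print(f"{f0:.1f} Hz tracking peak: {spectrum[idx]:.3f}")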
Auditory short-term memory in the primate auditory cortex.
Scott, Brian H; Mishkin, Mortimer
2016-06-01
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Case, J.E.; Barnes, D.F.; Plafker, George; Robbins, S.L.
1966-01-01
Sedimentary and volcanic rocks of Mesozoic and early Tertiary age form a roughly arcuate pattern in and around Prince William Sound, the epicentral region of the Alaska earthquake of 1964. These rocks include the Valdez Group, a predominantly slate and graywacke sequence of Jurassic and Cretaceous age, and the Orca Group, a younger sequence of early Tertiary age. The Orca consists of a lower unit of dense (average 2.87 g per cm3, grams per cubic centimeter) pillow basalt and greenstone intercalated with sedimentary rocks, and an upper unit of lithologically variable sandstone interbedded with siltstone or argillite. Densities of the clastic rocks in both the Valdez and Orca Groups average about 2.69 g per cm3. Granitic rocks of relatively low density (2.62 g per cm3) cut the Valdez and Orca Groups at several localities. Both the Valdez and the Orca Groups were complexly folded and extensively faulted during at least three major episodes of deformation: an early period of Cretaceous or early Tertiary orogeny; a second orogeny that probably culminated in late Eocene or early Oligocene time and was accompanied or closely followed by emplacement of granitic batholiths; and a third episode of deformation that began in late Cenozoic time and continued intermittently to the present. About 500 gravity stations were established in the Prince William Sound region in conjunction with postearthquake geologic investigations. Simple Bouguer anomaly contours trend approximately parallel to the arcuate geologic structure around the sound. Bouguer anomalies decrease northward from +40 mgal (milligals) at the southwestern end of Montague Island to -70 mgal at College and Harriman Fiords. Most of this change may be interpreted as a regional gradient caused by thickening of the continental crust. Superimposed on the gradient is a prominent gravity high of as much as 65 mgal that extends from Elrington Island on the southwest, across Knight and Glacier Islands, to the Ellamar Peninsula and Valdez on the northeast. This high coincides with the wide belt of greenstone and pillow basalt of the Orca Group and largely reflects the high density of these volcanic rocks. A large low in the east-central part of the sound is inferred to have a composite origin, resulting from the combined effects of low-density sedimentary and granitic rocks. The Prince William Sound gravity high extends southwest-northeast without major horizontal offset for more than 100 miles. Thus the belt of volcanic rocks causing the high constitutes a major, virtually continuous geologic element of south-central Alaska.
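For readers unfamiliar with gravity surveying, the simple Bouguer anomaly quoted above is conventionally obtained from observed gravity by free-air and Bouguer-slab corrections. A textbook form, not spelled out in the report itself, is:

\Delta g_{SB} = g_{\mathrm{obs}} - \gamma(\phi) + 0.3086\,h - 0.04193\,\rho\,h \quad [\mathrm{mgal}]

where g_obs is observed gravity, \gamma(\phi) is normal gravity at station latitude \phi, h is station elevation in metres, and \rho is the assumed slab density in g per cm3 (conventionally 2.67, close to the 2.69 g per cm3 measured here for the clastic rocks).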
[HYGIENIC ASSESSMENT OF NOISE FACTOR OF THE LARGE CITY].
Chubirko, M L; Stepkin, Yu I; Seredenko, O V
2015-01-01
The article addresses the negative impact of traffic noise on the health and living conditions of the population of a large city. Every day more and more modes of transport appear on the streets, and to date almost the entire transportation network has reached its traffic capacity. The increase in traffic noise certainly has an impact on the human body. The most common and intense noise is produced by urban automobile and electric transport, which is explained by heavy traffic (2-3 thousand vehicles/h) on almost all main roads in the historically developed parts of the city. In addition, sources of external noise in the city can include a railway running through a residential zone, access roads, industrial enterprises located in close proximity to residential areas and on the borders of residential zones, and military and civil aircraft. To evaluate the different noises, sound levels were measured with sound level meters. The most common parameter for assessing the noise generated by motor vehicles in residential areas, and the one used for the noise characteristics of traffic flows, is the equivalent sound level LAeq, dB(A). This parameter is used in the majority of normative-technical documentation as the hygienic noise standard. To assess noise exposure, 122 control points were selected at intersections of roads of differing traffic volume, where instrumental measurements of the equivalent sound level were made, followed by comparison with permissible levels.
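For context, the equivalent continuous A-weighted sound level used as the hygienic standard has a standard definition in acoustics practice (reproduced here for reference, not taken from the article itself):

L_{A\mathrm{eq},T} = 10 \log_{10}\!\left(\frac{1}{T}\int_{0}^{T}\frac{p_{A}^{2}(t)}{p_{0}^{2}}\,dt\right)\ \mathrm{dB}, \qquad p_{0} = 20\ \mu\mathrm{Pa}

where p_A(t) is the A-weighted sound pressure and T is the measurement period.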
Neural plasticity associated with recently versus often heard objects.
Bourquin, Nathalie M-P; Spierer, Lucas; Murray, Micah M; Clarke, Stephanie
2012-09-01
In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and more long-lasting plastic changes occurring predominantly within right hemispheric networks, which are known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures. Copyright © 2012 Elsevier Inc. All rights reserved.
Matsuda, Osamu; Hara, Masashi; Tobita, Hiroyuki; Yazaki, Kenichi; Nakagawa, Toshinori; Shimizu, Kuniyoshi; Uemura, Akira; Utsugi, Hajime
2015-01-01
Regeneration of planted forests of Cryptomeria japonica (sugi) and Chamaecyparis obtusa (hinoki) is of pressing importance to the forest administration in Japan. The low seed germination rate of these species, however, has hampered low-cost production of their seedlings for reforestation. The primary cause of the low germinability has been attributed to the highly frequent formation of anatomically unsound seeds, which are indistinguishable from sound, germinable seeds by visual observation and other common criteria such as size and weight. To establish a method for sound seed selection in these species, a hyperspectral imaging technique was used to identify a wavelength range where reflectance spectra differ clearly between sound and unsound seeds. In sound seeds of both species, reflectance in a narrow waveband centered at 1,730 nm, corresponding to a lipid absorption band in the short-wavelength infrared (SWIR) range, was greatly depressed relative to that in adjacent wavebands on either side. Such depression was absent or less prominent in unsound seeds. Based on these observations, a reflectance index SQI, short for seed quality index, was formulated using reflectance at three narrow SWIR wavebands so that it represents the extent of the depression. SQI calculated from seed-area-averaged reflectance spectra and spatial distribution patterns of pixelwise SQI within each seed area were both proven reliable criteria for sound seed selection. Enrichment of sound seeds was accompanied by an increase in the germination rate of the seed lot. Thus, the methods described are readily applicable to low-cost seedling production in combination with single-seed sowing technology. PMID:26083366
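The abstract does not give the SQI formula; a generic band-depth index of the kind commonly used in reflectance spectroscopy (purely illustrative, not the authors' definition) can be sketched in Python as follows.

def band_depth(r_left, r_center, r_right):
    """Generic normalized band-depth index: how far the center band is
    depressed below the average of its shoulders (not the paper's SQI)."""
    shoulder = (r_left + r_right) / 2
    return 1.0 - r_center / shoulder

# sound seed: deep 1,730 nm lipid absorption -> larger index (made-up values)
print(band_depth(0.55, 0.35, 0.57))  # ~0.375
# unsound seed: little depression -> index near zero
print(band_depth(0.55, 0.54, 0.57))  # ~0.036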
Broadband Metamaterial for Nonresonant Matching of Acoustic Waves
2012-03-28
Unity transmittance at an interface between bulk media is quite common for polarized electromagnetic waves incident at the Brewster angle, but it is rare for acoustic waves. This report describes a metamaterial possessing a Brewster-like angle that is completely transparent to sound waves over an ultra-broadband frequency range with >100% bandwidth.
Common pressure vessel development for the nickel hydrogen technology
NASA Technical Reports Server (NTRS)
Holleck, G.
1981-01-01
The design of a common pressure vessel nickel hydrogen cell is described. The cell has the following key features: it eliminates electrolyte bridging; provides independent electrolyte management for each unit stack; provides independent oxygen management for each unit stack; has good heat dissipation; has a mechanically sound and practical interconnection; and has as much as possible in common with state-of-the-art individual pressure vessel technology.
A Millennial Challenge: Extremism in Uncertain Times
Fiske, Susan T.
2014-01-01
This comment highlights the relevance and importance of the uncertainty-extremism topic, both scientifically and societally; identifies common themes; locates this work in a wider scientific and social context; describes what we now know and what we still do not; acknowledges some limitations, foreshadowing future directions; and discusses some potential policy relevance. Common themes emerge around the importance of social justice as sound anti-extremism policy. PMID:24511155
Inexpensive Instruments for a Sound Unit
NASA Astrophysics Data System (ADS)
Brazzle, Bob
2011-04-01
My unit on sound and waves is embedded within a long-term project in which my high school students construct a musical instrument out of common materials. The unit culminates with a performance assessment: students play the first four measures of "Somewhere Over the Rainbow"—chosen because of the octave interval of the first two notes—in the key of C, and write a short paper describing the theory underlying their instrument. My students have done this project for the past three years, and it continues to evolve. This year I added new instructional materials that I developed using a freeware program called Audacity. This software is very intuitive, and my students used it to develop their musical instruments. In this paper I will describe some of the inexpensive instructional materials in my sound unit, and how they fit with my learning goals.
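The octave check at the heart of this assessment is easy to verify numerically with the standard equal-temperament formula; the MIDI note numbering in the Python sketch below is a common convention, not something taken from the unit itself.

# Equal-tempered note frequencies: f(n) = 440 * 2**((n - 69) / 12),
# where n is the MIDI note number (69 = A4 = 440 Hz).
def freq(n):
    return 440.0 * 2 ** ((n - 69) / 12)

c4, c5 = freq(60), freq(72)        # "Some-where": C4 up one octave to C5
print(round(c4, 2), round(c5, 2))  # 261.63 523.25
print(c5 / c4)                     # 2.0 -- the exact 2:1 octave ratio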
If you have Tourette syndrome, you make unusual movements or sounds, called tics. You have little or no control over them. Common tics are throat- ... spin, or, rarely, blurt out swear words. Tourette syndrome is a disorder of the nervous system. It ...
Adaptive gain and filtering circuit for a sound reproduction system
NASA Technical Reports Server (NTRS)
Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor)
1998-01-01
Adaptive compressive gain and level dependent spectral shaping circuitry for a hearing aid include a microphone to produce an input signal and a plurality of channels connected to a common circuit output. Each channel has a preset frequency response. Each channel includes a filter with a preset frequency response to receive the input signal and to produce a filtered signal, a channel amplifier to amplify the filtered signal to produce a channel output signal, a threshold register to establish a channel threshold level, and a gain circuit. The gain circuit increases the gain of the channel amplifier when the channel output signal falls below the channel threshold level and decreases the gain of the channel amplifier when the channel output signal rises above the channel threshold level. A transducer produces sound in response to the signal passed by the common circuit output.
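The per-channel behavior described in this patent abstract (raise the gain when the channel output falls below its threshold, lower it when the output rises above) can be sketched in Python. The band edges, thresholds, and adaptation step below are illustrative assumptions, and the patent's actual implementation is hearing-aid circuitry, not this code; a real device would also adapt on a smoothed level estimate rather than instantaneous samples.

import numpy as np
from scipy.signal import butter, sosfilt

def hearing_aid(x, fs, bands, thresholds, step=1e-3):
    """Sum of channels, each band-filtered with level-dependent gain."""
    y = np.zeros_like(x, dtype=float)
    for (lo, hi), thr in zip(bands, thresholds):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        ch = sosfilt(sos, x)
        gain, out = 1.0, np.empty_like(ch)
        for i, s in enumerate(ch):
            out[i] = gain * s
            # adapt: boost the channel when quiet, compress it when loud
            gain = max(gain + (step if abs(out[i]) < thr else -step), 0.0)
        y += out
    return y

# hypothetical three-channel configuration on one second of test noise
fs = 16000
noise = np.random.randn(fs)
out = hearing_aid(noise, fs,
                  bands=[(200, 800), (800, 3000), (3000, 6000)],
                  thresholds=[0.3, 0.3, 0.3])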
Performance of an open-source heart sound segmentation algorithm on eight independent databases.
Liu, Chengyu; Springer, David; Clifford, Gari D
2017-08-01
Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprising 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
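The tolerance-window evaluation can be illustrated with a short Python sketch. The greedy matching rule and the 60 ms default tolerance below are assumptions for illustration (the paper varies the tolerance window), and the F-measure here is computed from sensitivity and positive predictivity.

def segmentation_scores(ref, det, tol=0.06):
    """Match detected onsets (seconds) to reference onsets within a
    tolerance window; return sensitivity, positive predictivity, F1."""
    det = sorted(det)
    used = [False] * len(det)
    tp = 0
    for r in sorted(ref):
        for i, d in enumerate(det):
            if not used[i] and abs(d - r) <= tol:
                used[i] = True
                tp += 1
                break
    fn, fp = len(ref) - tp, len(det) - tp
    sens = tp / (tp + fn) if ref else 0.0
    ppv = tp / (tp + fp) if det else 0.0
    f1 = 2 * sens * ppv / (sens + ppv) if (sens + ppv) else 0.0
    return sens, ppv, f1

# toy reference vs. detected S1 onsets (seconds)
print(segmentation_scores([0.10, 0.90, 1.70], [0.12, 0.93, 2.40]))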
Plummer, Emily Megan; Goller, Franz
2008-01-01
Song of the zebra finch (Taeniopygia guttata) is a complex temporal sequence generated by a drastic change to the regular oscillations of the normal respiratory pattern. It is not known how respiratory functions, such as supply of air volume and gas exchange, are controlled during song. To understand the integration between respiration and song, we manipulated respiration during song by injecting inert dental medium into the air sacs. Increased respiratory rate after injections indicates that the reduction of air affected quiet respiration and that birds compensated for the reduced air volume. During song, air sac pressure, tracheal airflow and sound amplitude decreased substantially with each injection. This decrease was consistently present during each expiratory pulse of the song motif irrespective of the air volume used. Few changes to the temporal pattern of song were noted, such as the increased duration of a minibreath in one bird and the decrease in duration of a long syllable in another bird. Despite the drastic reduction in air sac pressure, airflow and sound amplitude, no increase in abdominal muscle activity was seen. This suggests that during song, birds do not compensate for the reduced physiological or acoustic parameters. Neither somatosensory nor auditory feedback mechanisms appear to effect a correction in expiratory effort to compensate for reduced air sac pressure and sound amplitude.
Physiologic effects of voice stimuli in conscious and unconscious palliative patients-a pilot study.
Buchholz, Kerstin; Liebl, Patrick; Keinki, Christian; Herth, Natalie; Huebner, Jutta
2018-05-01
Sounds and acoustic stimuli can have an effect on human beings. In medical care, sounds are often used as parts of therapies, e.g., in different types of music therapy. Human speech also greatly affects mental status. Although calming sounds and music are widely established in the medical field, clear evidence for the effect of sounds in palliative care is scarce, and data about the effects of the human voice in general are still missing. Thus, the aim of this study was to evaluate the effects of different voice stimuli on palliative patients. Two different voice stimuli (one calm, the other turbulent) were presented in a randomized sequence, and physiological parameters (blood pressure, heart frequency, oxygen saturation, respiratory rate) were recorded. Twenty patients (14 conscious and 6 unconscious) participated in this study. There was a decrease in heart frequency as well as an increase in oxygen saturation in the group of conscious patients, whereas no significant change in blood pressure or respiratory rate was detected in either group, conscious or unconscious. Although our dataset is heterogeneous, it can be concluded that voice stimuli can influence conscious patients. However, in this setting, no effect on unconscious patients was demonstrated. More clinical research on this topic with larger groups and a broader spectrum of parameters is needed.
Liao, Jing; Chao, Zhi; Zhang, Liang
2013-11-01
To identify the common snakes in medicated liquor of Guangdong using the COI barcode sequence, and to test the feasibility of the method. The COI barcode sequences of the collected medicinal snakes were amplified and sequenced. The sequences, combined with data from GenBank, were analyzed for divergence, and a neighbor-joining (NJ) tree was built with MEGA 5.0. The genetic distances and the NJ tree demonstrated that there were 241 variable sites in these species, and the average (A + T) content of 56.2% was higher than the average (G + C) content of 43.7%. The maximum interspecific genetic distance was 0.2568, and the minimum was 0.1519. In the NJ tree, each species formed a monophyletic clade with bootstrap support of 100%. The DNA barcoding identification method based on the COI sequence is accurate and can be applied to identify the common medicinal snakes.
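The two summary statistics reported here, base composition and pairwise divergence, are straightforward to compute; the Python sketch below uses made-up toy sequences, not the study's COI data, and uncorrected p-distance rather than whichever distance model MEGA was configured to use.

def at_content(seq):
    """Fraction of A and T bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a.upper(), b.upper())) / len(a)

s1, s2 = "ATGGCATTAACG", "ATGACATTTACG"  # hypothetical aligned fragments
print(at_content(s1))       # 0.583...
print(p_distance(s1, s2))   # 0.1666...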
Is the phonological similarity effect in working memory due to proactive interference?
Baddeley, Alan D; Hitch, Graham J; Quinlan, Philip T
2018-04-12
Immediate serial recall of verbal material is highly sensitive to impairment attributable to phonological similarity. Although this has traditionally been interpreted as a within-sequence similarity effect, Engle (2007) proposed an interpretation based on interference from prior sequences, a phenomenon analogous to that found in the Peterson short-term memory (STM) task. We use the method of serial reconstruction to test this in an experiment contrasting the standard paradigm in which successive sequences are drawn from the same set of phonologically similar or dissimilar words and one in which the vowel sound on which similarity is based is switched from trial to trial, a manipulation analogous to that producing release from PI in the Peterson task. A substantial similarity effect occurs under both conditions although there is a small advantage from switching across similar sequences. There is, however, no evidence for the suggestion that the similarity effect will be absent from the very first sequence tested. Our results support the within-sequence similarity rather than a between-list PI interpretation. Reasons for the contrast with the classic Peterson short-term forgetting task are briefly discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).