Technology, Sound and Popular Music.
ERIC Educational Resources Information Center
Jones, Steve
The ability to record sound is power over sound. Musicians, producers, recording engineers, and the popular music audience often refer to the sound of a recording as something distinct from the music it contains. Popular music is primarily mediated via electronics, via sound, and not by means of written notes. The ability to preserve or modify…
Time course of the influence of musical expertise on the processing of vocal and musical sounds.
Rigoulot, S; Pell, M D; Armony, J L
2015-04-02
Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known about the temporal course of the brain processes that decode the category of sounds and how expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In later time windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of the neural dynamics of auditory processing and reveal how it is affected by stimulus category and participant expertise. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Musical Sound, Instruments, and Equipment
NASA Astrophysics Data System (ADS)
Photinos, Panos
2017-12-01
'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
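The spectrum exploration the book describes can be tried directly. Below is a minimal sketch, in Python with NumPy (assumed available), of the kind of frequency analysis such freely available sound software performs; a synthetic tone with harmonics stands in for a real instrument recording, and all values are illustrative.

```python
import numpy as np

def dominant_frequency(signal, rate):
    """Return the frequency (Hz) of the strongest spectral component."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

# Synthetic 'instrument' tone: 440 Hz fundamental plus two weaker harmonics.
rate = 44100
t = np.arange(0, 1.0, 1.0 / rate)
tone = (np.sin(2 * np.pi * 440 * t)
        + 0.5 * np.sin(2 * np.pi * 880 * t)
        + 0.25 * np.sin(2 * np.pi * 1320 * t))

print(dominant_frequency(tone, rate))  # peak at the 440 Hz fundamental
```

With a one-second window the FFT bins fall on exact 1 Hz multiples, so the fundamental is recovered exactly; real recordings need windowing and peak interpolation.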
Perez-Cruz, Pedro; Nguyen, Linh; Rhondali, Wadih; Hui, David; Palmer, J Lynn; Sevy, Ingrid; Richardson, Michael; Bruera, Eduardo
2012-10-01
Background music can be used to distract from ordinary sounds and improve wellbeing in patient care areas. Little is known about individuals' attitudes and beliefs about music versus ordinary sound in this setting. To assess the preferences of patients, caregivers and healthcare providers regarding background music or ordinary sound in outpatient and inpatient care areas, and to explore their attitudes and perceptions towards music in general. All participants were exposed to background music in outpatient or inpatient clinical settings. 99 consecutive patients, 101 caregivers and 65 out of 70 eligible healthcare providers (93%) completed a survey about music attitudes and preferences. The primary outcome was a preference for background music over ordinary sound in patient care areas. Preference for background music was high and similar across groups (70 patients (71%), 71 caregivers (71%) and 46 providers (71%); p=0.58). The three groups had very low disapproval for background music in patient care areas (10%, 9% and 12%, respectively; p=0.91). Black ethnicity independently predicted lower preference for background music (OR: 0.47, 95%CI: 0.23, 0.98). Patients, caregivers and providers reported recent use of music for themselves for the purpose of enjoyment (69%, 80% and 86%, respectively; p=0.02). Age, gender, religion and education level significantly predicted preferences for specific music styles. Background music in patient care areas was preferred to ordinary sound by patients, caregivers and providers. Demographics of the population are strong determinants of music style preferences.
Musical Sounds, Motor Resonance, and Detectable Agency.
Launay, Jacques
This paper discusses the paradox that while human music making evolved and spread in an environment where it could only occur in groups, it is now often apparently an enjoyable asocial phenomenon. Here I argue that music is, by definition, sound that we believe has been in some way organized by a human agent, meaning that listening to any musical sounds can be a social experience. There are a number of distinct mechanisms by which we might associate musical sound with agency. While some of these mechanisms involve learning motor associations with that sound, it is also possible to have a more direct relationship from musical sound to agency, and the relative importance of these potentially independent mechanisms should be further explored. Overall, I conclude that the apparent paradox of solipsistic musical engagement is in fact unproblematic, because the way that we perceive and experience musical sounds is inherently social.
Fuller, Christina; Free, Rolien; Maat, Bert; Başkent, Deniz
2012-08-01
In normal-hearing listeners, musical background has been observed to change the sound representation in the auditory system and produce enhanced performance in some speech perception tests. Based on these observations, it has been hypothesized that musical background can influence sound and speech perception by cochlear-implant users and, by extension, their quality of life. To test this hypothesis, this study explored musical background [using the Dutch Musical Background Questionnaire (DMBQ)], and self-perceived sound and speech perception and quality of life [using the Nijmegen Cochlear Implant Questionnaire (NCIQ) and the Speech Spatial and Qualities of Hearing Scale (SSQ)] in 98 postlingually deafened adult cochlear-implant recipients. In addition to self-perceived measures, speech perception scores (percentage of phonemes recognized in words presented in quiet) were obtained from patient records. The self-perceived hearing performance was associated with the objective speech perception. Forty-one respondents (44% of 94 respondents) indicated some form of formal musical training. Fifteen respondents (18% of 83 respondents) judged themselves as having musical training, experience, and knowledge. No association was observed between musical background (quantified by DMBQ) and self-perceived hearing-related performance or quality of life (quantified by NCIQ and SSQ), or speech perception in quiet.
Furnham, Adrian; Strbac, Lisa
2002-02-20
Previous research has found that introverts' performance on complex cognitive tasks is more negatively affected by distracters, e.g. music and background television, than extraverts' performance. This study extended previous research by examining whether background noise would be as distracting as music. In the presence of silence, background garage music and office noise, 38 introverts and 38 extraverts carried out a reading comprehension task, a prose recall task and a mental arithmetic task. It was predicted that there would be an interaction between personality and background sound on all three tasks: introverts would do less well on all of the tasks than extraverts in the presence of music and noise but in silence performance would be the same. A significant interaction was found on the reading comprehension task only, although a trend for this effect was clearly present on the other two tasks. It was also predicted that there would be a main effect for background sound: performance would be worse in the presence of music and noise than silence. Results confirmed this prediction. These findings support the Eysenckian hypothesis of the difference in optimum cortical arousal in introverts and extraverts.
Sound Stories for General Music
ERIC Educational Resources Information Center
Cardany, Audrey Berger
2013-01-01
Language and music literacy share a similar process of understanding that progresses from sensory experience to symbolic representation. The author identifies Bruner’s modes of understanding as they relate to using narrative in the music classroom to enhance music reading at iconic and symbolic levels. Two sound stories are included for…
Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.
Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing
2016-01-01
Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.
Influence of background music on work attention in clients with chronic schizophrenia.
Shih, Yi-Nuo; Chen, Chi-Sheng; Chiang, Hsin-Yu; Liu, Chien-Hsiou
2015-01-01
Work attention in persons with chronic schizophrenia is an important issue in vocational rehabilitation. Some of the research literature indicates that background music may influence visual attention performance. Based on occupational therapy theory, which holds that environmental sounds, colors and decorations may affect individual performance, this study examined the influence of music on work attention in persons with schizophrenia. Participants were recruited from a halfway house in Taipei. Forty-nine patients with chronic schizophrenia volunteered; they had been accepted into vocational rehabilitation and a work-seeking program. The sample included 20 females and 29 males. Participant ages ranged between 29 and 63 years, with an average age of 47 years. Using a randomized controlled trial (RCT) design, the participants were assigned to one of three conditions: a quiet environment as the control group (n = 16), classical light music as background music (n = 16), and popular music as background music (n = 17). For Group 1 (control group/quiet environment), there was no significant variance (sig = 0.172). For Group 2 (classical light music), the intervention revealed significant variance (sig = 0.071*). For Group 3 (popular music), the intervention had significant variance (sig = 0.048**). The introduction of background music tended to increase attention test scores of persons with schizophrenia. Moreover, the increase in attention test scores was statistically significant when popular music was played in the background. This result suggests that background music may improve the attention performance of persons with chronic schizophrenia. Future research with a larger sample size is required to support these results.
Toward blind removal of unwanted sound from orchestrated music
NASA Astrophysics Data System (ADS)
Chang, Soo-Young; Chun, Joohwan
2000-11-01
The problem addressed in this paper is the removal of unwanted sounds, such as a cough, from a music signal. We present some preliminary results on this problem using statistical properties of the signals. Our approach consists of three steps. We first estimate the fundamental frequencies and partials of the noise-corrupted music sound, which gives us an autoregressive (AR) model of the music. We then filter the noise-corrupted sound using the AR parameters, and the filtered signal is subtracted from the original noise-corrupted signal to obtain the disturbance. Finally, the obtained disturbance is used as a reference signal to eliminate the disturbance from the noise-corrupted music signal. The above three steps are carried out recursively using a sliding window or an infinitely growing window with an appropriate forgetting factor.
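The three-step scheme (AR modeling, prediction filtering, residual extraction) can be illustrated with a minimal batch sketch in Python/NumPy. This is not the authors' recursive windowed estimator: it uses a plain least-squares AR fit over the whole signal, and the model order and signals are illustrative assumptions.

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares fit of AR coefficients: x[n] ~ sum_k a[k] * x[n-k]."""
    rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
    X = np.array(rows)
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def separate(noisy, order=8):
    """Predict the tonal (music) part with the AR model; the prediction
    residual serves as the estimate of the transient disturbance."""
    a = fit_ar(noisy, order)
    pred = np.array([noisy[i - order:i][::-1] @ a
                     for i in range(order, len(noisy))])
    residual = noisy[order:] - pred   # disturbance estimate
    cleaned = noisy[order:] - residual  # equals the AR prediction
    return cleaned, residual
```

A steady tone is well predicted by a low-order AR model, so a short transient such as a cough shows up almost entirely in the residual, which can then serve as the reference for adaptive cancellation.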
Sound exposure during outdoor music festivals.
Tronstad, Tron V; Gelderblom, Femke B
2016-01-01
Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure.
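Exposure comparisons of this kind rest on energy-equivalent averaging of measured levels rather than arithmetic averaging. A minimal sketch of the standard equivalent continuous level (LAeq) formula over equal-length measurement intervals (not the authors' code; the interval levels are illustrative):

```python
import math

def laeq(levels_db):
    """Equivalent continuous sound level over equal-length intervals:
    LAeq = 10 * log10(mean of 10**(L/10)).
    Loud intervals dominate because averaging happens in the
    intensity domain, not the decibel domain."""
    mean_intensity = sum(10 ** (L / 10.0) for L in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_intensity)

# A quiet hour at 90 dB and a loud hour at 100 dB average to ~97.4 dB,
# not 95 dB, which is why short loud sets drive the total dose.
print(laeq([90.0, 100.0]))
```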
Reverberation negatively impacts musical sound quality for cochlear implant users.
Roy, Alexis T; Vigeant, Michelle; Munjal, Tina; Carver, Courtney; Jiradejvong, Patpong; Limb, Charles J
2015-09-01
Satisfactory musical sound quality remains a challenge for many cochlear implant (CI) users. In particular, questionnaires completed by CI users suggest that reverberation due to room acoustics can negatively impact their music listening experience. The objective of this study was to characterize more specifically the effect of reverberation on musical sound quality in CI users, normal hearing (NH) non-musicians, and NH musicians using a previously designed assessment method called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). In this method, listeners were randomly presented with an anechoic musical segment and five versions of this segment in which increasing amounts of reverberation were artificially added. Participants listened to the six reverberation versions and provided sound quality ratings between 0 (very poor) and 100 (excellent). Results demonstrated that on average CI users and NH non-musicians preferred the sound quality of anechoic versions to more reverberant versions. In comparison, NH musicians could be delineated into those who preferred the sound quality of anechoic pieces and those who preferred pieces with some reverberation. This is the first study, to our knowledge, to objectively compare the effects of reverberation on musical sound quality ratings in CI users. These results suggest that musical sound quality for CI users can be improved by non-reverberant listening conditions and musical stimuli in which reverberation is removed.
Generative electronic background music system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazurowski, Lukasz
In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is located among other related approaches within the musical-algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by the properties described below. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models, a host-model. The general mechanism is presented, including examples of the synthesized output compositions.
Sound Levels and Risk Perceptions of Music Students During Classes.
Rodrigues, Matilde A; Amorim, Marta; Silva, Manuela V; Neves, Paula; Sousa, Aida; Inácio, Octávio
2015-01-01
It is well recognized that professional musicians are at risk of hearing damage due to the exposure to high sound pressure levels during music playing. However, it is important to recognize that the musicians' exposure may start early in the course of their training as students in the classroom and at home. Studies regarding sound exposure of music students and their hearing disorders are scarce and do not take into account important influencing variables. Therefore, this study aimed to describe the sound level exposures of music students across different music styles, classes, and instruments played. Further, this investigation attempted to analyze the perceptions of students in relation to exposure to loud music and consequent health risks, as well as to characterize preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is potentiated by practice outside the school and other external activities. Differences were found between music styles, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks related to exposure to high sound pressure levels. These findings reflect the importance of starting interventions for noise risk reduction at an early stage, when musicians are commencing their activity as students.
Sight over sound in the judgment of music performance.
Tsay, Chia-Jung
2013-09-03
Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. People reliably select the actual winners of live music competitions based on silent video recordings, but neither musical novices nor professional musicians were able to identify the winners based on sound recordings or recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content.
Effects of musical training on sound pattern processing in high-school students.
Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse
2009-05-01
Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited at longer stimulus onset asynchrony (SOA) conditions in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates the detection of auditory patterns, enabling automatic recognition of sequential sound patterns over longer time periods than in non-musical counterparts.
Ecoacoustic Music for Geoscience: Sonic Physiographies and Sound Casting
NASA Astrophysics Data System (ADS)
Burtner, M.
2017-12-01
The author describes specific ecoacoustic applications in his original compositions, Sonic Physiography of a Time-Stretched Glacier (2015), Catalog of Roughness (2017), Sound Cast of Matanuska Glacier (2016) and Ecoacoustic Concerto (Eagle Rock) (2014). Ecoacoustic music uses technology to map systems from nature into music through techniques such as sonification, material amplification, and field recording. The author aspires for this music to be descriptive of the data (as one would expect from a visualization) and also to function as engaging and expressive music/sound art on its own. In this way, ecoacoustic music might provide a fitting accompaniment to a scientific presentation (such as music for a science video) while also offering an exemplary concert hall presentation for a dedicated listening public. The music can at once support the communication of scientific research and help science make inroads into culture. The author discusses how music created using the data, sounds and methods derived from earth science can recast this research into a sonic art modality. Such music can amplify the communication and dissemination of scientific knowledge by broadening the diversity of methods and formats we use to bring excellent scientific research to the public. Music can also open the public's imagination to science, inspiring curiosity and emotional resonance. Hearing geoscience as music may help a non-scientist access scientific knowledge in new ways, and it can greatly expand the types of venues in which this work can appear. Anywhere music is played (concert halls, festivals, galleries, radio, etc.) becomes a venue for scientific discovery.
Hutter, E; Grapp, M; Argstatter, H
2016-12-01
People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study. This study included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI), and musical tests for pitch discrimination, melody recognition and timbre identification were applied. As a control, 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects, CI users showed deficits in the perception of pitch, melody and timbre. Specific effects of therapy were observed in the subjective sound quality of the CI, in pitch discrimination in the high and low pitch ranges, and in timbre identification, while general learning effects were found in melody recognition. Music perception shows deficits in CI users compared to normally hearing persons. After individual music therapy during rehabilitation, improvements in this delicate area could be achieved.
Assessment of sound quality perception in cochlear implant users during music listening.
Roy, Alexis T; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J
2012-04-01
Although cochlear implant (CI) users frequently report deterioration of sound quality when listening to music, few methods exist to quantify these subjective claims. 1) To design a novel research method for quantifying sound quality perception in CI users during music listening; 2) To validate this method by assessing one attribute of music perception, bass frequency perception, which is hypothesized to be relevant to overall musical sound quality perception. Limitations in bass frequency perception contribute to CI-mediated sound quality deteriorations. The proposed method will quantify this deterioration by measuring CI users' impaired ability to make sound quality discriminations among musical stimuli with variable amounts of bass frequency removal. A method commonly used in the audio industry (multiple stimulus with hidden reference and anchor [MUSHRA]) was adapted for CI users, referred to as CI-MUSHRA. CI users and normal hearing controls were presented with 7 sound quality versions of a musical segment: 5 high pass filter cutoff versions (200-, 400-, 600-, 800-, 1000-Hz) with decreasing amounts of bass information, an unaltered version ("hidden reference"), and a highly altered version (1,000-1,200 Hz band pass filter; "anchor"). Participants provided sound quality ratings between 0 (very poor) and 100 (excellent) for each version; ratings reflected differences in perceived sound quality among stimuli. CI users had greater difficulty making overall sound quality discriminations as a function of bass frequency loss than normal hearing controls, as demonstrated by a significantly weaker correlation between bass frequency content and sound quality ratings. In particular, CI users could not perceive sound quality differences among stimuli missing up to 400 Hz of bass frequency information. Bass frequency impairments contribute to sound quality deteriorations during music listening for CI users. CI-MUSHRA provided a systematic and quantitative assessment of this deterioration.
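The stimulus construction described (an unaltered hidden reference plus five versions with increasing bass removal) can be sketched as follows. This is an illustrative Python/NumPy reconstruction using a brick-wall FFT high-pass, not necessarily the filters the authors used, and the test signal is synthetic.

```python
import numpy as np

CUTOFFS_HZ = [200, 400, 600, 800, 1000]  # the five high-pass versions

def highpass(signal, rate, cutoff_hz):
    """Brick-wall high-pass via FFT: zero all bins below the cutoff."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    spectrum[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def make_versions(segment, rate):
    """Hidden reference (unaltered) plus five bass-reduced versions,
    keyed by cutoff, ready for randomized MUSHRA-style presentation."""
    versions = {"reference": segment}
    for fc in CUTOFFS_HZ:
        versions[f"hp_{fc}"] = highpass(segment, rate, fc)
    return versions
```

In a real experiment the band-pass "anchor" and loudness normalization across versions would be added on top of this skeleton.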
Proverbio, Alice Mado; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto
2015-10-15
The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding.
Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding.
Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira
2014-01-01
Musical expertise modulates preattentive neural sound discrimination. However, this evidence largely originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which plays a key role in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.
Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari
2012-06-01
Musicians' skills in auditory processing depend highly on instrument, performance practice, and level of expertise. Yet it is not known whether the style/genre of music might shape auditory processing in the brains of musicians. Here, we aimed to tackle the role of musical style/genre in modulating neural and behavioral responses to changes in musical features. Using a novel, fast and musical-sounding multi-feature paradigm, we measured the mismatch negativity (MMN), a pre-attentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, rock/pop) and in non-musicians. Jazz and classical musicians scored higher in the musical aptitude test than band musicians and non-musicians, especially with regard to tonal abilities. These results were extended by the MMN findings: jazz musicians had larger MMN amplitude than all other experimental groups across the six different sound features, indicating a greater overall sensitivity to auditory outliers. In particular, we found enhanced processing of pitch and sliding up to pitches in jazz musicians only. Furthermore, we observed a more frontal MMN to pitch and location changes compared to the other deviants in jazz musicians, and a left lateralization of the MMN to timbre in classical musicians. These findings indicate that the characteristics of the style/genre of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in a musical context. Musicians' brains are hence shaped by the type of training, musical style/genre, and listening experiences. Copyright © 2012 Elsevier Ltd. All rights reserved.
Music and Sound in Time Processing of Children with ADHD
Carrer, Luiz Rogério Jorgensen
2015-01-01
ADHD involves cognitive and behavioral aspects with impairments in many environments of children and their families’ lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions can be of great help for studying the aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD with the hypothesis that children with ADHD have a different performance when compared with children with normal development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants of age 6–14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups with 12 children in each. Data was collected through a musical keyboard using Logic Audio Software 9.0 on the computer that recorded the participant’s performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Results: (1) performance of ADHD groups in temporal estimation of simple sounds in short time intervals (30 ms) were statistically lower than that of control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates the possibility that music can, in some way, positively modulate the symptoms of inattention in ADHD. PMID:26441688
Getting Involved in Shaping the Sounds of Black Music
ERIC Educational Resources Information Center
Reeder, Barbara
1972-01-01
Black music of African and Afro-American peoples is particularly attuned to sounds, has a metronome sense, and is analyzed through use of a density referent by which students become involved in musical composition. (RK)
2013-01-01
Background: Previous studies have demonstrated functional and structural temporal lobe abnormalities located close to the auditory cortical regions in schizophrenia. The goal of this study was to determine whether functional abnormalities exist in the cortical processing of musical sound in schizophrenia. Methods: Twelve schizophrenic patients and twelve age- and sex-matched healthy controls were recruited, and participants listened to a random sequence of two kinds of sonic entities, intervals (tritones and perfect fifths) and chords (atonal chords, diminished chords, and major triads), of varying degrees of complexity and consonance. The perception of musical sound was investigated by the auditory evoked potentials technique. Results: Our results showed that schizophrenic patients exhibited significant reductions in the amplitudes of the N1 and P2 components elicited by musical stimuli, to which consonant sounds contributed more significantly than dissonant sounds. Schizophrenic patients could not perceive the dissimilarity between interval and chord stimuli based on the evoked potentials responses as compared with the healthy controls. Conclusion: This study provided electrophysiological evidence of functional abnormalities in the cortical processing of sound complexity and music consonance in schizophrenia. The preliminary findings warrant further investigations for the underlying mechanisms. PMID:23721126
Elmer, Stefan; Klein, Carina; Kühnis, Jürg; Liem, Franziskus; Meyer, Martin; Jäncke, Lutz
2014-10-01
In this study, we used high-density EEG to evaluate whether speech and music expertise has an influence on the categorization of expertise-related and unrelated sounds. With this purpose in mind, we compared the categorization of speech, music, and neutral sounds between professional musicians, simultaneous interpreters (SIs), and controls in response to morphed speech-noise, music-noise, and speech-music continua. Our hypothesis was that music and language expertise would strengthen the memory representations of prototypical sounds, which act as a perceptual magnet for morphed variants. This means that the prototype would "attract" variants. This so-called magnet effect should be manifested by an increased assignment of morphed items to the trained category, by a reduced maximal slope of the psychometric function, as well as by differential event-related brain responses reflecting memory comparison processes (i.e., N400 and P600 responses). As a main result, we provide the first evidence for a domain-specific behavioral bias of musicians and SIs toward the trained categories, namely music and speech. In addition, SIs showed a bias toward musical items, indicating that interpreting training has a generic influence on the cognitive representation of spectrotemporal signals with similar acoustic properties to speech sounds. Notably, EEG measurements revealed clearly distinct N400 and P600 responses to both prototypical and ambiguous items between the three groups at anterior, central, and posterior scalp sites. These differential N400 and P600 responses represent synchronous activity occurring across widely distributed brain networks, and indicate a dynamical recruitment of memory processes that vary as a function of training and expertise.
Background Music and the Learning Environment: Borrowing from other Disciplines
ERIC Educational Resources Information Center
Griffin, Michael
2006-01-01
Human beings have always enjoyed a special relationship with the organisation of audible sound we call music. Through the passage of time, the roles and functions of music have represented manifold expressions to people, and in the present day music is ubiquitous and readily available to all who seek it. Recent advances in digital music technology…
Know thy sound: perceiving self and others in musical contexts.
Sevdalis, Vassilis; Keller, Peter E
2014-10-01
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory-motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual-motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice. Copyright © 2014 Elsevier B.V. All rights reserved.
Effects of placement point of background music on shopping website.
Lai, Chien-Jung; Chiang, Chia-Chi
2012-01-01
Consumer on-line behaviors are more important than ever due to the rapid growth of on-line shopping. The purposes of this study were to design placement methods of background music for shopping websites and to examine their effects on browsers' emotional and cognitive responses. Three placement points of background music during browsing, i.e., 2 min, 4 min, and 6 min from the start of browsing, were considered as entry points. Both browsing without music (no music) and browsing with constant music volume (full music) were treated as control conditions. Participants' emotional state, approach-avoidance behavior intention, and actions to adjust music volume were collected. Results showed that participants had a higher level of pleasure, arousal, and approach behavior intention for the three placement points than for no music and full music. Most of the participants in the full-music condition (5/6) adjusted the background music, whereas only 16.7% (3/18) of participants in the other conditions turned off the background music. The results indicate that starting background music some time after the onset of browsing benefits the on-line shopping atmosphere, and that it is inappropriate to play background music from the very start of browsing a shopping website. Marketers must therefore manipulate the placement of background music in a web store carefully.
Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D
2009-10-01
By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.
Baralea, Francesco; Minazzi, Vera
2008-10-01
The authors note that the element of sound and music has no place in the model of mental functioning bequeathed to us by Freud, which is dominated by the visual and the representational. They consider the reasons for this exclusion and its consequences, and ask whether the simple biographical explanation offered by Freud himself is acceptable. This contribution reconstructs the historical and cultural background to that exclusion, cites some relevant emblematic passages, and discusses Freud's position on music and on the aesthetic experience in general. Particular attention is devoted to the relationship between Freud and Lipps, which is important both for the originality of Lipps's thinking in the turn-of-the-century debate and for his ideas on the musical aspects of the foundations of psychic life, at which Freud 'stopped', as he himself wrote. Moreover, the shade of Lipps accompanied Freud throughout his scientific career from 1898 to 1938. Like all foundations, that of psychoanalysis was shaped by a system of inclusions and exclusions. The exclusion of the element of sound and music is understandable in view of the cultural background to the development of the concepts of the representational unconscious and infantile sexuality. While the consequences have been far reaching, the knowledge accumulated since that exclusion enables us to resume, albeit on a different basis, the composition of the 'unfinished symphony' of the relationship between psychoanalysis and music.
The sound of arousal in music is context-dependent.
Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter
2012-10-23
Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.
A national project to evaluate and reduce high sound pressure levels from music.
Ryberg, Johanna Bengtsson
2009-01-01
The highest recommended sound pressure levels for leisure sounds (music) in Sweden are 100 dB LAeq and 115 dB LAFmax for adults, and 97 dB LAeq and 110 dB LAFmax where children under the age of 13 have access. For arrangements intended for children, levels should be consistently less than 90 dB LAeq. In 2005, a national project was carried out with the aim of improving environments with high sound pressure levels from music, such as concert halls, restaurants, and cinemas. The project covered both live and recorded music. Of Sweden's 290 municipalities, 134 took part in the project, and 93 of these carried out sound measurements. Four hundred and seventy one establishments were investigated, 24% of which exceeded the highest recommended sound pressure levels for leisure sounds in Sweden. Of festival and concert events, 42% exceeded the recommended levels. Those who visit music events/establishments thus run a relatively high risk of exposure to harmful sound levels. Continued supervision in this field is therefore crucial.
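The LAeq figures quoted above are equivalent continuous A-weighted levels: contributions from different time segments combine on an energy scale, not by averaging the decibel values. A minimal sketch of that arithmetic (illustrative only, not taken from the Swedish project):

```python
# Energy-averaging of A-weighted sound levels into an equivalent
# continuous level (LAeq). Illustrative sketch; the example levels
# below are hypothetical, not measurements from the project.
import math

def laeq(levels_db, durations):
    """LAeq over segments with given dB levels and (same-unit) durations."""
    total = sum(durations)
    # convert each level to relative energy, time-weight, then back to dB
    energy = sum(d * 10 ** (L / 10) for L, d in zip(levels_db, durations))
    return 10 * math.log10(energy / total)

# e.g. one hour at 103 dB followed by one hour at 85 dB:
level = laeq([103, 85], [1, 1])
```

Because the louder segment dominates the energy sum, this two-hour example lands near 100 dB LAeq, far above the naive dB average of 94: a short loud set can push a whole event over the recommended limits.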
Background instrumental music and serial recall.
Nittono, H
1997-06-01
Although speech and vocal music are consistently shown to impair serial recall of visually presented items, instrumental music does not always produce a significant disruption. This study investigated the features of instrumental music that modulate the disruption of serial recall. 24 students were presented sequences of nine digits and required to recall the digits in order of presentation. Instrumental music was played either forward or backward during the task. Forward music caused significantly more disruption than did silence, whereas the reversed music did not. Some higher-order factor may be at work in the effect of background music on serial recall.
Sound Richness of Music Might Be Mediated by Color Perception: A PET Study.
Satoh, Masayuki; Nagata, Ken; Tomimoto, Hidekazu
2015-01-01
We investigated the role of the fusiform cortex in music processing with the use of PET, focusing on the perception of sound richness. Musically naïve subjects listened to familiar melodies with three kinds of accompaniments: (i) an accompaniment composed of only three basic chords (chord condition), (ii) a simple accompaniment typically used in traditional music textbooks in elementary school (simple condition), and (iii) an accompaniment with rich and flowery sounds composed by a professional composer (complex condition). Using a PET subtraction technique, we studied changes in regional cerebral blood flow (rCBF) in the simple minus chord, complex minus simple, and complex minus chord conditions. All three subtraction conditions consistently showed increases in rCBF at the posterior portion of the inferior temporal gyrus, including the lateral occipital complex (LOC) and the fusiform gyrus. We may conclude that certain association cortices such as the LOC and the fusiform cortex may represent centers of multisensory integration, with foreground and background segregation occurring at the LOC level and the recognition of richness and floweriness of stimuli occurring in the fusiform cortex, both in terms of vision and audition.
Effects of musical expertise on oscillatory brain activity in response to emotional sounds.
Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L
2017-08-01
Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as they found that musicians process emotional musical and vocal sounds more efficiently than non-musicians do. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognition of emotion-conveying music, which seems also to generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on musical and vocal sounds processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Olsen, Kirk N; Dean, Roger T; Leung, Yvonne
2016-01-01
Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous sounds, rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event 'hazard' analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral
What Does Music Sound Like for a Cochlear Implant User?
Jiam, Nicole T; Caldwell, Meredith T; Limb, Charles J
2017-09-01
Cochlear implant research and product development over the past 40 years have been heavily focused on speech comprehension, with little emphasis on music listening and enjoyment. The relatively little understanding of how music sounds to a cochlear implant user stands in stark contrast to the overall degree of importance the public places on music and quality of life. The purpose of this article is to describe what music sounds like to cochlear implant users, using a combination of existing research studies and listener descriptions. We examined the published literature on music perception in cochlear implant users, particularly postlingual cochlear implant users, with an emphasis on the primary elements of music and recorded music. Additionally, we administered an informal survey to cochlear implant users to gather first-hand descriptions of music listening experience and satisfaction from the cochlear implant population. Limitations in cochlear implant technology lead to a music listening experience that is significantly distorted compared with that of normal hearing listeners. On the basis of many studies and sources, we describe how music is frequently perceived as out-of-tune, dissonant, indistinct, emotionless, and weak in bass frequencies, especially for postlingual cochlear implant users, which may in part explain why music enjoyment and participation levels are lower after implantation. Additionally, cochlear implant users report difficulty in specific musical contexts based on factors including but not limited to genre, presence of lyrics, timbres (woodwinds, brass, instrument families), and complexity of the perceived music. Future research and cochlear implant development should target these areas as parameters for improvement in cochlear implant-mediated music perception.
Exposure to excessive sounds and hearing status in academic classical music students.
Pawlaczyk-Łuszczyńska, Małgorzata; Zamojska-Daniszewska, Małgorzata; Dudarewicz, Adam; Zaborowski, Kamil
2017-02-21
The aim of this study was to assess the hearing of music students in relation to their exposure to excessive sounds. Standard pure-tone audiometry (PTA) was performed in 168 music students, aged 22.5±2.5 years. The control group included 67 subjects, non-music students and non-musicians, aged 22.8±3.3 years. Data on the study subjects' musical experience, instruments in use, time of weekly practice and additional risk factors for noise-induced hearing loss (NIHL) were collected by means of a questionnaire survey. Sound pressure levels produced by various groups of instruments during solo and group playing were also measured and analyzed. The music students' audiometric hearing threshold levels (HTLs) were compared with the theoretical predictions calculated according to the International Organization for Standardization standard ISO 1999:2013. It was estimated that the music students were exposed for 27.1±14.3 h/week to sounds at an A-weighted equivalent-continuous sound pressure level of 89.9±6.0 dB. There were no significant differences in HTLs between the music students and the control group in the frequency range of 4000-8000 Hz. Furthermore, in each group HTLs in the frequency range 1000-8000 Hz did not exceed 20 dB HL in 83% of the examined ears. Nevertheless, high-frequency notched audiograms typical of noise-induced hearing loss were found in 13.4% and 9% of the musicians and non-musicians, respectively. The odds ratio (OR) of notching in the music students increased significantly with higher sound pressure levels (OR = 1.07, 95% confidence interval (CI): 1.014-1.13, p < 0.05). The students' HTLs were worse (higher) than those of a highly screened non-noise-exposed population. Moreover, their hearing loss was less severe than that expected from sound exposure at 3000 Hz and 4000 Hz, and more severe at 6000 Hz. The results confirm the need for further studies and development of a hearing…
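The A-weighted equivalent-continuous level reported here pools unequal exposures on an energy basis. A minimal sketch of that pooling follows; the function name and the practice schedule are hypothetical, not taken from the study:

```python
import math

def weekly_leq(segments, reference_hours=40.0):
    """Equivalent-continuous level (dB) over a reference week: pool
    (hours, level_dB) segments on an energy basis; the remaining time
    is treated as silent. Illustrative only; values below are made up."""
    energy = sum(h * 10 ** (level / 10.0) for h, level in segments)
    return 10.0 * math.log10(energy / reference_hours)

# hypothetical schedule: 20 h solo practice at 88 dB, 7 h ensemble at 95 dB
level = weekly_leq([(20, 88.0), (7, 95.0)])
```

Note the energy basis: halving exposure time at a fixed level lowers the pooled result by 3 dB, not by half the decibel value.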
A Little Background Music, Please.
ERIC Educational Resources Information Center
Giles, Martha Mead
1991-01-01
Background music could be used to provide a pleasant beginning for the school day, to help keep students quiet and relaxed in the school cafeteria at lunchtime, and to provide a midafternoon lift for bored and tired children. The most effective music pleases children without overly exciting them through jarring rhythms and loud dynamics. (nine…
Exploring the effect of sound and music on health in hospital settings: A narrative review.
Iyendo, Timothy Onosahwo
2016-11-01
Sound in hospital space has traditionally been considered in negative terms as both intrusive and unwanted, and based mainly on sound levels. However, sound level is only one aspect of the soundscape. There is strong evidence that exploring the positive aspect of sound in a hospital context can evoke positive feelings in both patients and nurses. Music psychology studies have also shown that music intervention in health care can have a positive effect on patient's emotions and recuperating processes. In this way, hospital spaces have the potential to reduce anxiety and stress, and make patients feel comfortable and secure. This paper describes a review of the literature exploring sound perception and its effect on health care. This review sorted the literature and main issues into themes concerning sound in health care spaces; sound, stress and health; positive soundscape; psychological perspective of music and emotion; music as a complementary medicine for improving health care; contradicting arguments concerning the use of music in health care; and implications for clinical practice. Using Web of Science, PubMed, Scopus, ProQuest Central, MEDLINE, and Google, a literature search on sound levels, sound sources and the impression of a soundscape was conducted. The review focused on the role and use of music on health care in clinical environments. In addition, other pertinent related materials in shaping the understanding of the field were retrieved, scanned and added into this review. The result indicated that not all noises give a negative impression within healthcare soundscapes. Listening to soothing music was shown to reduce stress, blood pressure and post-operative trauma when compared to silence. Much of the sound conveys meaningful information that is positive for both patients and nurses, in terms of soft wind, bird twitter, and ocean sounds. Music perception was demonstrated to bring about positive change in patient-reported outcomes such as eliciting
Olsen, Kirk N.; Dean, Roger T.; Leung, Yvonne
2016-01-01
Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous sounds, rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event ‘hazard’ analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral
Demonstrating Sound with Music Production Software
ERIC Educational Resources Information Center
Keeports, David
2010-01-01
Readily available software designed for the production of music can be adapted easily to the physics classroom. Programs such as Apple's GarageBand access large libraries of recorded sound waves that can be heard and displayed both before and after alterations. Tools such as real-time spectral analysers, digital effects, and audio file editors…
Music and Its Inductive Power: A Psychobiological and Evolutionary Approach to Musical Emotions
Reybrouck, Mark; Eerola, Tuomas
2017-01-01
The aim of this contribution is to broaden the concept of musical meaning from an abstract and emotionally neutral cognitive representation to an emotion-integrating description that is related to the evolutionary approach to music. Starting from the dispositional machinery for dealing with music as a temporal and sounding phenomenon, musical emotions are considered as adaptive responses to be aroused in human beings as the product of neural structures that are specialized for their processing. A theoretical and empirical background is provided in order to bring together the findings of music and emotion studies and the evolutionary approach to musical meaning. The theoretical grounding elaborates on the transition from referential to affective semantics, the distinction between expression and induction of emotions, and the tension between discrete-digital and analog-continuous processing of the sounds. The empirical background provides evidence from several findings such as infant-directed speech, referential emotive vocalizations and separation calls in lower mammals, the distinction between the acoustic and vehicle mode of sound perception, and the bodily and physiological reactions to the sounds. It is argued, finally, that early affective processing reflects the way emotions make our bodies feel, which in turn reflects on the emotions expressed and decoded. As such there is a dynamic tension between nature and nurture, which is reflected in the nature-nurture-nature cycle of musical sense-making. PMID:28421015
Influence of Music on the Behaviors of Crowd in Urban Open Public Spaces
Meng, Qi; Zhao, Tingting; Kang, Jian
2018-01-01
Sound environment plays an important role in urban open spaces, yet studies on the effects of perception of the sound environment on crowd behaviors have been limited. The aim of this study, therefore, is to explore how music, which is considered an important soundscape element, affects crowd behaviors in urban open spaces. On-site observations were performed at a 100 m × 70 m urban leisure square in Harbin, China. Typical music was used to study the effects of perception of the sound environment on crowd behaviors; then, these behaviors were classified into movement (passing by and walking around) and non-movement behaviors (sitting). The results show that the path of passing by in an urban leisure square with music was more centralized than without music. Without music, 8.3% of people passing by walked near the edge of the square, whereas with music, this percentage was zero. In terms of the speed of passing by behavior, no significant difference was observed with the presence or absence of background music. Regarding the effect of music on walking around behavior in the square, the mean area and perimeter when background music was played were smaller than without background music. The mean speed of those exhibiting walking around behavior with background music in the square was 0.296 m/s slower than when no background music was played. For those exhibiting sitting behavior, when background music was not present, crowd density showed no variation based on the distance from the sound source. When music was present, it was observed that as the distance from the sound source increased, the crowd density of those exhibiting sitting behavior decreased accordingly. PMID:29755390
General Music Teachers' Backgrounds and Multicultural Repertoire Selection
ERIC Educational Resources Information Center
Lee, Soojin
2018-01-01
The purpose of this qualitative study was to examine how teachers' backgrounds could contribute to their decisions to include music from diverse cultures. Analysis of interviews with three general music teachers indicated that their music training and experiences, ethnic backgrounds, and years of teaching experience may have influenced their…
A neurally inspired musical instrument classification system based upon the sound onset.
Newton, Michael J; Smith, Leslie S
2012-06-01
Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
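As a rough illustration of onset-based processing (not the paper's gammatone-filterbank and spiking-neuron model), an energy-rise detector can flag where a tone begins; the frame size and threshold below are arbitrary:

```python
import numpy as np

def onset_times(signal, sr, frame=256, threshold=2.0):
    """Toy energy-rise onset detector (illustrative only): flag frames
    whose energy jumps well above the mean of the preceding frames."""
    n = len(signal) // frame * frame
    energy = (signal[:n].reshape(-1, frame) ** 2).mean(axis=1)
    onsets = []
    for i in range(1, len(energy)):
        recent = energy[max(0, i - 8):i].mean()
        if energy[i] > threshold * max(recent, 1e-12):
            onsets.append(i * frame / sr)
    return onsets

# a 440 Hz tone preceded by 0.5 s of silence
sr = 8000
t = np.arange(sr) / sr
sig = np.where(t >= 0.5, np.sin(2 * np.pi * 440 * t), 0.0)
first_onset = onset_times(sig, sr)[0]
```

A descriptor like the paper's "onset fingerprint" would retain far richer per-channel timing structure than this single scalar per onset; the sketch only conveys the idea of reacting to a rapid energy rise.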
Using Music as a Background for Reading: An Exploratory Study.
ERIC Educational Resources Information Center
Mulliken, Colleen N.; Henk, William A.
1985-01-01
Reports on a study during which intermediate level students were exposed to three auditory backgrounds while reading (no music, classical music, and rock music), and their subsequent comprehension performance was measured. Concludes that the auditory background during reading may affect comprehension and that, for most students, rock music should…
When listening to rain sounds boosts arithmetic ability
Proverbio, Alice Mado; De Benedetto, Francesco; Ferrari, Maria Vittoria; Ferrarini, Giorgia
2018-01-01
Studies in the literature have provided conflicting evidence about the effects of background noise or music on concurrent cognitive tasks. Some studies have shown a detrimental effect, while others have shown a beneficial effect of background auditory stimuli. The aim of this study was to investigate the influence of agitating, happy or touching music, as opposed to environmental sounds or silence, on the ability of non-musician subjects to perform arithmetic operations. Fifty university students (25 women and 25 men, 25 introverts and 25 extroverts) volunteered for the study. The participants were administered 180 easy or difficult arithmetic operations (division, multiplication, subtraction and addition) while listening to heavy rain sounds, silence or classical music. Silence was detrimental when participants were faced with difficult arithmetic operations, as it was associated with significantly worse accuracy and slower RTs than music or rain sound conditions. This finding suggests that the benefit of background stimulation was not music-specific but possibly due to an enhanced cerebral alertness level induced by the auditory stimulation. Introverts were always faster than extroverts in solving mathematical problems, except when the latter performed calculations accompanied by the sound of heavy rain, a condition that made them as fast as introverts. While the background auditory stimuli had no effect on the arithmetic ability of either group in the easy condition, it strongly affected extroverts in the difficult condition, with RTs being faster during agitating or joyful music as well as rain sounds, compared to the silent condition. For introverts, agitating music was associated with faster response times than the silent condition. This group difference may be explained on the basis of the notion that introverts have a generally higher arousal level compared to extroverts and would therefore benefit less from the background auditory stimuli. PMID:29466472
Neuroplasticity beyond Sounds: Neural Adaptations Following Long-Term Musical Aesthetic Experiences
Reybrouck, Mark; Brattico, Elvira
2015-01-01
Capitalizing on neuroscience knowledge of how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses such as feature extraction and integration, early affective reactions and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active “agent” coping in highly individual ways with the sounds. The findings concerning the neural adaptations in musicians, following long-term exposure to music, are then reviewed by keeping in mind the distinct subprocesses of a musical aesthetic experience. We conclude that these neural adaptations can be conceived of as the product of immediate and lifelong interactions with multisensorial stimuli (having a predominant auditory component), which result in lasting changes of the internal state of the “agent”. In a continuous loop, these changes affect, in turn, the subprocesses involved in a musical aesthetic experience, towards the final goal of achieving better perceptual, motor and proprioceptive responses to the immediate demands of the sounding environment. The resulting neural adaptations in musicians closely depend on the duration of the interactions, the starting age, the involvement of attention, the amount of motor practice and the musical genre played. PMID:25807006
Saarikivi, Katri; Putkinen, Vesa; Tervaniemi, Mari; Huotilainen, Minna
2016-07-01
Previous research has demonstrated that musicians show superior neural sound discrimination when compared to non-musicians, and that these changes emerge with accumulation of training. Our aim was to investigate whether individual differences in executive functions predict training-related changes in neural sound discrimination. We measured event-related potentials induced by sound changes coupled with tests for executive functions in musically trained and non-trained children aged 9-11 years and 13-15 years. High performance in a set-shifting task, indexing cognitive flexibility, was linked to enhanced maturation of neural sound discrimination in both musically trained and non-trained children. Specifically, well-performing musically trained children already showed large mismatch negativity (MMN) responses at a young age as well as at an older age, indicating accurate sound discrimination. In contrast, the musically trained low-performing children still showed an increase in MMN amplitude with age, suggesting that they were behind their high-performing peers in the development of sound discrimination. In the non-trained group, in turn, only the high-performing children showed evidence of an age-related increase in MMN amplitude, and the low-performing children showed a small MMN with no age-related change. These latter results suggest an advantage in MMN development also for high-performing non-trained individuals. For the P3a amplitude, there was an age-related increase only in the children who performed well in the set-shifting task, irrespective of music training, indicating enhanced attention-related processes in these children. Thus, the current study provides the first evidence that, in children, cognitive flexibility may influence age-related and training-related plasticity of neural sound discrimination. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Giordano, Bruno L.; Egermann, Hauke; Bresin, Roberto
2014-01-01
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions. PMID:25551392
ERIC Educational Resources Information Center
James, Alan Russell
2000-01-01
Using music in the classroom enhances learning. Music and dance provide an opportunity for positive social interaction. Singing fosters understanding of the sound and rhythm of language. Exposing children to the patterns of different kinds of music helps them to recognize patterns in mathematics. Background music in the classroom reduces stress…
An Analysis of Sound Exposure in a University Music Rehearsal
ERIC Educational Resources Information Center
Farmer, Joe; Thrasher, Michael; Fumo, Nelson
2014-01-01
Exposure to high sound levels may lead to a variety of hearing abnormalities, including Noise-Induced Hearing Loss (NIHL). Pre-professional university music majors may experience frequent exposure to elevated sound levels, and this may have implications on their future career prospects (Jansen, Helleman, Dreschler & de Laat, 2009). Studies…
Harmonic Frequency Lowering: Effects on the Perception of Music Detail and Sound Quality.
Kirchberger, Martin; Russo, Frank A
2016-02-01
A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions. © The Author(s) 2016.
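For intuition, plain nonlinear frequency compression (the comparison condition, not HFL itself) can be sketched as a piecewise frequency map; note that above the knee, harmonics stop being integer multiples of the fundamental, which is the harmonicity problem HFL's transposition component is designed to avoid. The cutoff and ratio values below are arbitrary:

```python
def lower_frequency(f, cutoff=1500.0, ratio=2.0):
    """Illustrative nonlinear frequency compression: frequencies below
    the cutoff pass through unchanged; those above are compressed
    toward the cutoff by `ratio`."""
    if f <= cutoff:
        return f
    return cutoff + (f - cutoff) / ratio

# harmonics of A3 (220 Hz): above the knee the lowered partials are
# no longer integer multiples of 220 Hz, so harmonicity is lost
harmonics = [220.0 * k for k in range(1, 11)]
lowered = [lower_frequency(f) for f in harmonics]
```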
Affect induction through musical sounds: an ethological perspective
Huron, David
2015-01-01
How does music induce or evoke feeling states in listeners? A number of mechanisms have been proposed for how sounds induce emotions, including innate auditory responses, learned associations and mirror neuron processes. Inspired by ethology, it is suggested that the ethological concepts of signals, cues and indices offer additional analytic tools for better understanding induced affect. It is proposed that ethological concepts help explain why music is able to induce only certain emotions, why some induced emotions are similar to the displayed emotion (whereas other induced emotions differ considerably from the displayed emotion), why listeners often report feeling mixed emotions and why only some musical expressions evoke similar responses across cultures. PMID:25646521
Musical sound analysis/synthesis using vector-quantized time-varying spectra
NASA Astrophysics Data System (ADS)
Ehmann, Andreas F.; Beauchamp, James W.
2002-11-01
A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter "brightness," to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.
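The codebook construction described here is vector quantization by iterative clustering, and the index assignment is a least-squares nearest-codeword search. A toy version (plain k-means with deterministic initialization, not the authors' implementation) might look like:

```python
import numpy as np

def build_codebook(spectra, k, iters=20):
    """Toy VQ codebook: cluster spectral frames (rows of `spectra`)
    into k codewords with plain k-means; `idx` holds each frame's
    best least-squares codeword index."""
    step = max(1, len(spectra) // k)
    book = spectra[::step][:k].astype(float)   # deterministic init
    for _ in range(iters):
        # squared distance from every frame to every codeword
        d = ((spectra[:, None, :] - book[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        for j in range(k):
            if (idx == j).any():
                book[j] = spectra[idx == j].mean(axis=0)
    return book, idx

# two synthetic families of "spectra" with clearly different shapes
frames = np.vstack([np.tile([1.0, 0.0], (50, 1)),
                    np.tile([0.0, 1.0], (50, 1))])
book, idx = build_codebook(frames, k=2)
```

Resynthesis would then read `idx` over time, smooth it, and interpolate between the corresponding `book` rows, as the abstract describes.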
Accuracy of Cochlear Implant Recipients on Speech Reception in Background Music
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-01-01
Objectives This study (a) examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of three contrasting types of background music, and (b) compared performance based upon listener groups: CI recipients using conventional long-electrode (LE) devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing (NH) adults. Methods We tested 154 LE CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 NH adults on closed-set recognition of spondees presented in three contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Outcomes Signal-to-noise thresholds for speech in music (SRTM) were examined in relation to measures of speech recognition in background noise and multi-talker babble, pitch perception, and music experience. Results SRTM thresholds varied as a function of category of background music, group membership (LE, Hybrid, NH), and age. Thresholds for speech in background music were significantly correlated with measures of pitch perception and speech in background noise thresholds; auditory status was an important predictor. Conclusions Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music. PMID:23342550
The Sound Path: Adding Music to a Child Care Playground.
ERIC Educational Resources Information Center
Kern, Petra; Wolery, Mark
2002-01-01
This article discusses how musical activities were added to a childcare playground and the benefits for a young child with blindness. The six-station "Sound Path" is described, and suggestions are provided for using sound pipes to develop sensorimotor skills, social and communication skills, cognitive skills, and emotional skills. (Contains…
Piecing It Together: The Effect of Background Music on Children's Puzzle Assembly.
Koolidge, Louis; Holmes, Robyn M
2018-04-01
This study explored the effects of background music on cognitive (puzzle assembly) task performance in young children. Participants were 87 primarily European-American children (38 boys, 49 girls; mean age = 4.77 years) enrolled in early childhood classes in the northeastern United States. Children were given one minute to complete a 12-piece puzzle task in one of three background music conditions: music with lyrics, music without lyrics, and no music. The music selection was "You're Welcome" from the Disney movie "Moana." Results revealed that children who heard the music without lyrics completed more puzzle pieces than children in either the music with lyrics or no music condition. Background music without distracting lyrics may be beneficial and superior to background music with lyrics for young children's cognitive performance even when they are engaged independently in a nonverbal task.
The Association between Music Training, Background Music, and Adult Reading Comprehension
ERIC Educational Resources Information Center
Haning, Marshall
2016-01-01
The purpose of this research was to determine whether music training is correlated with increased reading comprehension skills in young adults. In addition, an attempt was made to replicate Patson and Tippett's (2011) finding that background music impairs language comprehension scores in musicians but not in nonmusicians. Participants with musical…
ERIC Educational Resources Information Center
Bartleet, Brydie-Leigh
2009-01-01
"Sound Links" examines the dynamics of community music in Australia, and the models it represents for informal music learning and teaching. This involves researching a selection of vibrant musical communities across the country, exploring their potential for complementarity and synergy with music in schools. This article focuses on the…
Affect induction through musical sounds: an ethological perspective.
Huron, David
2015-03-19
How does music induce or evoke feeling states in listeners? A number of mechanisms have been proposed for how sounds induce emotions, including innate auditory responses, learned associations and mirror neuron processes. Inspired by ethology, it is suggested that the ethological concepts of signals, cues and indices offer additional analytic tools for better understanding induced affect. It is proposed that ethological concepts help explain why music is able to induce only certain emotions, why some induced emotions are similar to the displayed emotion (whereas other induced emotions differ considerably from the displayed emotion), why listeners often report feeling mixed emotions and why only some musical expressions evoke similar responses across cultures. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Neural responses to sounds presented on and off the beat of ecologically valid music
Tierney, Adam; Kraus, Nina
2013-01-01
The tracking of rhythmic structure is a vital component of speech and music perception. It is known that sequences of identical sounds can give rise to the percept of alternating strong and weak sounds, and that this percept is linked to enhanced cortical and oscillatory responses. The neural correlates of the perception of rhythm elicited by ecologically valid, complex stimuli, however, remain unexplored. Here we report the effects of a stimulus' alignment with the beat on the brain's processing of sound. Human subjects listened to short popular music pieces while simultaneously hearing a target sound. Cortical and brainstem electrophysiological onset responses to the sound were enhanced when it was presented on the beat of the music, as opposed to shifted away from it. Moreover, the size of the effect of alignment with the beat on the cortical response correlated strongly with the ability to tap to a beat, suggesting that the ability to synchronize to the beat of simple isochronous stimuli and the ability to track the beat of complex, ecologically valid stimuli may rely on overlapping neural resources. These results suggest that the perception of musical rhythm may have robust effects on processing throughout the auditory system. PMID:23717268
"Sounds of Intent in the Early Years": A Proposed Framework of Young Children's Musical Development
ERIC Educational Resources Information Center
Voyajolu, Angela; Ockelford, Adam
2016-01-01
"Sounds of Intent in the Early Years" explores the musical development of children from birth to five years of age. Observational evidence has been utilised together with key literature on musical development and core concepts of zygonic theory (Ockelford, 2013) to investigate the applicability of the original "Sounds of…
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To investigate multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of an ordinary broadband MUSIC method based on auditory filtering, and then propose a new broadband MUSIC algorithm that uses gammatone auditory filtering with controlled frequency-component selection and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filtering stage. Detecting the direct-sound component of the source suppresses room-reverberation interference; this is fast to compute and avoids more complex de-reverberation algorithms. In addition, the pseudo-spectrum of each frequency channel is weighted by its maximum amplitude for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well: in dynamic multiple-sound-source localization experiments, the average absolute error of the estimated azimuth is smaller and the resulting histogram shows higher angular resolution.
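The core of any such method is the narrowband MUSIC pseudo-spectrum, which the broadband variant averages (here, amplitude-weighted) across filtered frequency channels. A minimal narrowband sketch in Python follows; the array geometry, source frequency, and function names are illustrative assumptions, not the proposed algorithm itself:

```python
import numpy as np

def music_spectrum(X, n_src, mic_pos, freq, c=343.0, angles=np.arange(0, 181)):
    """Narrowband MUSIC pseudo-spectrum for far-field sources on a linear array.
    X: complex snapshots, shape (n_mics, n_frames)."""
    R = X @ X.conj().T / X.shape[1]            # spatial covariance estimate
    _, V = np.linalg.eigh(R)                   # eigenvectors, ascending eigenvalues
    En = V[:, : X.shape[0] - n_src]            # noise subspace
    P = np.empty(len(angles))
    for i, a in enumerate(np.deg2rad(angles)):
        tau = mic_pos * np.cos(a) / c          # per-mic delays for angle a
        s = np.exp(-2j * np.pi * freq * tau)   # steering vector
        P[i] = 1.0 / np.abs(s.conj() @ En @ En.conj().T @ s)
    return angles, P

# Simulate one 1 kHz source at 60 degrees on an 8-mic line array (5 cm spacing).
rng = np.random.default_rng(0)
mic_pos = np.arange(8) * 0.05
true_tau = mic_pos * np.cos(np.deg2rad(60)) / 343.0
steer = np.exp(-2j * np.pi * 1000.0 * true_tau)
sig = rng.normal(size=100) + 1j * rng.normal(size=100)
X = np.outer(steer, sig) + 0.01 * (rng.normal(size=(8, 100)) + 1j * rng.normal(size=(8, 100)))
angles, P = music_spectrum(X, n_src=1, mic_pos=mic_pos, freq=1000.0)
est = angles[P.argmax()]  # the pseudo-spectrum should peak near 60 degrees
```

The method described above would run such a scan per gammatone channel within the band of interest and combine the per-channel pseudo-spectra, weighted by their maximum amplitudes.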
The Effect of Music and Sound Effects on the Listening Comprehension of Fourth Grade Students.
ERIC Educational Resources Information Center
Mann, Raymond E.
This study was designed to determine if the addition of music and sound effects to recorded stories increased the comprehension and retention of information presented on tape to fourth grade students at three levels of reading ability. Two versions of four narrated stories were recorded, one version with music and sound effects, the other with…
Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J
2015-05-01
Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.
Tervaniemi, Mari; Janhunen, Lauri; Kruck, Stefanie; Putkinen, Vesa; Huotilainen, Minna
2015-01-01
When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e., varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians' and non-musicians' accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content.
Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter
2012-01-01
Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception. PMID:22242169
Cheng, Tzu-Han; Tsai, Chen-Gia
2016-01-01
Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are penetrated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners' respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRVs than the loud passages of heavy metal songs. Listeners' respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects listeners' heart rate in response to the following music passage. These findings have potential implications for future research on the temporal dynamics of musical emotions.
The sound exposure of the audience at a music festival.
Mercier, V; Luy, D; Hohmann, B W
2003-01-01
During the Paleo Festival in Nyon, Switzerland, which took place from 24th to 29th July 2001, ten volunteers were equipped each evening with small sound level meters which continuously monitored their sound exposure as they circulated among the various festival events. Sound levels at the mixing console and at the place where people are most heavily exposed (in front of the speakers) were measured simultaneously. In addition, a sample of 601 people from the audience were interviewed over the six days of the festival and asked their opinion of the sound level and quality, as well as for details of where in the arena they preferred to listen to the concerts, whether they used ear plugs, whether they had experienced any tinnitus, and if so how long it had persisted. The individual sound exposure during a typical evening was on average 95 dB(A), although 8% of the volunteers were exposed to sound levels higher than 100 dB(A). Only 5% of the audience wore ear plugs throughout the concert, while 34% used them occasionally. While some 36% of the people interviewed reported that they had experienced tinnitus after listening to loud music, the majority found both the music quality and the sound level good. The sound level limit of 100 dB(A) at the place where people are most heavily exposed seems to be a good compromise between the public health issue, the demands of artists and organisers, and the expectations of the public. However, considering the average sound levels to which the public are exposed during a single evening, it is recommended that ear plugs be used by concert-goers who attend more than one day of the festival.
The association of noise sensitivity with music listening, training, and aptitude.
Kliuchko, Marina; Heinonen-Guzejev, Marja; Monacis, Lucia; Gold, Benjamin P; Heikkilä, Kauko V; Spinosa, Vittoria; Tervaniemi, Mari; Brattico, Elvira
2015-01-01
After intensive, long-term musical training, the auditory system of a musician is specifically tuned to perceive musical sounds. We wished to find out whether a musician's auditory system also develops increased sensitivity to any sound of everyday life, experiencing them as noise. For this purpose, an online survey, including questionnaires on noise sensitivity, musical background, and listening tests for assessing musical aptitude, was administered to 197 participants in Finland and Italy. Subjective noise sensitivity (assessed with the Weinstein's Noise Sensitivity Scale) was analyzed for associations with musicianship, musical aptitude, weekly time spent listening to music, and the importance of music in each person's life (or music importance). Subjects were divided into three groups according to their musical expertise: Nonmusicians (N = 103), amateur musicians (N = 44), and professional musicians (N = 50). The results showed that noise sensitivity did not depend on musical expertise or performance on musicality tests or the amount of active (attentive) listening to music. In contrast, it was associated with daily passive listening to music, so that individuals with higher noise sensitivity spent less time in passive (background) listening to music than those with lower sensitivity to noise. Furthermore, noise-sensitive respondents rated music as less important in their life than did individuals with lower sensitivity to noise. The results demonstrate that the special sensitivity of the auditory system derived from musical training does not lead to increased irritability from unwanted sounds. However, the disposition to tolerate contingent musical backgrounds in everyday life depends on the individual's noise sensitivity.
Does background music in a store enhance salespersons' persuasiveness?
Chebat, J C; Vaillant, D; Gélinas-Chebat, C
2000-10-01
Background music has been studied as a key element of store atmosphere in terms of its emotional effects; however, previous studies have also shown that music may have a cognitive influence on consumers. How does music affect salespersons' persuasive efforts within the store? To answer this question, an experimental study was designed to assess the effects of four arousal conditions (no music, and low-, moderately-, and highly arousing music). The pleasure level of the musical pieces was controlled for. Music did not significantly moderate the salespersons' effect on intent to buy, but low and moderately arousing music (and, similarly, low and moderately interesting musical pieces) did significantly influence the acceptance of the salesperson's arguments and the "desire to affiliate," i.e., the willingness to enter into communication.
DISCO: An object-oriented system for music composition and sound design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.; Wright, J. M.
2000-09-05
This paper describes an object-oriented approach to music composition and sound design. The approach unifies the processes of music making and instrument building by using similar logic, objects, and procedures. The composition modules use an abstract representation of musical data, which can be easily mapped onto different synthesis languages or a traditionally notated score. An abstract base class is used to derive classes on different time scales. Objects can be related to act across time scales, as well as across an entire piece, and relationships between similar objects can replicate traditional music operations or introduce new ones. The DISCO (Digital Instrument for Sonification and Composition) system is an open-ended work in progress.
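An abstract-base-class design of the kind described above might look like the following sketch (Python); every class and method name here is invented for illustration and is not DISCO's actual API:

```python
from abc import ABC, abstractmethod

class Event(ABC):
    """Hypothetical abstract base for musical objects at any time scale."""
    def __init__(self, onset, duration):
        self.onset, self.duration = onset, duration

    @abstractmethod
    def render(self):
        """Map the object's abstract data onto a concrete output list."""

class Note(Event):  # finest time scale
    def __init__(self, onset, duration, pitch):
        super().__init__(onset, duration)
        self.pitch = pitch

    def render(self):
        return [(self.onset, self.duration, self.pitch)]

class Section(Event):  # coarser time scale: a container of Events
    def __init__(self, onset, events):
        super().__init__(onset, sum(e.duration for e in events))
        self.events = events

    def render(self):
        return [row for e in self.events for row in e.render()]

    def transposed(self, interval):
        # a traditional music operation expressed as an object relationship
        return Section(self.onset,
                       [Note(e.onset, e.duration, e.pitch + interval)
                        for e in self.events])

phrase = Section(0.0, [Note(0.0, 1.0, 60), Note(1.0, 1.0, 64)])
up_a_fifth = phrase.transposed(7)
```

The point of the sketch is only the shared base class: the same `render` contract serves objects on the note and section time scales, so the same output mapping could target a synthesis language or a notated score.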
ERIC Educational Resources Information Center
Kardos, Leah
2012-01-01
I am a composer, producer, pianist and part-time music lecturer at a Further Education college where I teach composing on Music Technology courses at levels 3 (equivalent to A-level) and 4 (Undergraduate/Foundation Degree). A "Music Technology" course, distinct from a "Music" course, often attracts applicants from diverse musical backgrounds; it…
Music algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2017-07-01
The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. To uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
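Bessel-function series of this kind typically enter through the Jacobi–Anger expansion of the plane-wave factor appearing in MUSIC-type imaging functions. The standard identity is shown below; how it is specialized to the sound-hard arc is the subject of the paper and is not reproduced here:

```latex
e^{\,i x \cos\theta} \;=\; \sum_{n=-\infty}^{\infty} i^{\,n}\, J_{n}(x)\, e^{\,i n \theta},
```

where $J_{n}$ denotes the Bessel function of the first kind of integer order $n$.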
Fast and Loud Background Music Disrupts Reading Comprehension
ERIC Educational Resources Information Center
Thompson, William Forde; Schellenberg, E. Glenn; Letnic, Adriana Katharine
2012-01-01
We examined the effect of background music on reading comprehension. Because the emotional consequences of music listening are affected by changes in tempo and intensity, we manipulated these variables to create four repeated-measures conditions: slow/low, slow/high, fast/low, fast/high. Tempo and intensity manipulations were selected to be…
NASA Astrophysics Data System (ADS)
Iachimciuc, Igor
The dissertation is in two parts, a theoretical study and a musical composition. In Part I the music of György Kurtág is analyzed from the point of view of sound color. A brief description of what is understood by the term sound color, and various ways of achieving specific coloristic effects, are presented in the Introduction. An examination of Kurtág's approaches to the domain of sound color occupies the chapters that follow. The musical examples that are analyzed are selected from Kurtág's different compositional periods, showing a certain consistency in sound color techniques, the most important of which are already present in the String Quartet, Op. 1. The compositions selected for analysis are written for different ensembles, but regardless of the instrumentation, certain principles of the formation and organization of sound color remain the same. Rather than relying on extended instrumental techniques, Kurtág creates a large variety of sound colors using traditional means such as pitch material, register, density, rhythm, timbral combinations, dynamics, texture, spatial displacement of the instruments, and the overall musical context. Each sound color unit in Kurtág's music is a separate entity, conceived as a complete microcosm. Sound color units can either be juxtaposed as contrasting elements, forming sound color variations, or superimposed, often resulting in a Klangfarbenmelodie effect. Some of the same gestural figures (objets trouvés) appear in different compositions, but with significant coloristic modifications. Thus, the principle of sound color variations is not only a strong organizational tool, but also a characteristic stylistic feature of the music of György Kurtág. Part II, Leopard's Path (2010), for flute, clarinet, violin, cello, cimbalom, and piano, is an original composition inspired by the painting of Jesse Allen, a San Francisco based artist. The composition is conceived as a cycle of thirteen short movements. Ten of these movements are
Background Music in Educational Games: Motivational Appeal and Cognitive Impact
ERIC Educational Resources Information Center
Linek, Stephanie B.; Marte, Birgit; Albert, Dietrich
2011-01-01
Most game-designers likely stick to the assumption that background music is a design feature for fostering fun and game play. From a psychological point of view, these (intuitive) aspects act upon the intrinsic motivation and the flow experience of players. However, from a pure cognitive perspective on instructional design, background music could…
Kraus, Nina; Slater, Jessica; Thompson, Elaine C.; Hornickel, Jane; Strait, Dana L.; Nicol, Trent; White-Schwoch, Travis
2014-01-01
The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the
Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis
2014-01-01
The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the
The sound of friction: Real-time models, playability and musical applications
NASA Astrophysics Data System (ADS)
Serafin, Stefania
In most engineering applications, friction, the tangential force between objects in contact, must be removed as a source of noise and instabilities. In musical applications, friction is a desirable component: it is the sound production mechanism of different musical instruments such as bowed strings, musical saws, and rubbed bowls, and of any other sonority produced by interactions between rubbed dry surfaces. The goal of the dissertation is to simulate different instruments whose main excitation mechanism is friction. An efficient yet accurate model of a bowed string instrument, which combines the latest results in violin acoustics with the efficient digital waveguide approach, is provided. In particular, the bowed string physical model proposed uses a thermodynamic friction model in which the finite width of the bow is taken into account; this solution is compared to the recently developed elasto-plastic friction models used in haptics and robotics. Different solutions are also proposed to model the body of the instrument. Models of other, less common instruments driven by friction are also proposed, and the elasto-plastic model is used to provide audio-visual simulations of everyday friction sounds such as squeaking doors and rubbed wine glasses. Finally, playability evaluations and musical applications in which the models have been used are discussed.
Accuracy of cochlear implant recipients in speech reception in the presence of background music.
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-12-01
This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.
Roy, Alexis T; Penninger, Richard T; Pearl, Monica S; Wuerfel, Waldemar; Jiradejvong, Patpong; Carver, Courtney; Buechner, Andreas; Limb, Charles J
2016-02-01
Cochlear implant (CI) electrode arrays typically do not reach the most apical regions of the cochlea that intrinsically encode low frequencies. This may contribute to diminished implant-mediated musical sound quality perception. The objective of this study was to assess the effect of varying degrees of apical cochlear stimulation (measured by angular insertion depth) on musical sound quality discrimination. We hypothesized that increased apical cochlear stimulation would improve low-frequency perception and musical sound quality discrimination. Standard (31.5 mm, n = 17) and medium (24 mm, n = 8) array Med-EL CI users, and normal hearing (NH) listeners (n = 16) participated. Imaging confirmed angular insertion depth. Participants completed a musical discrimination task in which they listened to a real-world musical stimulus (labeled reference) and provided sound quality ratings to versions of the reference, which included a hidden reference and test stimuli with increasing amounts of low-frequency removal. Scores for each CI user were calculated on the basis of how much their ratings differed from NH listeners for each stimulus version. Medium array and standard users had significantly different insertion depths (389.4 ± 64.5 and 583.9 ± 78.5 degrees, respectively; p < 0.001). A significant Pearson's correlation was observed between angular insertion depth and the hidden reference scores (p < 0.05). CI users with greater apical stimulation made sound quality discriminations that more closely resembled those of NH controls for stimuli that contained low frequencies (< 200 Hz of information). These findings suggest that increased apical cochlear stimulation improves musical low-frequency perception, which may provide a more satisfactory music listening experience for CI users.
The effect of musical practice on gesture/sound pairing.
Proverbio, Alice M; Attardo, Lapo; Cozzi, Matteo; Zani, Alberto
2015-01-01
Learning to play a musical instrument is a demanding process requiring years of intense practice. Dramatic changes in brain connectivity, volume, and functionality have been shown in skilled musicians. It is thought that music learning involves the formation of novel audio-visuomotor associations, but not much is known about the gradual acquisition of this ability. In the present study, we investigated whether formal music training enhances audiovisual multisensory processing. To this end, pupils at different stages of education were examined based on the hypothesis that the strength of audio-visuomotor associations would be augmented as a function of the number of years of conservatory study (expertise). The study participants were violin and clarinet students at pre-academic and academic levels, of different chronological ages and ages of acquisition. A violinist and a clarinetist each played the same score, and each participant viewed the video corresponding to his or her instrument. Pitch, intensity, rhythm, and sound duration were matched across instruments. In half of the trials, the soundtrack did not match (in pitch) the corresponding musical gestures. Data analysis indicated a correlation between the number of years of formal training (expertise) and the ability to detect an audiomotor incongruence in music performance (relative to the musical instrument practiced), thus suggesting a direct correlation between knowing how to play and perceptual sensitivity.
37 CFR 255.8 - Public performances of sound recordings and musical works.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., LIBRARY OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES ADJUSTMENT OF ROYALTY PAYABLE... sound recording or the musical work embodied therein, including by means of a digital transmission...
37 CFR 255.8 - Public performances of sound recordings and musical works.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., LIBRARY OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES ADJUSTMENT OF ROYALTY PAYABLE... sound recording or the musical work embodied therein, including by means of a digital transmission...
37 CFR 255.8 - Public performances of sound recordings and musical works.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., LIBRARY OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES ADJUSTMENT OF ROYALTY PAYABLE... sound recording or the musical work embodied therein, including by means of a digital transmission...
37 CFR 255.8 - Public performances of sound recordings and musical works.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., LIBRARY OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES ADJUSTMENT OF ROYALTY PAYABLE... sound recording or the musical work embodied therein, including by means of a digital transmission...
37 CFR 255.8 - Public performances of sound recordings and musical works.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., LIBRARY OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES ADJUSTMENT OF ROYALTY PAYABLE... sound recording or the musical work embodied therein, including by means of a digital transmission...
ERIC Educational Resources Information Center
Zdzinski, Stephen; Dell, Charlene; Gumm, Alan; Rinnert, Nathan; Orzolek, Douglas; Yap, Ching Ching; Cooper, Shelly; Keith, Timothy; Russell, Brian
2015-01-01
The purpose of this study was to examine influences of parental involvement-home music environment, family background, and parenting style factors on success in school music and in school. Participants (N = 1114) were music students in grades 4-12 from six regions of the United States. Data were gathered about parental involvement-home environment…
When music is salty: The crossmodal associations between sound and taste
Guetta, Rachel; Loui, Psyche
2017-01-01
Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population. PMID:28355227
Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts
ERIC Educational Resources Information Center
Delalande, Francois; Cornara, Silvia
2010-01-01
One of the first forms of musical conduct is the exploration of sound sources. When young children produce sounds with any object, these sounds may surprise them, and so they make the sounds again: not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…
On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common
Weninger, Felix; Eyben, Florian; Schuller, Björn W.; Mortillaro, Marcello; Scherer, Klaus R.
2013-01-01
Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning each of these phenomena, but a holistic computational model of affect in sound is still lacking. In turn, for tomorrow’s pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of “the sound that something makes,” in order to evaluate the system’s auditory environment and its own audio output. This article aims at a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we interpret the worth of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In the results, we find that by selection of appropriate descriptors, cross-domain arousal and valence regression is feasible, achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects. PMID:23750144
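The cross-corpus protocol described above (train a regressor on features from one domain, test on annotations from another, and report Pearson correlation) can be sketched with synthetic data. Everything below, including the shared linear feature-to-arousal model and the closed-form ridge learner, is an illustrative assumption, not the article's actual feature set or learning machine.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def pearson(a, b):
    """Pearson correlation between predictions and annotations."""
    return float(np.corrcoef(a, b)[0, 1])

# Two synthetic "domains" (say, music and speech) that share one latent
# feature-to-arousal mapping, each with its own observation noise.
w_true = rng.normal(size=10)
X_music = rng.normal(size=(200, 10))
y_music = X_music @ w_true + 0.3 * rng.normal(size=200)
X_speech = rng.normal(size=(200, 10))
y_speech = X_speech @ w_true + 0.3 * rng.normal(size=200)

# Cross-domain evaluation: train on one domain, test on the other.
w = ridge_fit(X_music, y_music)
r = pearson(X_speech @ w, y_speech)
```

Because the two domains here share the generating model by construction, the cross-domain correlation is high; in the article, the analogous consistency across real speech, music, and sound corpora is an empirical finding, not a given.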
Sekihara, K; Poeppel, D; Marantz, A; Koizumi, H; Miyashita, Y
1997-09-01
This paper proposes a method of localizing multiple current dipoles from spatio-temporal biomagnetic data. The method is based on the multiple signal classification (MUSIC) algorithm and is tolerant of the influence of background brain activity. In this method, the noise covariance matrix is estimated using a portion of the data that contains noise, but does not contain any signal information. Then, a modified noise subspace projector is formed using the generalized eigenvectors of the noise and measured-data covariance matrices. The MUSIC localizer is calculated using this noise subspace projector and the noise covariance matrix. The results from a computer simulation have verified the effectiveness of the method. The method was then applied to source estimation for auditory-evoked fields elicited by syllable speech sounds. The results strongly suggest the method's effectiveness in removing the influence of background activity.
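The noise-subspace idea at the core of MUSIC can be sketched in a few lines. The snippet below implements the classic MUSIC pseudospectrum, not the authors' modified variant (which substitutes generalized eigenvectors of the noise and measured-data covariance matrices to suppress background brain activity); the function names and the toy array geometry in the test are illustrative assumptions.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """Classic MUSIC pseudospectrum.

    R         : (m, m) Hermitian data covariance matrix
    steering  : (m, k) candidate steering vectors, one column per
                candidate source location
    n_sources : assumed number of sources
    """
    # Eigendecomposition; numpy returns eigenvalues in ascending order.
    _, V = np.linalg.eigh(R)
    # Noise subspace: eigenvectors of the (m - n_sources) smallest eigenvalues.
    En = V[:, : R.shape[0] - n_sources]
    # The pseudospectrum peaks where a steering vector is nearly
    # orthogonal to the noise subspace, i.e. at likely source locations.
    proj = En.conj().T @ steering
    return (np.sum(np.abs(steering) ** 2, axis=0)
            / np.sum(np.abs(proj) ** 2, axis=0))
```

For a simulated uniform linear array with two sources, the pseudospectrum peaks at the true source directions; the paper's contribution is making this localizer robust when the "noise" is itself structured brain activity rather than white sensor noise.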
Evangelista, Kevin; Macabasag, Romeo Luis A; Capili, Brylle; Castro, Timothy; Danque, Marilee; Evangelista, Hanzel; Rivero, Jenica Ana; Gonong, Michell Katrina; Diño, Michael Joseph; Cajayon, Sharon
2017-10-28
Previous work on the use of background music suggests conflicting results in various psychological, behavioral, and educational measures. This quasi-experiment examined the effect of integrating classical background music during a lecture on stress, anxiety, and knowledge. A total of 42 nursing students participated in this study. We utilized independent sample t-tests and multivariate analysis of variance to examine the effect of classical background music. Our findings suggest that the presence or absence of classical background music does not affect stress, anxiety, or knowledge scores (Λ = 0.999, F(3, 78) = 0.029, p = 0.993). We draw on the existing literature to explain this non-significant result. Although classical music failed to establish a significant influence on the dependent variables, classical background music during lecture hours can be considered a non-threatening stimulus. We recommend follow-up studies on the role of classical background music in regulating the attention control of nursing students during lecture hours.
ERIC Educational Resources Information Center
Hirokawa, Joy Ondra
2013-01-01
The purpose of this research was to examine the differences in the evaluations of music teachers conducted by individuals with varying backgrounds in music and observation techniques. Part I compared evaluations completed by school administrators and music department leadership. Part II utilized the findings of Part I to create focused and…
The Impact of Background Music on Adult Listeners: A Meta-Analysis
ERIC Educational Resources Information Center
Kampfe, Juliane; Sedlmeier, Peter; Renkewitz, Frank
2011-01-01
Background music has been found to have beneficial, detrimental, or no effect on a variety of behavioral and psychological outcome measures. This article reports a meta-analysis that attempts to summarize the impact of background music. A global analysis shows a null effect, but a detailed examination of the studies that allow the calculation of…
Nonlinear frequency compression: effects on sound quality ratings of speech and music.
Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas
2013-03-01
Frequency lowering technologies offer an alternative amplification solution for severe to profound high-frequency hearing losses. While frequency lowering technologies may improve audibility of high-frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an audiologist in clinical NFC hearing aid fittings for achieving a balance between high-frequency audibility and sound quality.
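The two NFC parameters discussed above can be illustrated with one commonly described formulation of nonlinear frequency compression: frequencies below the cutoff pass unchanged, while the log-frequency distance above the cutoff is divided by the compression ratio. This is a sketch of that general scheme, not the specific algorithm evaluated in the studies; the function name and default parameter values are assumptions.

```python
import numpy as np

def nfc_map(f_in, cutoff=2000.0, ratio=2.0):
    """Map input to output frequencies under nonlinear frequency
    compression: identity below the cutoff, log-domain compression
    by `ratio` above it."""
    f_in = np.atleast_1d(np.asarray(f_in, dtype=float))
    out = f_in.copy()
    hi = f_in > cutoff
    # Above the cutoff, compress the log-frequency distance from it:
    # f_out = cutoff * (f_in / cutoff) ** (1 / ratio)
    out[hi] = cutoff * (f_in[hi] / cutoff) ** (1.0 / ratio)
    return out
```

With a 2 kHz cutoff and a 2:1 ratio, an 8 kHz component maps to 4 kHz. A lower cutoff reshapes more of the spectrum than a higher ratio does, which is consistent with the finding that the cutoff frequency had more impact on sound quality ratings.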
Mercadíe, Lolita; Mick, Gérard; Guétin, Stéphane; Bigand, Emmanuel
2015-10-01
In fibromyalgia, pain symptoms such as hyperalgesia and allodynia are associated with fatigue. Mechanisms underlying such symptoms can be modulated by listening to pleasant music. We expected that listening to music, because of its emotional impact, would have a greater modulating effect on the perception of pain and fatigue in patients with fibromyalgia than listening to nonmusical sounds. To investigate this hypothesis, we carried out a 4-week study in which patients with fibromyalgia listened to either preselected musical pieces or environmental sounds when they experienced pain in active (while carrying out a physical activity) or passive (at rest) situations. Concomitant changes of pain and fatigue levels were evaluated. When patients listened to music or environmental sounds at rest, pain and fatigue levels were significantly reduced after 20 minutes of listening, with no difference of effect magnitude between the two stimuli. This improvement persisted 10 minutes after the end of the listening session. In active situations, pain did not increase in presence of the two stimuli. Contrary to our expectations, music and environmental sounds produced a similar relieving effect on pain and fatigue, with no benefit gained by listening to pleasant music over environmental sounds. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
Bottiroli, Sara; Rosi, Alessia; Russo, Riccardo; Vecchi, Tomaso; Cavallini, Elena
2014-01-01
Background music refers to any music played while the listener is performing another activity. Most studies on this effect have been conducted on young adults, while little attention has been paid to the presence of this effect in older adults. Hence, this study aimed to address this imbalance by assessing the impact of different types of background music on cognitive tasks tapping declarative memory and processing speed in older adults. Overall, background music tended to improve performance over no music and white noise, but not always in the same manner. The theoretical and practical implications of the empirical findings are discussed. PMID:25360112
Sound and vision: visualization of music with a soap film
NASA Astrophysics Data System (ADS)
Gaulon, C.; Derec, C.; Combriat, T.; Marmottant, P.; Elias, F.
2017-07-01
A vertical soap film, freely suspended at the end of a tube, is vibrated by a sound wave that propagates in the tube. If the sound wave is a piece of music, the soap film ‘comes alive’: colours, due to iridescences in the soap film, swirl, split and merge in time with the music (see the snapshots in figure 1 below). In this article, we analyse the rich physics behind these fascinating dynamical patterns: it combines acoustic propagation in a tube, light interference, and the static and dynamic properties of soap films. The interaction between the acoustic wave and the liquid membrane results in capillary waves on the soap film, as well as non-linear effects leading to a non-oscillatory flow of liquid in the plane of the film, which induces several spectacular effects: generation of vortices, diphasic dynamical patterns inside the film, and swelling of the soap film under certain conditions. Each of these effects is associated with a characteristic time scale, which interacts with the characteristic time of the music play. This article shows the richness of those characteristic times that lead to dynamical patterns. Through their artistic interest, the experiments presented in this article provide a tool for popularizing and demonstrating science in the classroom or to a broader audience.
Background music genre can modulate flavor pleasantness and overall impression of food stimuli.
Fiegel, Alexandra; Meullenet, Jean-François; Harrington, Robert J; Humble, Rachel; Seo, Han-Seok
2014-05-01
This study aimed to determine not only whether background music genre can alter food perception and acceptance, but also how the effect of background music varies as a function of the type of food (emotional versus non-emotional foods) and the source of the music performer (single versus multiple performers). The music piece was edited into four genres: classical, jazz, hip-hop, and rock, by either a single or multiple performers. Following consumption of emotional (milk chocolate) or non-emotional food (bell peppers) with the four musical stimuli, participants were asked to rate sensory perception and impression of the food stimuli. Participants liked the food stimuli significantly more while listening to the jazz stimulus than the hip-hop stimulus. Further, the influence of background music on overall impression was present for the emotional food, but not for the non-emotional food. In addition, flavor pleasantness and overall impression of food stimuli differed between music genres arranged by a single performer, but not between those by multiple performers. In conclusion, our findings demonstrate that music genre can alter flavor pleasantness and overall impression of food stimuli. Furthermore, the influence of music genre on food acceptance varies as a function of the type of served food and the source of the music performer. Published by Elsevier Ltd.
Stelling-Konczak, A; van Wee, G P; Commandeur, J J F; Hagenzieker, M
2017-09-01
Listening to music or talking on the phone while cycling, as well as the growing number of quiet (electric) cars on the road, can make the use of auditory cues challenging for cyclists. The present study examined to what extent and in which traffic situations traffic sounds are important for safe cycling. Furthermore, the study investigated the potential safety implications of limited auditory information caused by quiet (electric) cars and by cyclists listening to music or talking on the phone. An Internet survey among 2249 cyclists in three age groups (16-18, 30-40 and 65-70 years old) was carried out to collect information on the following aspects: 1) the auditory perception of traffic sounds, including the sounds of quiet (electric) cars; 2) the possible compensatory behaviours of cyclists who listen to music or talk on their mobile phones; 3) the possible contribution of listening to music and talking on the phone to cycling crashes and incidents. Age differences with respect to those three aspects were analysed. Results show that listening to music and talking on the phone negatively affects perception of sounds crucial for safe cycling. However, taking into account the influence of confounding variables, no relationship was found between the frequency of listening to music or talking on the phone and the frequency of incidents among teenage cyclists. This may be due to cyclists' compensating for the use of portable devices. Listening to music or talking on the phone whilst cycling may still pose a risk in the absence of compensatory behaviour or in a traffic environment with less extensive and less safe cycling infrastructure than the Dutch setting. With the increasing number of quiet (electric) cars on the road, cyclists in the future may also need to compensate for the limited auditory input of these cars. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Ziv, Naomi; Hoftman, Moran; Geyer, Mor
2012-01-01
Background music is often used in ads as a means of persuasion. Previous research has studied the effect of music in advertising using neutral or uncontroversial products. The aim of the studies reported here was to examine the effect of music on the perception of products promoting unethical behavior. Each of the series of three studies described…
Noseworthy, Theodore J; Finlay, Karen
2009-09-01
This research examined the effects of a casino's auditory character on estimates of elapsed time while gambling. More specifically, this study varied whether the sound heard while gambling was ambient casino sound alone or ambient casino sound accompanied by music. The tempo and volume of both the music and ambient sound were varied to manipulate temporal engagement and introspection. One hundred and sixty (males = 91) individuals played slot machines in groups of 5-8, after which they provided estimates of elapsed time. The findings showed that the typical ambient casino auditory environment, which characterizes the majority of gaming venues, promotes understated estimates of elapsed duration of play. In contrast, when music is introduced into the ambient casino environment, it appears to provide a cue of interval from which players can more accurately reconstruct elapsed duration of play. This is particularly the case when the tempo of the music is slow and the volume is high. Moreover, the confidence with which time estimates are held (as reflected by latency of response) is higher in an auditory environment with music than in an environment that is comprised of ambient casino sounds alone. Implications for casino management are discussed.
Parbery-Clark, Alexandra; Anderson, Samira; Hittner, Emily; Kraus, Nina
2012-01-01
Older adults frequently complain that while they can hear a person talking, they cannot understand what is being said; this difficulty is exacerbated by background noise. Peripheral hearing loss cannot fully account for this age-related decline in speech-in-noise ability, as declines in central processing also contribute to this problem. Given that musicians have enhanced speech-in-noise perception, we aimed to define the effects of musical experience on subcortical responses to speech and speech-in-noise perception in middle-aged adults. Results reveal that musicians have enhanced neural encoding of speech in quiet and noisy settings. Enhancements include faster neural response timing, higher neural response consistency, more robust encoding of speech harmonics, and greater neural precision. Taken together, we suggest that musical experience provides perceptual benefits in an aging population by strengthening the underlying neural pathways necessary for the accurate representation of important temporal and spectral features of sound. PMID:23189051
Musical expertise and foreign speech perception
Martínez-Montes, Eduardo; Hernández-Pérez, Heivet; Chobert, Julie; Morgado-Rodríguez, Lisbet; Suárez-Murias, Carlos; Valdés-Sosa, Pedro A.; Besson, Mireille
2013-01-01
The aim of this experiment was to investigate the influence of musical expertise on the automatic perception of foreign syllables and harmonic sounds. Participants were Cuban students with high level of expertise in music or in visual arts and with the same level of general education and socio-economic background. We used a multi-feature Mismatch Negativity (MMN) design with sequences of either syllables in Mandarin Chinese or harmonic sounds, both comprising deviants in pitch contour, duration and Voice Onset Time (VOT) or equivalent that were either far from (Large deviants) or close to (Small deviants) the standard. For both Mandarin syllables and harmonic sounds, results were clear-cut in showing larger MMNs to pitch contour deviants in musicians than in visual artists. Results were less clear for duration and VOT deviants, possibly because of the specific characteristics of the stimuli. Results are interpreted as reflecting similar processing of pitch contour in speech and non-speech sounds. The implications of these results for understanding the influence of intense musical training from childhood to adulthood and of genetic predispositions for music on foreign language perception are discussed. PMID:24294193
Technology for the Sound of Music
NASA Technical Reports Server (NTRS)
1994-01-01
In the early 1960s, during an industry recession, Kaman Aircraft lost several defense contracts. Forced to diversify, the helicopter manufacturer began to manufacture acoustic guitars. Kaman's engineers used special vibration analysis equipment based on aerospace technology. While a helicopter's rotor system is highly susceptible to vibration, which must be reduced or "dampened," vibration enhances a guitar's sound. After two years of vibration analysis, Kaman produced a highly successful instrument. The Ovation guitar is made of fiberglass, which is stronger than traditional rosewood, and is manufactured with adapted aircraft techniques such as jigs and fixtures, reducing labor and assuring quality and cost control. Kaman Music Corporation now has annual sales of $100 million.
Kaganovich, Natalya; Kim, Jihyun; Herring, Caryn; Schumaker, Jennifer; Macpherson, Megan; Weber-Fox, Christine
2013-04-01
Using electrophysiology, we have examined two questions in relation to musical training - namely, whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in the sounds' timbre. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials and voice sounds on 20% of trials; in other blocks, the reverse was true. Participants heard naturally recorded sounds in half of the experimental blocks and their spectrally rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 event-related potential component not only to vocal sounds but also to their never-before-heard spectrally rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians' accuracy tended to suffer less from the change in the sounds' timbre, especially when the deviants were musical notes. This behavioral finding was accompanied by a marginally larger re-orienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension. © 2013 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Residual neural processing of musical sound features in adult cochlear implant users.
Timm, Lydia; Vuust, Peter; Brattico, Elvira; Agrawal, Deepashri; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias
2014-01-01
Auditory processing in general, and music perception in particular, are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing, age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound-feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants' attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the CI users' MMNs to pitch and guitar-timbre deviants were reduced in amplitude and delayed relative to those of the NH controls. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. Furthermore, the MMNs in CI users reflected the behavioral scores from a corresponding discrimination task and were correlated with patients' age and speech intelligibility. Our results suggest that even though CI users do not perform at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who were implanted in adolescence or adulthood.
Reexamination of mood-mediation hypothesis of background-music-dependent effects in free recall.
Isarida, Toshiko K; Kubota, Takayuki; Nakajima, Saki; Isarida, Takeo
2017-03-01
The present study reexamined the mood-mediation hypothesis for explaining background-music-dependent effects in free recall. Experiments 1 and 2 respectively examined tempo- and tonality-dependent effects in free recall, which had been used as evidence for the mood-mediation hypothesis. In Experiments 1 and 2, undergraduates (n = 75 per experiment) incidentally learned a list of 20 unrelated words presented one by one at a rate of 5 s per word and then received a 30-s delayed oral free-recall test. Throughout the study and test sessions, a piece of music was played. At the time of test, one third of the participants received the same piece of music with the same tempo or tonality as at study, one third heard a different piece with the same tempo or tonality, and one third heard a different piece with a different tempo or tonality. Note that the condition of the same piece with a different tempo or tonality was excluded. Furthermore, the number of sampled pieces of background music was increased compared with previous studies. The results showed neither tempo- nor tonality-dependent effects, but only a background-music-dependent effect. Experiment 3 (n = 40) compared the effects of background music with a verbal association task and focal music (only listening to musical selections) on the participants' moods. The results showed that both the music tempo and tonality influenced the corresponding mood dimensions (arousal and pleasantness). These results are taken as evidence against the mood-mediation hypothesis. Theoretical implications are discussed.
Communicating Earth Science Through Music: The Use of Environmental Sound in Science Outreach
NASA Astrophysics Data System (ADS)
Brenner, C.
2017-12-01
The need for increased public understanding and appreciation of Earth science has taken on growing importance over the last several decades. Human society faces critical environmental challenges, both near-term and future, in areas such as climate change, resource allocation, geohazard threat and the environmental degradation of ecosystems. Science outreach is an essential component to engaging both policymakers and the public in the importance of managing these challenges. However, despite considerable efforts on the part of scientists and outreach experts, many citizens feel that scientific research and methods are both difficult to understand and remote from their everyday experience. As perhaps the most accessible of all art forms, music can provide a pathway through which the public can connect to Earth processes. The Earth is not silent: environmental sound can be sampled and folded into musical compositions, either with or without the additional sounds of conventional or electronic instruments. These compositions can be used in conjunction with other forms of outreach (e.g., as soundtracks for documentary videos or museum installations), or simply stand alone as testament to the beauty of geology and nature. As proof of concept, this presentation will consist of a musical composition that includes sounds from various field recordings of wind, swamps, ice and water (including recordings from the inside of glaciers).
Spectral envelope sensitivity of musical instrument sounds.
Gunawan, David; Sen, D
2008-01-01
It is well known that the spectral envelope is a perceptually salient attribute in musical instrument timbre perception. While a number of studies have explored discrimination thresholds for changes to the spectral envelope, the question of how sensitivity varies as a function of center frequency and bandwidth for musical instruments has yet to be addressed. In this paper a two-alternative forced-choice experiment was conducted to observe perceptual sensitivity to modifications made on trumpet, clarinet and viola sounds. The experiment involved attenuating 14 frequency bands for each instrument in order to determine discrimination thresholds as a function of center frequency and bandwidth. The results indicate that perceptual sensitivity is governed by the first few harmonics and sensitivity does not improve when extending the bandwidth any higher. However, sensitivity was found to decrease if changes were made only to the higher frequencies and continued to decrease as the distorted bandwidth was widened. The results are analyzed and discussed with respect to two other spectral envelope discrimination studies in the literature as well as what is predicted from a psychoacoustic model.
Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients.
Gfeller, K; Christ, A; Knutson, J F; Witt, S; Murray, K T; Tyler, R S
2000-01-01
This paper describes the listening habits and musical enjoyment of postlingually deafened adults who use cochlear implants. Sixty-five implant recipients (35 females, 30 males) participated in a survey containing questions about musical background, prior involvement in music, and audiologic success with the implant in various listening circumstances. Responses were correlated with measures of cognition and speech recognition. Sixty-seven implant recipients completed daily diaries (7 consecutive days) in which they reported hours spent in specific music activities. Results indicate a wide range of success with music. In general, people enjoy music less postimplantation than prior to hearing loss. Musical enjoyment is influenced by the listening environment (e.g., a quiet room) and features of the music.
Kim, Gibbeum; Han, Woojae
2018-05-01
The present study estimated the sound pressure levels of various music genres at the volume steps that contemporary smartphones deliver, because these levels put the listener at potential risk of hearing loss. Using six different smartphones (Galaxy S6, Galaxy Note 3, iPhone 5S, iPhone 6, LG G2, and LG G3), the sound pressure levels of three genres of K-pop music (dance-pop, hip-hop, and pop-ballad) and a Billboard pop chart of assorted genres were measured through an earbud, using a sound level meter and an artificial mastoid, at the first "risk" volume step flagged by the smartphones as well as at consecutively higher volumes. Among the six smartphones, the first risk volume step of the Galaxy S6 had the significantly lowest output level (84.1 dBA) and that of the LG G2 the highest (92.4 dBA). As the volume step increased, so did the sound pressure levels. The iPhone 6 was loudest (113.1 dBA) at the maximum volume step. Of the music genres, dance-pop showed the highest output level (91.1 dBA) for all smartphones. Within the frequency range of 20-20,000 Hz, the sound pressure level peaked at 2,000 Hz for all the smartphones. The results showed that the sound pressure levels at either the first risk volume step or the maximum volume step were not the same across smartphone models and music genres, which means that the risk volume sign and its output levels should be unified across devices for their users. In addition, the risk volume steps proposed by the latest smartphone models are high enough to cause noise-induced hearing loss if their users habitually listen to music at those levels.
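To put the reported dBA figures in context, permissible daily listening time can be estimated with the NIOSH recommended exposure limit (85 dBA for 8 hours, with a 3-dB exchange rate). That criterion is an external reference, not part of this study; a minimal sketch:

```python
def niosh_allowed_hours(level_dba, criterion=85.0, exchange_db=3.0):
    """Permissible daily exposure under the NIOSH criterion:
    8 h at `criterion` dBA, halved for every `exchange_db` dB above it."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_db))

# Levels reported in the abstract: first risk volume on the Galaxy S6
# and LG G2, and the iPhone 6 at maximum volume.
for label, level in [("Galaxy S6 first risk", 84.1),
                     ("LG G2 first risk", 92.4),
                     ("iPhone 6 maximum", 113.1)]:
    print(f"{label}: {level} dBA -> {niosh_allowed_hours(level):.2f} h/day")
```

At the iPhone 6 maximum, the criterion allows well under a minute per day, which illustrates why the abstract flags habitual listening at these levels as a noise-induced hearing loss risk.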
Music as Environment: An Ecological and Biosemiotic Approach
Reybrouck, Mark
2014-01-01
This paper provides an attempt to conceive of music in terms of a sounding environment. Starting from a definition of music as a collection of vibrational events, it introduces the distinction between discrete-symbolic representations as against analog-continuous representations of the sounds. The former makes it possible to conceive of music in terms of a Humboldt system, the latter in terms of an experiential approach. Both approaches, further, are not opposed to each other, but are complementary to some extent. There is, however, a distinction to be drawn between the bottom-up approach to auditory processing of environmental sounds and music, which is continuous and proceeding in real time, as against the top-down approach, which is proceeding at a level of mental representation by applying discrete symbolic labels to vibrational events. The distinction is discussed against the background of phylogenetic and ontogenetic claims, with a major focus on the innate auditory capabilities of the fetus and neonate and the gradual evolution from mere sensory perception of sound to sense-making and musical meaning. The latter, finally, is elaborated on the basis of the operational concepts of affordance and functional tone, thus bringing together some older contributions from ecology and biosemiotics. PMID:25545707
NASA Astrophysics Data System (ADS)
Beauchamp, James W.
2002-11-01
Software has been developed which enables users to perform time-varying spectral analysis of individual musical tones, or successions of them, and to perform further processing of the data. The package, called sndan, is freely available in source code, uses EPS graphics for display, and is written in ANSI C for ease of code modification and extension. Two analyzers, a fixed-filter-bank phase vocoder ("pvan") and a frequency-tracking analyzer ("mqan"), constitute the analysis front end of the package. While pvan's output consists of continuous amplitudes and frequencies of harmonics, mqan produces disjoint "tracks." However, another program extracts a fundamental frequency and separates harmonics from the tracks, resulting in a continuous harmonic output. "monan" is a program used to display harmonic data in a variety of formats, perform various spectral modifications, and perform additive resynthesis of the harmonic partials, including possible pitch-shifting and time-scaling. Sounds can also be synthesized according to a musical score using a companion synthesis language, Music 4C. Several other programs in the sndan suite can be used for specialized tasks, such as signal display and editing. Applications of the software include producing specialized sounds for music compositions or psychoacoustic experiments, or serving as a basis for developing new synthesis algorithms.
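The additive resynthesis that monan performs from pvan-style harmonic envelopes can be illustrated with a minimal oscillator-bank sketch. The data layout and function names below are invented for illustration and do not reflect sndan's actual C interfaces:

```python
import math

def additive_resynth(amps, freqs, sr=8000):
    """Sum a bank of sinusoidal oscillators, one per harmonic.
    amps[h][n] and freqs[h][n] give the amplitude and frequency (Hz)
    of harmonic h at sample n; frequency is integrated into phase."""
    n_samples = len(amps[0])
    out = [0.0] * n_samples
    for a_env, f_env in zip(amps, freqs):
        phase = 0.0
        for n in range(n_samples):
            phase += 2.0 * math.pi * f_env[n] / sr  # integrate frequency
            out[n] += a_env[n] * math.sin(phase)
    return out

# Example: two harmonics of a 220 Hz tone with a linear fade-out,
# pitch-shifted by scaling the frequency envelopes (as monan allows).
sr, dur = 8000, 0.1
n = int(sr * dur)
fade = [1.0 - i / n for i in range(n)]
amps = [fade, [0.5 * v for v in fade]]
freqs = [[220.0] * n, [440.0] * n]
shift = 1.5  # pitch-shift ratio
signal = additive_resynth(amps, [[f * shift for f in env] for env in freqs], sr)
```

Because each oscillator integrates its own frequency envelope, time-scaling falls out of the same machinery: stretching the envelopes stretches the sound without changing pitch.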
Background music as a quasi clock in retrospective duration judgments.
Bailey, Nicole; Areni, Charles S
2006-04-01
The segmentation-change model of time perception proposes that individuals engaged in cognitive tasks during a given interval of time retrospectively estimate duration by recalling events that occurred during the interval and inferring each event's duration. Previous research suggests that individuals can recall the number of songs heard during an interval and infer the length of each song, exactly the conditions that foster estimates of duration based on the segmentation-change model. The results of a laboratory experiment indicated that subjects who solved word-search puzzles for 20 min. estimated the duration of the interval to be longer when 8 short songs (<3 min.) as opposed to 4 long songs (6+ min.) were played in the background, regardless of whether the musical format was Contemporary Dance or New Age. Assuming each song represented a distinct segment in memory, these results are consistent with the segmentation-change model. These results suggest that background music may not always reduce estimates of duration by drawing attention away from the passage of time. Instead, background music may actually expand the subjective length of an interval by creating accessible traces in memory, which are retrospectively used to infer duration.
Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S
2011-01-01
Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and mammals. In the present study, the effect of prenatal stimulation with complex rhythmic music on postnatal spatial learning, memory and isolation stress was observed. Auditory stimulation with either music or species-specific sounds, or no stimulation (control), was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks were tested at ages 24, 72 and 120 h on a T-maze for spatial learning, and memory of the learnt task was assessed 24 h after training. In posthatch chicks of all ages, plasma corticosterone levels were estimated following 10 min of isolation. The chicks of all ages in the three groups took less time (p < 0.001) to navigate the maze over the three trials, thereby showing improvement with training. In both sound-stimulated groups, the total time taken to reach the target decreased significantly (p < 0.01) in comparison to the unstimulated control group, indicating facilitation of spatial learning. However, this decline was greater at 24 h than at later posthatch ages. When tested for memory 24 h after training, only the music-stimulated chicks at posthatch age 24 h took a significantly longer (p < 0.001) time to traverse the maze, suggesting a temporary impairment in their retention of the learnt task. In both sound-stimulated groups at 24 h, plasma corticosterone levels were significantly decreased (p < 0.001) and increased thereafter at 72 h (p < 0.001) and 120 h, which may contribute to the differential response in spatial learning. Thus, prenatal auditory stimulation with either species-specific sounds or complex rhythmic music facilitates spatial learning, though the music stimulation transiently impairs postnatal memory. 2011 S. Karger AG, Basel.
Roy, Alexis T; Carver, Courtney; Jiradejvong, Patpong; Limb, Charles J
2015-01-01
Med-El cochlear implant (CI) patients are typically programmed with either the fine structure processing (FSP) or the high-definition continuous interleaved sampling (HDCIS) strategy. FSP is the newer-generation strategy and aims to provide more direct encoding of fine structure information compared with HDCIS. Since fine structure information is extremely important in music listening, FSP may offer improvements in musical sound quality for CI users. Despite widespread clinical use of both strategies, few studies have assessed the possible benefits of the FSP strategy for music perception. The objective of this study was to measure differences in musical sound quality discrimination between the FSP and HDCIS strategies. Musical sound quality discrimination was measured using a previously designed evaluation called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). In this evaluation, participants were required to detect sound quality differences between an unaltered real-world musical stimulus and versions of the stimulus in which various amounts of bass (low) frequency information were removed via a high-pass filter. Eight CI users, currently using the FSP strategy, were enrolled in this study. In the first session, participants completed the CI-MUSHRA evaluation with their FSP strategy. Patients were then programmed with the clinical-default HDCIS strategy, which they used for 2 months to allow for acclimatization. After acclimatization, each participant returned for a second session, during which they were retested with HDCIS and then switched back to their original FSP strategy and tested acutely. Sixteen normal-hearing (NH) controls completed a CI-MUSHRA evaluation for comparison, in which they listened to the music samples under normal acoustic conditions, without CI stimulation. Sensitivity to high-pass filtering more closely resembled that of NH controls when CI users were programmed with the clinical-default FSP strategy.
Perception and Modeling of Affective Qualities of Musical Instrument Sounds across Pitch Registers.
McAdams, Stephen; Douglas, Chelsea; Vempala, Naresh N
2017-01-01
Composers often pick specific instruments to convey a given emotional tone in their music, partly due to their expressive possibilities, but also due to their timbres in specific registers and at given dynamic markings. Of interest to both music psychology and music informatics from a computational point of view is the relation between the acoustic properties that give rise to the timbre at a given pitch and the perceived emotional quality of the tone. Musician and nonmusician listeners were presented with 137 tones produced at a fixed dynamic marking (forte) playing tones at pitch class D# across each instrument's entire pitch range and with different playing techniques for standard orchestral instruments drawn from the brass, woodwind, string, and pitched percussion families. They rated each tone on six analogical-categorical scales in terms of emotional valence (positive/negative and pleasant/unpleasant), energy arousal (awake/tired), tension arousal (excited/calm), preference (like/dislike), and familiarity. Linear mixed models revealed interactive effects of musical training, instrument family, and pitch register, with non-linear relations between pitch register and several dependent variables. Twenty-three audio descriptors from the Timbre Toolbox were computed for each sound and analyzed in two ways: linear partial least squares regression (PLSR) and nonlinear artificial neural net modeling. These two analyses converged in terms of the importance of various spectral, temporal, and spectrotemporal audio descriptors in explaining the emotion ratings, but some differences also emerged. Different combinations of audio descriptors make major contributions to the three emotion dimensions, suggesting that they are carried by distinct acoustic properties. Valence is more positive with lower spectral slopes, a greater emergence of strong partials, and an amplitude envelope with a sharper attack and earlier decay. Higher tension arousal is carried by brighter sounds
Software-Based Scoring and Sound Design: An Introductory Guide for Music Technology Instruction
ERIC Educational Resources Information Center
Walzer, Daniel A.
2016-01-01
This article explores the creative function of virtual instruments, sequencers, loops, and software-based synthesizers to introduce basic scoring and sound design concepts for visual media in an introductory music technology course. Using digital audio workstations with user-focused and configurable options, novice composers can hone a broad range…
ERIC Educational Resources Information Center
Elsas, Diana, Ed.; And Others
Organizations listed here with descriptive information include film music clubs and music guilds and associations. These are followed by a representative list of schools offering film music and/or film sound courses. Sources are listed for soundtrack recordings, sound effects/production music, films on film music, and oral history programs. The…
Musical Understanding, Musical Works, and Emotional Expression: Implications for Education
ERIC Educational Resources Information Center
Elliott, David J.
2005-01-01
What do musicians, critics, and listeners mean when they use emotion-words to describe a piece of instrumental music? How can "pure" musical sounds "express" emotions such as joyfulness, sadness, anguish, optimism, and anger? Sounds are not living organisms; sounds cannot feel emotions. Yet many people around the world believe they hear emotions…
Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk
2017-05-01
Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electro-physiologic response to auditory stimulation that is amplitude-modulated at a specific frequency. By leveraging the phenomenon whereby ASSR is modulated by mind concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores, while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
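A minimal sketch of the feature-extraction and classification pipeline this abstract describes (power at the 38 and 42 Hz modulation frequencies plus their ratio, fed to linear discriminant analysis), run on simulated rather than recorded EEG. The sampling rate, epoch length, and ASSR amplitudes are assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
fs = 512                      # EEG sampling rate (assumed)
t = np.arange(fs * 4) / fs    # 4-second epochs (assumed)

def make_epoch(attended_hz):
    """Simulate one epoch: noise plus a stronger ASSR at the attended
    modulation frequency (38 or 42 Hz) and a weaker one at the other."""
    other = 80 - attended_hz              # 38 <-> 42
    return (rng.normal(scale=1.0, size=t.size)
            + 0.8 * np.sin(2 * np.pi * attended_hz * t)
            + 0.3 * np.sin(2 * np.pi * other * t))

def band_power(x, f0):
    f, pxx = welch(x, fs=fs, nperseg=fs)  # 1 Hz frequency resolution
    return pxx[np.argmin(np.abs(f - f0))]

# Features as in the abstract: power at 38 Hz, power at 42 Hz, and their ratio.
X, y = [], []
for label, f_att in [(0, 38), (1, 42)]:
    for _ in range(40):
        ep = make_epoch(f_att)
        p38, p42 = band_power(ep, 38), band_power(ep, 42)
        X.append([p38, p42, p38 / p42])
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y),
                                      test_size=0.25, random_state=0,
                                      stratify=y)
clf = LinearDiscriminantAnalysis().fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print("held-out accuracy:", acc)
```

In the study the same kind of features were computed per electrode and the attended stream was decoded from which modulation frequency showed the stronger response.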
Küssner, Mats B; de Groot, Annette M B; Hofman, Winni F; Hillen, Marij A
2016-01-01
As tantalizing as the idea that background music beneficially affects foreign vocabulary learning may seem, there is, partly due to a lack of theory-driven research, no consistent evidence to support this notion. We investigated inter-individual differences in the effects of background music on foreign vocabulary learning. Based on Eysenck's theory of personality we predicted that individuals with a high level of cortical arousal should perform worse when learning with background music compared to silence, whereas individuals with a low level of cortical arousal should be unaffected by background music or benefit from it. Participants were tested in a paired-associate learning paradigm consisting of three immediate word recall tasks, as well as a delayed recall task one week later. Baseline cortical arousal assessed with spontaneous EEG measurement in silence prior to the learning rounds was used for the analyses. Results revealed no interaction between cortical arousal and the learning condition (background music vs. silence). Instead, we found an unexpected main effect of cortical arousal in the beta band on recall, indicating that individuals with high beta power learned more vocabulary than those with low beta power. To substantiate this finding we conducted an exact replication of the experiment. Whereas the main effect of cortical arousal was only present in a subsample of participants, a beneficial main effect of background music appeared. A combined analysis of both experiments suggests that beta power predicts the performance in the word recall task, but that there is no effect of background music on foreign vocabulary learning. In light of these findings, we discuss whether searching for effects of background music on foreign vocabulary learning, independent of factors such as inter-individual differences and task complexity, might be a red herring. Importantly, our findings emphasize the need for sufficiently powered research designs and exact replications.
The Effect of Background Music in Shark Documentaries on Viewers' Perceptions of Sharks
Nosal, Andrew P.; Keenan, Elizabeth A.; Hastings, Philip A.; Gneezy, Ayelet
2016-01-01
Despite the ongoing need for shark conservation and management, prevailing negative sentiments marginalize these animals and legitimize permissive exploitation. These negative attitudes arise from an instinctive, yet exaggerated fear, which is validated and reinforced by disproportionate and sensationalistic news coverage of shark ‘attacks’ and by highlighting shark-on-human violence in popular movies and documentaries. In this study, we investigate another subtler, yet powerful factor that contributes to this fear: the ominous background music that often accompanies shark footage in documentaries. Using three experiments, we show that participants rated sharks more negatively and less positively after viewing a 60-second video clip of swimming sharks set to ominous background music, compared to participants who watched the same video clip set to uplifting background music, or silence. This finding was not an artifact of soundtrack alone because attitudes toward sharks did not differ among participants assigned to audio-only control treatments. This is the first study to demonstrate empirically that the connotative attributes of background music accompanying shark footage affect viewers’ attitudes toward sharks. Given that nature documentaries are often regarded as objective and authoritative sources of information, it is critical that documentary filmmakers and viewers are aware of how the soundtrack can affect the interpretation of the educational content. PMID:27487003
An Investigation of the Role of Background Music in IVWs for Learning
ERIC Educational Resources Information Center
Richards, Debbie; Fassbender, Eric; Bilgin, Ayse; Thompson, William Forde
2008-01-01
Empirical evidence is needed to corroborate the intuitions of gamers and game developers in understanding the benefits of Immersive Virtual Worlds (IVWs) as a learning environment and the role that music plays within these environments. We report an investigation to determine if background music of the genre typically found in computer-based…
Genomics studies on musical aptitude, music perception, and practice.
Järvelä, Irma
2018-03-23
When searching for genetic markers inherited together with musical aptitude, genes affecting inner ear development and brain function were identified. The alpha-synuclein gene (SNCA), located in the most significant linkage region of musical aptitude, was overexpressed when listening and performing music. The GATA-binding protein 2 gene (GATA2) was located in the best associated region of musical aptitude and regulates SNCA in dopaminergic neurons, thus linking DNA- and RNA-based studies of music-related traits together. In addition to SNCA, several other genes were linked to dopamine metabolism. Mutations in SNCA predispose to Lewy-body dementia and cause Parkinson disease in humans and affect song production in songbirds. Several other birdsong genes were found in transcriptome analysis, suggesting a common evolutionary background of sound perception and production in humans and songbirds. Regions of positive selection with musical aptitude contained genes affecting auditory perception, cognitive performance, memory, human language development, and song perception and production of songbirds. The data support the role of dopaminergic pathway and their link to the reward mechanism as a molecular determinant in positive selection of music. Integration of gene-level data from the literature across multiple species prioritized activity-dependent immediate early genes as candidate genes in musical aptitude and listening to and performing music. © 2018 New York Academy of Sciences.
2014-01-01
Background Whether listening to background music enhances verbal learning performance is still a matter of dispute. In this study we investigated the influence of vocal and instrumental background music on verbal learning. Methods 226 subjects were randomly assigned to one of five groups (one control group and 4 experimental groups). All participants were exposed to a verbal learning task. The control group learned without background music while the 4 experimental groups were exposed to vocal or instrumental musical pieces during learning with different subjective intensity and valence. Thus, we employed 4 music listening conditions (vocal music with high intensity: VOC_HIGH, vocal music with low intensity: VOC_LOW, instrumental music with high intensity: INST_HIGH, instrumental music with low intensity: INST_LOW) and one control condition (CONT) during which the subjects learned the word lists. Since the high and low intensity groups did not differ in terms of rated intensity during the main experiment, these groups were lumped together. Thus, we worked with 3 groups: one control group and two groups that were exposed to vocal or instrumental background music during verbal learning. As dependent variable, the number of learned words was used. We measured immediate recall during five learning sessions (recall 1 – recall 5) and delayed recall 15 minutes (recall 6) and 14 days (recall 7) after the last learning session. Results Verbal learning improved during the first 5 recall sessions without any strong difference between the control and experimental groups. The delayed recalls were also similar for the three groups. There was only a trend for attenuated verbal learning in the group that passively listened to vocal music. This learning attenuation diminished during the following learning sessions. Conclusions The exposure to vocal or
ERIC Educational Resources Information Center
Dillon, Steve; Adkins, Barbara; Brown, Andrew; Hirche, Kathy
2009-01-01
In this article, we examine the affordances of the concept of "network jamming" as a means of facilitating social and cultural interaction, that provides a basis for unified communities that use sound and visual media as their key expressive medium. This article focuses upon the development of a means of measuring social and musical benefit…
Music, madness and the body: symptom and cure.
MacKinnon, Dolly
2006-03-01
Building on Sander L. Gilman's exemplary work on images of madness and the body, this article examines images of music, madness and the body by discussing the persistent cultural beliefs stemming from Classical Antiquity that underpin music as medicinal. These images reflect the body engaged in therapeutic musical activities, as well as musical sounds forming part of the evidence of the mental diagnostic state of a patient in case records. The historiography of music as medicinal has been overlooked in the history of psychiatry. This article provides a brief background to the cultural beliefs that underlie examples of music as both symptom and cure in 19th- and 20th-century asylum records in Australia, Britain, Europe and North America.
Attention Drainage Effect: How Background Music Effects Concentration in Taiwanese College Students
ERIC Educational Resources Information Center
Chou, Peter Tze-Ming
2010-01-01
The purpose of this study was to see whether different types of background music affect the performance of a reading comprehension task in Taiwanese college students. There are two major research questions in this study. First, this study tries to find out whether listening to music affects the learner's concentration when doing a task…
Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals
Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro
2012-01-01
Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis on the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, these database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
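The rank-frequency analysis reported here can be illustrated on toy data. The "code-words" below are drawn from an idealized Zipfian distribution rather than computed from audio, so only the fitting procedure, not the corpus, mirrors the study:

```python
import collections
import numpy as np

rng = np.random.default_rng(2)

# Toy "timbral code-words": symbols drawn from an idealized Zipfian
# distribution with exponent 1 (the real code-words are psychoacoustically
# encoded short-time spectra; only the rank-frequency fit is mirrored here).
n_types = 1000
p = 1.0 / np.arange(1, n_types + 1)
p /= p.sum()
sample = rng.choice(n_types, size=200_000, p=p)

# Rank-frequency distribution: counts sorted in decreasing order.
counts = np.array(sorted(collections.Counter(sample).values(), reverse=True))
ranks = np.arange(1, counts.size + 1)

# Zipf exponent from a log-log least-squares fit over the head of the
# distribution (tail ranks are noisy at finite sample size).
head = slice(0, 100)
slope, _ = np.polyfit(np.log(ranks[head]), np.log(counts[head]), 1)
print("estimated Zipf exponent:", round(-slope, 2))
```

An exponent near one on a log-log rank-frequency plot is the signature the paper reports across speech, music, and environmental sound corpora.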
Katagiri, June
2009-01-01
The purpose of this study was to examine the effect of background music and song texts to teach emotional understanding to children with autism. Participants were 12 students (mean age 11.5 years) with a primary diagnosis of autism who were attending schools in Japan. Each participant was taught four emotions to decode and encode (happiness, sadness, anger, and fear) in a counterbalanced treatment order. The treatment consisted of four conditions: (a) no contact control (NCC)--no purposeful teaching of the selected emotion, (b) contact control (CC)--teaching the selected emotion using verbal instructions alone, (c) background music (BM)--teaching the selected emotion by verbal instructions with background music representing the emotion, and (d) singing songs (SS)--teaching the selected emotion by singing specially composed songs about the emotion. Participants were given a pretest and a posttest and received 8 individual sessions between these tests. The results indicated that all participants improved significantly in their understanding of the four selected emotions. Background music was significantly more effective than the other three conditions in improving participants' emotional understanding. The findings suggest that background music can be an effective tool to increase emotional understanding in children with autism, which is crucial to their social interactions.
Social Theory, and Music and Music Education as Praxis
ERIC Educational Resources Information Center
Regelski, Thomas A.
2004-01-01
The idea of praxis, and thus the idea of music as praxis, is not widely known in the fields of music and music education. Nonetheless, musicians and music teachers typically take for granted as sacrosanct the noble sounding, metaphysical, even spiritual profundity of music hypothesized by mainstream aesthetic philosophies. Thus accounts of music…
Using Background Music To Enhance Memory and Improve Learning.
ERIC Educational Resources Information Center
Anderson, Scheree; Henke, Jeanette; McLaughlin, Maureen; Ripp, Mary; Tuffs, Patricia
This report describes a program to enhance spelling word retention through the use of background music. The targeted population consisted of elementary students in three middle class communities located in the southwestern suburbs of Chicago. The problems for poor spelling retention were documented through data revealing the number of students…
The Sound of the Microwave Background
NASA Astrophysics Data System (ADS)
Whittle, M.
2004-05-01
One of the most impressive developments in modern cosmology has been the measurement and analysis of the tiny fluctuations seen in the cosmic microwave background (CMB) radiation. When discussing these fluctuations, cosmologists frequently refer to their acoustic nature -- sound waves moving through the hot gas appear as peaks and troughs when they cross the surface of last scattering. As is now well known, recent observations quantify the amplitudes of these waves over several octaves, revealing a fundamental tone with several harmonics, whose relative strengths and pitches reveal important cosmological parameters, including global curvature. Not surprisingly, these results have wonderful pedagogical value in educating and inspiring both students and the general public. To further enhance this educational experience, I have attempted what might seem rather obvious, namely converting the CMB power spectrum into an audible sound. By raising the pitch some 50 octaves so that the fundamental falls at 200 Hz (matching its harmonic "l" value), we hear the resulting sound as a loud hissing roar. Matching the progress in observational results has been an equally impressive development of the theoretical treatment of CMB fluctuations. Using available computer simulations (e.g. CMBFAST) it is possible to recreate the subtly different sounds generated by different kinds of universe (e.g. different curvature or baryon content). Pushing further, one can generate the "true" sound, characterized by P(k), rather than the "observed" sound, characterized by C(l). From P(k), we learn that the fundamental and harmonics are offset, yielding a chord somewhere between a major and minor third. A sequence of models also allows one to follow the growth of sound during the first megayear: a descending scream, changing into a deepening roar, with subsequent growing hiss; matching the increase in wavelength caused by universal expansion, followed by the post recombination flow of gas into
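A rough sketch of the sonification idea, assuming an illustrative hand-made spectrum (not a real C_l curve): multipole l is mapped directly to frequency in Hz so the fundamental lands near the l value of the first peak, and the spectrum weights a bank of random-phase sinusoids to produce a hiss-like roar:

```python
import numpy as np

fs = 44100                       # audio sampling rate
dur = 2.0                        # seconds of sound
t = np.arange(int(fs * dur)) / fs

# Hand-made, CMB-like power spectrum: a fundamental peak near l = 220
# with weaker "harmonics" (illustrative shape only, not real data).
l = np.arange(2, 1200)
cl = 0.02 + sum(a * np.exp(-0.5 * ((l - l0) / 60.0) ** 2)
                for l0, a in [(220, 1.0), (540, 0.5), (810, 0.35)])

# Map multipole l directly to frequency in Hz (l = 220 -> 220 Hz), use
# sqrt(power) as amplitude, and randomize phases for a noise-like texture.
rng = np.random.default_rng(3)
phases = rng.uniform(0, 2 * np.pi, l.size)
sound = np.zeros(t.size)
for f, a, ph in zip(l[::4], np.sqrt(cl)[::4], phases[::4]):
    sound += a * np.sin(2 * np.pi * f * t + ph)
sound /= np.abs(sound).max()     # normalize to [-1, 1] for playback
```

Writing `sound` to a WAV file (e.g. with the standard-library `wave` module) would make the result audible; swapping in a simulated C(l) curve from a Boltzmann code would reproduce the "different universes sound different" demonstration.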
NASA Astrophysics Data System (ADS)
Goad, Pamela Joy
The fusion of musical voices is an important aspect of musical blend, or the mixing of individual sounds. Yet, little research has been done to explicitly determine the factors involved in fusion. In this study, the similarity of timbre and modulation were examined for their contribution to the fusion of sounds. It is hypothesized that similar timbres will fuse better than dissimilar timbres, and, voices with the same kind of modulation will fuse better than voices of different modulations. A perceptually-based measure, known as sharpness was investigated as a measure of timbre. The advantages of using sharpness are that it is based on hearing sensitivities and masking phenomena of inner ear processing. Five musical instrument families were digitally recorded in performances across a typical playing range at two extreme dynamic levels. Analyses reveal that sharpness is capable of uncovering subtle changes in timbre including those found in musical dynamics, instrument design, and performer-specific variations. While these analyses alone are insufficient to address fusion, preliminary calculations of timbral combinations indicate that sharpness has the potential to predict the fusion of sounds used in musical composition. Three experiments investigated the effects of modulation on the fusion of a harmonic major sixth interval. In the first experiment using frequency modulation, stimuli varied in deviation about a mean fundamental frequency and relative modulation phase between the two tones. Results showed smaller frequency deviations promoted fusion and relative phase differences had a minimal effect. In a second experiment using amplitude modulation, stimuli varied in deviation about a mean amplitude level and relative phase of modulation. Results showed smaller amplitude deviations promoted better fusion, but unlike frequency modulation, relative phase differences were also important. In a third experiment, frequency modulation, amplitude modulation and mixed
Concert halls with strong and lateral sound increase the emotional impact of orchestra music.
Pätynen, Jukka; Lokki, Tapio
2016-03-01
An audience's auditory experience during a thrilling and emotive live symphony concert is an intertwined combination of the music and the acoustic response of the concert hall. Music in itself is known to elicit emotional pleasure, and at best, listening to music may evoke concrete psychophysiological responses. Certain concert halls have gained a reputation for superior acoustics, but despite the continuous research by a multitude of objective and subjective studies on room acoustics, the fundamental reason for the appreciation of some concert halls remains elusive. This study demonstrates that room acoustic effects contribute to the overall emotional experience of a musical performance. In two listening tests, the subjects listen to identical orchestra performances rendered in the acoustics of several concert halls. The emotional excitation during listening is measured in the first experiment, and in the second test, the subjects assess the experienced subjective impact by paired comparisons. The results showed that the sound of some traditional rectangular halls provides greater psychophysiological responses and subjective impact. These findings provide a quintessential explanation for these halls' success and reveal the overall significance of room acoustics for emotional experience in music performance.
Raitanen, Jani; Husu, Pauliina; Kujala, Urho M.; Luoto, Riitta M.
2018-01-01
Objectives The purpose of this study was to examine whether mothers' musical background has an effect on their own and their children's sedentary behavior (SB) and physical activity (PA). The aim was also to assess children's and their mothers' exercise adherence when using a movement-to-music video program. Design Sub-group analysis of an intervention group in a randomized controlled trial (ISRCTN33885819). Method Seventy-one mother-child pairs were divided into two categories based on mothers' musical background. Each pair performed an 8-week exercise intervention using a movement-to-music video program. SB and PA were assessed objectively by accelerometer, and exercise activity, fidelity, and enjoyment were assessed via exercise diaries and questionnaires. A logistic regression model was used to analyze associations in the main outcomes between the groups. Results Those children whose mothers had a musical background (MB) were more likely to increase their light PA during the intervention, but not their moderate-to-vigorous PA, compared to children whose mothers had no musical background (NMB). SB increased in both groups. Mothers in the NMB group were more likely to increase their light and moderate-to-vigorous PA and decrease their SB than mothers in the MB group. However, exercise adherence decreased considerably in all groups. Completeness, fidelity, and enjoyment were higher in the NMB group than in the MB group. Conclusions The present results showed that mothers without a musical background, as well as their children, were more interested in movement-to-music exercises. For further studies it would be important to evaluate the effect of children's own music-based activities on their SB and PA. PMID:29668726
"Sounds of Intent", Phase 2: Gauging the Music Development of Children with Complex Needs
ERIC Educational Resources Information Center
Ockelford, A.; Welch, G.; Jewell-Gore, L.; Cheng, E.; Vogiatzoglou, A.; Himonides, E.
2011-01-01
This article reports the latest phase of research in the "Sounds of intent" project, which is seeking, as a long-term goal, to map musical development in children and young people with severe, or profound and multiple learning difficulties (SLD or PMLD). Previous exploratory work had resulted in a framework of six putative…
Measuring Neural Entrainment to Beat and Meter in Infants: Effects of Music Background.
Cirelli, Laura K; Spinelli, Christina; Nozaradan, Sylvie; Trainor, Laurel J
2016-01-01
Caregivers often engage in musical interactions with their infants. For example, parents across cultures sing lullabies and playsongs to their infants from birth. Behavioral studies indicate that infants not only extract beat information, but also group these beats into metrical hierarchies by as early as 6 months of age. However, it is not known how this is accomplished in the infant brain. An EEG frequency-tagging approach has been used successfully with adults to measure neural entrainment to auditory rhythms. The current study is the first to use this technique with infants in order to investigate how infants' brains encode rhythms. Furthermore, we examine how infant and parent music background is associated with individual differences in rhythm encoding. In Experiment 1, EEG was recorded while 7-month-old infants listened to an ambiguous rhythmic pattern that could be perceived to be in two different meters. In Experiment 2, EEG was recorded while 15-month-old infants listened to a rhythmic pattern with an unambiguous meter. In both age groups, information about music background (parent music training, infant music classes, hours of music listening) was collected. Both age groups showed clear EEG responses frequency-locked to the rhythms, at frequencies corresponding to both beat and meter. For the younger infants (Experiment 1), the amplitudes at duple meter frequencies were selectively enhanced for infants enrolled in music classes compared to those who had not engaged in such classes. For the older infants (Experiment 2), amplitudes at beat and meter frequencies were larger for infants with musically-trained compared to musically-untrained parents. These results suggest that the frequency-tagging method is sensitive to individual differences in beat and meter processing in infancy and could be used to track developmental changes.
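The frequency-tagging measurement described here can be sketched on simulated EEG. The sampling rate, recording length, beat/meter frequencies, and oscillation amplitudes below are all assumptions for illustration, not values from the study:

```python
import numpy as np

fs = 250                      # EEG sampling rate (assumed)
dur = 60                      # seconds of recording (assumed)
t = np.arange(fs * dur) / fs
beat_hz, meter_hz = 2.4, 1.2  # beat and duple-meter frequencies (assumed)

rng = np.random.default_rng(4)
# Simulated EEG: random-walk (1/f-like) noise plus entrained oscillations
# at the beat and meter frequencies; amplitudes are fabricated.
noise = np.cumsum(rng.normal(size=t.size))
noise = (noise - noise.mean()) / noise.std()
eeg = (noise
       + 0.5 * np.cos(2 * np.pi * beat_hz * t)
       + 0.3 * np.cos(2 * np.pi * meter_hz * t))

# Frequency tagging: take the amplitude spectrum, read off the tagged
# bins, and subtract neighbouring bins to remove broadband noise.
spec = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def tagged_amp(f0):
    i = np.argmin(np.abs(freqs - f0))
    neighbours = np.r_[spec[i - 12:i - 2], spec[i + 3:i + 13]].mean()
    return spec[i] - neighbours

print("beat amplitude:", tagged_amp(beat_hz))
print("meter amplitude:", tagged_amp(meter_hz))
```

Comparing the noise-corrected amplitudes at beat versus meter frequencies across groups is the kind of contrast the study uses to index individual differences in rhythm encoding.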
The effect of background music on the taste of wine.
North, Adrian C
2012-08-01
Research concerning cross-modal influences on perception has neglected auditory influences on perceptions of non-auditory objects, although a small number of studies indicate that auditory stimuli can influence perceptions of the freshness of foodstuffs. Consistent with this, the results reported here indicate that independent groups' ratings of the taste of the wine reflected the emotional connotations of the background music played while they drank it. These results indicate that the symbolic function of auditory stimuli (in this case music) may influence perception in other modalities (in this case gustation); and are discussed in terms of possible future research that might investigate those aspects of music that induce such effects in a particular manner, and how such effects might be influenced by participants' pre-existing knowledge and expertise with regard to the target object in question. ©2011 The British Psychological Society.
The effect of background music on the perception of personality and demographics.
Lastinger, Daniel L
2011-01-01
This study seeks to discover the stereotypes people may have about different music genres and whether these stereotypes are projected onto an individual. It also investigates whether music therapy students are more or less biased than non-music majors in this regard. Subjects (N=388) comprised student members of the American Music Therapy Association (N=182) and students from a college in the southeastern United States who were not music majors (N=206). Subjects were asked to listen to a recording and complete a short survey. Subjects assigned to the control condition heard only a person reading a script. Subjects assigned to one of the four experimental conditions heard the same recording mixed with background music and ambient crowd noise, intended to simulate a live performance. Subjects were asked to rate the person in the recording on personality descriptors and to predict demographic information in the survey. Many of the survey responses were significantly affected by the genre of music. For example, in the presence of rap or country music, all subjects rated the personality of the person in the recording significantly more negatively than in the presence of classical, jazz, or no music. There were no significant differences between college students and AMTA student members for any variable or condition.
NASA Astrophysics Data System (ADS)
Probst, Ron N.; Rypka, Dann
2005-09-01
Pre-engineered and manufactured sound isolation rooms were developed to ensure guaranteed sound isolation while offering the unique ability to be disassembled and relocated without loss of acoustic performance. Case studies of pre-engineered sound isolation rooms used for music practice and various radio broadcast purposes are highlighted. Three prominent universities wrestle with the challenges of growth and expansion while responding to the specialized acoustic requirements of these spaces. Reduced state funding for universities requires close examination of all options while ensuring sound isolation requirements are achieved. Changing curriculum, renovation, and new construction make pre-engineered and manufactured rooms with guaranteed acoustical performance good investments now and for the future. An added benefit is the optional integration of active acoustics to provide simulations of other spaces or venues along with the benefit of sound isolation.
Data sonification and sound visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.; Wiebel, E.
1999-07-01
Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.
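As an illustration of data-driven sound synthesis (a toy mapping, not the Diass instrument itself), one might map each data value onto the pitch of a short sine tone; all parameters below are illustrative assumptions:

```python
import numpy as np

def sonify(values, fs=8000, base=220.0, dur=0.25):
    """Map each data value to the pitch of a short sine tone.

    Values are scaled onto one octave above `base`; this is an
    illustrative mapping, not the Diass instrument itself.
    """
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min() + 1e-12)   # scale to 0..1
    tones = []
    for x in norm:
        f = base * 2 ** x                                 # one-octave range
        t = np.arange(int(fs * dur)) / fs
        env = np.hanning(t.size)                          # click-free edges
        tones.append(env * np.sin(2 * np.pi * f * t))
    return np.concatenate(tones)

audio = sonify([1.0, 3.0, 2.0, 5.0])   # four data points -> four tones
```

Higher data values produce higher pitches, so trends in the data become audible contours.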
Practiced musical style shapes auditory skills.
Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari
2012-04-01
Musicians' processing of sounds depends highly on instrument, performance practice, and level of expertise. Here, we measured the mismatch negativity (MMN), a preattentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, and rock/pop) and in nonmusicians using a novel, fast, and musical sounding multifeature MMN paradigm. We found MMN to all six deviants, showing that MMN paradigms can be adapted to resemble a musical context. Furthermore, we found that jazz musicians had larger MMN amplitude than all other experimental groups across all sound features, indicating greater overall sensitivity to auditory outliers. Furthermore, we observed a tendency toward shorter latency of the MMN to all feature changes in jazz musicians compared to band musicians. These findings indicate that the characteristics of the style of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in music. © 2012 New York Academy of Sciences.
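The MMN itself is conventionally computed as a difference wave: the average deviant event-related response minus the average standard response. A hedged sketch on simulated single-electrode epochs (all signal shapes, rates, and trial counts are illustrative, not the paradigm's actual parameters):

```python
import numpy as np

def mmn_difference(standard_epochs, deviant_epochs):
    """MMN difference wave: mean deviant ERP minus mean standard ERP.
    Epochs are (trials x samples) arrays from one electrode."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

rng = np.random.default_rng(0)
fs, n = 500, 250                       # 500 Hz sampling, 0.5 s epochs
t = np.arange(n) / fs
erp = np.exp(-((t - 0.1) / 0.03) ** 2)                  # generic evoked response
mmn_true = -0.5 * np.exp(-((t - 0.15) / 0.04) ** 2)     # extra negativity ~150 ms
std_ep = erp + rng.normal(0, 0.2, (200, n))             # frequent standards
dev_ep = erp + mmn_true + rng.normal(0, 0.2, (50, n))   # rare deviants
diff = mmn_difference(std_ep, dev_ep)
peak_latency = t[np.argmin(diff)]      # latency of the most negative point
```

Averaging cancels the shared evoked response and the trial noise, leaving the deviance-related negativity whose amplitude and latency the groups above are compared on.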
The musical brain: brain waves reveal the neurophysiological basis of musicality in human subjects.
Tervaniemi, M; Ilvonen, T; Karma, K; Alho, K; Näätänen, R
1997-04-18
To reveal neurophysiological prerequisites of musicality, auditory event-related potentials (ERPs) were recorded from musical and non-musical subjects, musicality being here defined as the ability to temporally structure auditory information. Instructed to read a book and to ignore sounds, subjects were presented with a repetitive sound pattern with occasional changes in its temporal structure. The mismatch negativity (MMN) component of ERPs, indexing the cortical preattentive detection of change in these stimulus patterns, was larger in amplitude in musical than non-musical subjects. This amplitude enhancement, indicating more accurate sensory memory function in musical subjects, suggests that even the cognitive component of musicality, traditionally regarded as depending on attention-related brain processes, in fact, is based on neural mechanisms present already at the preattentive level.
NASA Astrophysics Data System (ADS)
Cook, Perry R.
This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).
ERIC Educational Resources Information Center
Beegle, Amy C.
2012-01-01
Access to world music resources such as videos and sound recordings have increased with the advent of YouTube and the efforts of music educators working closely with ethnomusicologists to provide more detailed visual and audio information about various musical practices. This column discusses some world music resources available for music…
Lehmann, Janina A. M.; Seufert, Tina
2017-01-01
This study investigates how background music influences learning with respect to three different theoretical approaches. Both the Mozart effect as well as the arousal-mood-hypothesis indicate that background music can potentially benefit learning outcomes. While the Mozart effect assumes a direct influence of background music on cognitive abilities, the arousal-mood-hypothesis assumes a mediation effect over arousal and mood. However, the seductive detail effect indicates that seductive details such as background music worsen learning. Moreover, as working memory capacity has a crucial influence on learning with seductive details, we also included the learner’s working memory capacity as a factor in our study. We tested 81 college students using a between-subject design with half of the sample listening to two pop songs while learning a visual text and the other half learning in silence. We included working memory capacity in the design as a continuous organism variable. Arousal and mood scores before and after learning were collected as potential mediating variables. To measure learning outcomes we tested recall and comprehension. We did not find a mediation effect between background music and arousal or mood on learning outcomes. In addition, for recall performance there were no main effects of background music or working memory capacity, nor an interaction effect of these factors. However, when considering comprehension we did find an interaction between background music and working memory capacity: the higher the learners’ working memory capacity, the better they learned with background music. This is in line with the seductive detail assumption. PMID:29163283
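The reported music × working-memory interaction corresponds to a moderation term in a linear model. A sketch on simulated data using ordinary least squares via NumPy (the coefficients are invented for illustration; only the pattern, a positive interaction, echoes the reported result):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 81                                    # sample size as in the study
music = rng.integers(0, 2, n)             # 0 = silence, 1 = background music
wmc = rng.normal(0, 1, n)                 # working memory capacity (z-scored)

# Simulated comprehension score with a positive music x WMC interaction
# (all coefficients are illustrative assumptions).
y = 5 + 0.0 * music + 0.3 * wmc + 0.8 * music * wmc + rng.normal(0, 0.5, n)

# Design matrix: intercept, music, WMC, and the interaction term.
X = np.column_stack([np.ones(n), music, wmc, music * wmc])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction = beta[3]                     # positive: music helps high-WMC learners
```

A positive interaction coefficient means the slope of comprehension on working memory capacity is steeper in the music condition, which is exactly the pattern the abstract describes.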
ERIC Educational Resources Information Center
de Groot, Annette M. B.; Smedinga, Hilde E.
2014-01-01
Participants learned foreign vocabulary by means of the paired-associates learning procedure in three conditions: (a) in silence, (b) with vocal music with lyrics in a familiar language playing in the background, or (c) with vocal music with lyrics in an unfamiliar language playing in the background. The vocabulary to learn varied in concreteness…
Hearing the music in the spectrum of hydrogen
NASA Astrophysics Data System (ADS)
LoPresto, Michael C.
2016-03-01
Throughout a general education course on sound and light aimed at music and art students, analogies between subjective perceptions of objective properties of sound and light waves are a recurring theme. Demonstrating that the pitch and loudness of musical sounds are related to the frequency and intensity of a sound wave is simple and students are easily able to draw the analogies with the color and brightness of light. When considering an entire spectrum, the presence of multiple frequencies and wavelengths of different intensities is perceived by the ear as sound quality, or musical timbre, while the perception of the eye is the tone or hue of a color. What follows is a description of a demonstration that draws the analogy between musical sound quality and the tone or hue of light in which the emission spectrum of hydrogen is considered and actually played as a musical chord.
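The demonstration can be approximated in code: convert the visible Balmer-series wavelengths of hydrogen to frequencies, divide by a common scale factor to bring them into the audible range, and sum the resulting sines into a chord. The scale factor of 10^12 is an assumption chosen for illustration:

```python
import numpy as np

C = 2.998e8                                 # speed of light, m/s
balmer_nm = [656.3, 486.1, 434.0, 410.2]    # visible hydrogen lines, nm

def hydrogen_chord(scale=1e12, fs=44100, dur=2.0):
    """Scale the Balmer-line frequencies down into the audible range and
    sum them into one 'chord' (scale factor chosen for illustration)."""
    freqs = [C / (lam * 1e-9) / scale for lam in balmer_nm]
    t = np.arange(int(fs * dur)) / fs
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return freqs, chord / len(freqs)        # normalize to [-1, 1]

freqs, chord = hydrogen_chord()
```

The 656.3 nm red line lands near 457 Hz under this scaling, and the relative spacing of the four tones preserves the frequency ratios of the spectral lines, which is what gives the "chord" its characteristic quality.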
Background music as a risk factor for distraction among young-novice drivers.
Brodsky, Warren; Slor, Zack
2013-10-01
There are countless beliefs about the power of music during driving. The last thing one would think about is: how safe is it to listen or sing along to music? Unfortunately, collisions linked to music devices have been known for some time; adjusting the radio controls, swapping tape cassettes and compact discs, or searching through MP3 files are all forms of distraction that can result in a near-crash or crash. While the decrement of vehicular performance can also occur from capacity interference to central attention, whether music listening itself is a contributing factor to distraction is relatively unknown. The current study explored the effects of driver-preferred music on driver behavior. 85 young-novice drivers completed six trips in an instrumented learner's vehicle. The study found that all participants committed at least 3 driver deficiencies; 27 needed a verbal warning/command and 17 required a steering or braking intervention to prevent an accident. While there were elevated positive moods and enjoyment for trips with driver-preferred music, this background also produced the most frequent severe driver miscalculations and inaccuracies, violations, and aggressive driving. However, trips with music structurally designed to generate moderate levels of perceptual complexity improved driver behavior and increased driver safety. The study is the first within-subjects, on-road, high-dose, double-exposure clinical-trial investigation of musical stimuli on driver behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.
Using therapeutic sound with progressive audiologic tinnitus management.
Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A
2008-09-01
Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound-broadly categorized as environmental sound, music, and speech-resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).
Background noise exerts diverse effects on the cortical encoding of foreground sounds.
Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E
2017-08-01
In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may support noise-robust hearing.
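A common way to construct such stimuli is to scale the noise so that the mixture reaches a target signal-to-noise ratio. A sketch assuming a linear FM sweep as the foreground signal (the sweep range, sampling rate, and SNR are illustrative, not those of the study):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR in dB,
    mirroring the continuous white-noise backgrounds described above."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + gain * noise

fs = 16000
t = np.arange(fs) / fs
# A one-octave (440 -> 880 Hz) linear FM sweep as a stand-in foreground signal.
sweep = np.sin(2 * np.pi * (440 * t + 0.5 * 440 * t ** 2))
rng = np.random.default_rng(0)
mix = mix_at_snr(sweep, rng.normal(0, 1, fs), snr_db=0.0)   # 0 dB: equal powers
```

At 0 dB SNR the scaled noise carries the same power as the sweep, so the mixture's power is roughly twice the signal power; sweeping `snr_db` downward reproduces the progressively noisier conditions described above.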
The Impact of Single-Sided Deafness upon Music Appreciation.
Meehan, Sarah; Hough, Elizabeth A; Crundwell, Gemma; Knappett, Rachel; Smith, Mark; Baguley, David M
2017-05-01
Many of the world's population have hearing loss in one ear; current statistics indicate that up to 10% of the population may be affected. Although the detrimental impact of bilateral hearing loss, hearing aids, and cochlear implants upon music appreciation is well recognized, studies on the influence of single-sided deafness (SSD) are sparse. We sought to investigate whether a single-sided hearing loss can cause problems with music appreciation, despite normal hearing in the other ear. A tailored questionnaire was used to investigate music appreciation for those with SSD. We performed a retrospective survey of a population of 51 adults from a University Hospital Audiology Department SSD clinic. SSD was predominantly adult-onset sensorineural hearing loss, caused by a variety of etiologies. Analyses were performed to assess for statistical differences between groups, for example, comparing music appreciation before and after the onset of SSD, or before and after receiving hearing aid(s). Results demonstrated that a proportion of the population experienced significant changes to the way music sounded; music was found to sound more unnatural (75%), unpleasant (71%), and indistinct (81%) than before hearing loss. Music was reported to lack the perceptual qualities of stereo sound, and to be confounded by distortion effects and tinnitus. Such changes manifested in an altered music appreciation, with 44% of participants listening to music less often, 71% of participants enjoying music less, and 46% of participants reporting that music played a lesser role in their lives than pre-SSD. Negative effects surrounding social occasions with music were revealed, along with a strong preference for limiting background music. Hearing aids were not found to significantly ameliorate these effects. Results could be explained in part through considerations of psychoacoustic changes intrinsic to an asymmetric hearing loss and impaired auditory scene analysis. Given the prevalence of
Sensorimotor adaptation is influenced by background music.
Bock, Otmar
2010-06-01
It is well established that listening to music can modify subjects' cognitive performance. The present study evaluates whether this so-called Mozart Effect extends beyond cognitive tasks and includes sensorimotor adaptation. Three subject groups listened to musical pieces that in the author's judgment were serene, neutral, or sad, respectively. This judgment was confirmed by the subjects' introspective reports. While listening to music, subjects engaged in a pointing task that required them to adapt to rotated visual feedback. All three groups adapted successfully, but the speed and magnitude of adaptive improvement were more pronounced with serene music than with the other two music types. In contrast, aftereffects upon restoration of normal feedback were independent of music type. These findings support the existence of a "Mozart effect" for strategic movement control, but not for adaptive recalibration. Possibly, listening to music modifies neural activity in an intertwined cognitive-emotional network.
The Effect of Background Music on Bullying: A Pilot Study
ERIC Educational Resources Information Center
Ziv, Naomi; Dolev, Einat
2013-01-01
School bullying is a source of growing concern. A number of intervention programs emphasize the importance of a positive school climate in preventing bullying behavior. The aim of the presented pilot study was to examine whether calming background music, through its effect on arousal and mood, could create a pleasant atmosphere and reduce bullying…
Global music approach to persons with dementia: evidence and practice.
Raglio, Alfredo; Filippi, Stefania; Bellandi, Daniele; Stramba-Badiale, Marco
2014-01-01
Music is an important resource for achieving psychological, cognitive, and social goals in the field of dementia. This paper describes the different types of evidence-based music interventions that can be found in literature and proposes a structured intervention model (global music approach to persons with dementia, GMA-D). The literature concerning music and dementia was considered and analyzed. The reported studies included more recent studies and/or studies with relevant scientific characteristics. From this background, a global music approach was proposed using music and sound-music elements according to the needs, clinical characteristics, and therapeutic-rehabilitation goals that emerge in the care of persons with dementia. From the literature analysis the following evidence-based interventions emerged: active music therapy (psychological and rehabilitative approaches), active music therapy with family caregivers and persons with dementia, music-based interventions, caregivers singing, individualized listening to music, and background music. Characteristics of each type of intervention are described and discussed. Standardizing the operational methods and evaluation of the single activities and a joint practice can contribute to achieve the validation of the application model. The proposed model can be considered a low-cost nonpharmacological intervention and a therapeutic-rehabilitation method for the reduction of behavioral disturbances, for stimulation of cognitive functions, and for increasing the overall quality of life of persons with dementia.
Kettering, Tracy L; Fisher, Wayne W; Kelley, Michael E; LaRue, Robert H
2018-06-06
We examined the extent to which different sounds functioned as motivating operations (MO) that evoked problem behavior during a functional analysis for two participants. Results suggested that escape from loud noises reinforced the problem behavior for one participant and escape from arguing reinforced problem behavior for the other participant. Noncontingent delivery of preferred music through sound-attenuating headphones decreased problem behavior without the use of extinction for both participants. We discuss the results in terms of the abolishing effects of the intervention. © 2018 Society for the Experimental Analysis of Behavior.
Kraus, Nina; Hornickel, Jane; Strait, Dana L.; Slater, Jessica; Thompson, Elaine
2014-01-01
Children from disadvantaged backgrounds often face impoverished auditory environments, such as greater exposure to ambient noise and fewer opportunities to participate in complex language interactions during development. These circumstances increase their risk for academic failure and dropout. Given the academic and neural benefits associated with musicianship, music training may be one method for providing auditory enrichment to children from disadvantaged backgrounds. We followed a group of primary-school students from gang reduction zones in Los Angeles, CA, USA for 2 years as they participated in Harmony Project. By providing free community music instruction for disadvantaged children, Harmony Project promotes the healthy development of children as learners, the development of children as ambassadors of peace and understanding, and the development of stronger communities. Children who were more engaged in the music program—as defined by better attendance and classroom participation—developed stronger brain encoding of speech after 2 years than their less-engaged peers in the program. Additionally, children who were more engaged in the program showed increases in reading scores, while those less engaged did not show improvements. The neural gains accompanying music engagement were seen in the very measures of neural speech processing that are weaker in children from disadvantaged backgrounds. Our results suggest that community music programs such as Harmony Project provide a form of auditory enrichment that counteracts some of the biological adversities of growing up in poverty, and can further support community-based interventions aimed at improving child health and wellness. PMID:25566109
Ravaja, Niklas; Kallinen, Kari
2004-07-01
We examined the moderating influence of dispositional behavioral inhibition system (BIS) and behavioral activation system (BAS) sensitivities on the relationship of startling background music with emotion-related subjective and physiological responses elicited during reading news reports, and with memory performance among 26 adult men and women. Physiological parameters measured were respiratory sinus arrhythmia (RSA), electrodermal activity (EDA), and facial electromyography (EMG). The results showed that, among high BAS individuals, news stories with startling background music were rated as more interesting and elicited higher zygomatic EMG activity and RSA than news stories with non-startling music. Among low BAS individuals, news stories with startling background music were rated as less pleasant and more arousing and prompted higher EDA. No BIS-related effects or effects on memory were found. Startling background music may have adverse (e.g., negative arousal) or beneficial effects (e.g., a positive emotional state and stronger positive engagement) depending on dispositional BAS sensitivity of an individual. Actual or potential applications of this research include the personalization of media presentations when using modern media and communications technologies.
ERIC Educational Resources Information Center
Wolff, Florence I.
To determine the effect of background music during classroom instruction in vocabulary and grammar and in the delivery of speeches, sophomore high school students were divided into an experimental group (66 students) and a control group (60 students). For one semester the experimental group heard classical background music during instruction,…
NASA Astrophysics Data System (ADS)
Cook, Perry
This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.). Although most people would think that analog synthesizers and electronic music substantially predate the use of computers in music, many experiments and complete computer music systems were being constructed and used as early as the 1950s.
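A small, standard building block of the field described in this chapter is the equal-tempered mapping between MIDI note numbers and frequency, f = 440 · 2^((n−69)/12), which ties symbolic pitch representations to the acoustic signal:

```python
import math

def midi_to_hz(note, a4=440.0):
    """Equal-tempered frequency of a MIDI note number (A4 = note 69)."""
    return a4 * 2 ** ((note - 69) / 12)

def hz_to_midi(f, a4=440.0):
    """Inverse mapping: frequency in Hz back to a (fractional) MIDI note."""
    return 69 + 12 * math.log2(f / a4)

a4 = midi_to_hz(69)        # 440.0 Hz, concert A
c4 = midi_to_hz(60)        # ~261.63 Hz, middle C
```

Each semitone step multiplies frequency by 2^(1/12), so an octave (12 steps) exactly doubles it; most computer music systems use some form of this mapping between note events and synthesis parameters.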
ERIC Educational Resources Information Center
Russell-Bowie, Deirdre
2010-01-01
This paper reports the findings of a study on pre-service teachers' background and confidence in music and visual arts education. The study involved 939 non-specialist pre-service primary teachers from five countries. Initially the paper identifies the students' perceptions of their background and confidence in relation to music and visual arts…
''1/f noise'' in music: Music from 1/f noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voss, R.F.; Clarke, J.
1978-01-01
The spectral density of fluctuations in the audio power of many musical selections and of English speech varies approximately as 1/f (f is the frequency) down to a frequency of 5 × 10⁻⁴ Hz. This result implies that the audio-power fluctuations are correlated over all times in the same manner as ''1/f noise'' in electronic components. The frequency fluctuations of music also have a 1/f spectral density at frequencies down to the inverse of the length of the piece of music. The frequency fluctuations of English speech have a quite different behavior, with a single characteristic time of about 0.1 s, the average length of a syllable. The observations on music suggest that 1/f noise is a good choice for stochastic composition. Compositions in which the frequency and duration of each note were determined by 1/f noise sources sounded pleasing. Those generated by white-noise sources sounded too random, while those generated by 1/f² noise sounded too correlated.
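A popular way to generate approximately 1/f ("pink") sequences for this kind of stochastic composition is the Voss algorithm: several random sources are summed, with each source updated half as often as the previous one. A sketch in which the resulting sequence drives note choices (the pentatonic mapping is an illustrative choice, not from the paper):

```python
import random

def voss_pink(n, num_sources=8, seed=1):
    """Approximate 1/f noise with the Voss algorithm: sum several random
    sources, each refreshed half as often as the previous one."""
    rng = random.Random(seed)
    sources = [rng.random() for _ in range(num_sources)]
    out = []
    for i in range(n):
        for k in range(num_sources):
            if i % (2 ** k) == 0:        # source k refreshes every 2^k steps
                sources[k] = rng.random()
        out.append(sum(sources))
    return out

# Map the 1/f sequence onto a pentatonic scale, as a toy analogue of the
# stochastic compositions described above.
scale = [60, 62, 65, 67, 70]             # MIDI note numbers (illustrative)
values = voss_pink(32)
lo, hi = min(values), max(values)
melody = [scale[int((v - lo) / (hi - lo + 1e-12) * len(scale))] for v in values]
```

Because the slowly updated sources introduce long-range correlation, successive notes wander coherently rather than jumping at random, which is the quality the paper associates with pleasing 1/f compositions.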
ERIC Educational Resources Information Center
Ziv, Naomi; Goshen, Maya
2006-01-01
Children hear music in the background of a large variety of situations and activities. Throughout development, they acquire knowledge both about the syntactical norms of tonal music, and about the relationship between musical form and emotion. Five to six-year-old children heard a story, with a background "happy", "sad" or no…
ERIC Educational Resources Information Center
Kratus, John
2017-01-01
Active music listening is a creative activity in that the listener constructs a uniquely personal musical experience. Most approaches to teaching music listening emphasize a conceptual approach in which students learn to identify various characteristics of musical sound. Unfortunately, this type of listening is rarely done outside of schools. This…
Recognition and characterization of unstructured environmental sounds
NASA Astrophysics Data System (ADS)
Chu, Selina
2011-12-01
Environmental sounds are what we hear every day; more generally, they are the ambient or background audio that surrounds us. Humans use both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such an ability would have applications in content analysis and mining of multimedia data, and in improving robustness in context-aware applications through multi-modality, such as assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of using audio to characterize unstructured environments. My research investigates the challenging issues in characterizing unstructured environmental audio and develops novel algorithms for modeling the variations of the environment. The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features for working with such audio data. In my initial investigation into the feasibility of designing an automatic environment recognition system using audio information, I found that traditional recognition and feature-extraction approaches were not suitable for environmental sound, which lacks the formantic and harmonic structures of speech and music, thus dispelling the notion that traditional speech and music recognition techniques can simply…
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of Elementary Curriculum Development.
The primary function of music education is the development of a responsiveness to the artistic qualities of sound. The constituent elements fundamental to musical response are rhythm, melody, harmony, form, expression, and style. With the goal of developing a responsiveness consisting of musicality and affective growth, this guide has been…
Einstein contra Aristotle: The sound from the heavens
NASA Astrophysics Data System (ADS)
Neves, J. C. S.
2017-09-01
In "On the Heavens" Aristotle criticizes the Pythagorean view that there exists a cosmic music and a cosmic sound. According to the Pythagorean argument, a cosmic music is produced by the stars and planets: these celestial bodies generate sound in their movements, and the music arises from the cosmic harmony. For Aristotle, no sound is produced by celestial bodies, and hence there is no such music either. Recently, however, LIGO (the Laser Interferometer Gravitational-Wave Observatory) detected the gravitational waves predicted by Einstein. In some sense, a sound originating from black holes has been heard. That is, Einstein's general relativity and LIGO appear to side with Pythagoreanism against the master of the Lyceum.
Kinson, Rochelle Melina; Lim, Wen Phei; Rahman, Habeebul
2015-01-01
Musical hallucinations are a rare phenomenon that renders appropriate identification and treatment a challenge. This case series describes three women who presented with hearing complex, familiar melodies in the absence of external stimuli on a background of hearing impairment.
ERIC Educational Resources Information Center
Falcon, Evelyn
2017-01-01
The purpose of this study was to examine if there is any relationship on reading comprehension when background classical music is played in the setting of a 7th and 8th grade classroom. This study also examined if there was a statistically significant difference in test anxiety when listening to classical music while completing a test. Reading…
Music holographic physiotherapy by laser
NASA Astrophysics Data System (ADS)
Liao, Changhuan
1996-09-01
Based on the relationship between music and nature, this paper compares laser light with musical sound on the principles of synergetics, describes music physically and objectively, and proposes a music holographic therapy by laser. It may have implications for mechanism studies and the clinical practice of music therapy.
Hearing the Music in the Spectrum of Hydrogen
ERIC Educational Resources Information Center
LoPresto, Michael C.
2016-01-01
Throughout a general education course on sound and light aimed at music and art students, analogies between subjective perceptions of objective properties of sound and light waves are a recurring theme. Demonstrating that the pitch and loudness of musical sounds are related to the frequency and intensity of a sound wave is simple and students are…
Fractal Music: The Mathematics Behind "Techno" Music
ERIC Educational Resources Information Center
Padula, Janice
2005-01-01
This article describes sound waves, their basis in the sine curve, Fourier's theorem of infinite series, the fractal equation and its application to the composition of music, together with algorithms (such as those employed by meteorologist Edward Lorenz in his discovery of chaos theory) that are now being used to compose fractal music on…
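Fourier's theorem, which the article builds on, can be demonstrated in a few lines: summing odd sine harmonics with amplitudes 1/(2k-1) approximates a square wave. This is a generic sketch of the mathematics, not code from the article.

```python
import math

def square_partial_sum(t, fundamental_hz, n_terms):
    """Fourier synthesis of a square wave: odd harmonics (2k-1)*f
    with amplitudes 1/(2k-1), scaled by 4/pi."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * k - 1) * fundamental_hz * t) / (2 * k - 1)
        for k in range(1, n_terms + 1)
    )

# Evaluated mid-crest of a 220 Hz square wave, the partial sum
# converges toward 1.0 as more harmonics are added.
print(square_partial_sum(0.25 / 220, 220, 200))
```

With only a few terms the sum still overshoots near the jumps (the Gibbs phenomenon), which is one reason synthesized timbres depend on how many partials are kept.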
Music Learning in Schools: Perspectives of a New Foundation for Music Teaching and Learning
ERIC Educational Resources Information Center
Gruhn, Wilfried; Regelski, Thomas A., Ed.
2006-01-01
Does music education need a new philosophy that is scientifically grounded on common agreements with educational and musical standards? If such standards are commonly accepted, why do people reflect philosophically about music teaching and learning? At first glance, these questions sound very abstract and theoretical because people love music, and…
NASA Astrophysics Data System (ADS)
Aying, K. P.; Otadoy, R. E.; Violanda, R.
2015-06-01
This study investigates the sound pressure level (SPL) of the insert-type earphones commonly used for music listening by the general populace. SPLs from different respondents' earphones were measured by plugging each earphone into a physical ear-canal model. Durations of earphone use for music listening were also gathered through short interviews. Results show that 21% of the respondents exceed the standard loudness/duration relation recommended by the World Health Organization (WHO).
Well-Loved Music Robustly Relieves Pain: A Randomized, Controlled Trial
Hsieh, Christine; Kong, Jian; Kirsch, Irving; Edwards, Robert R.; Jensen, Karin B.; Kaptchuk, Ted J.; Gollub, Randy L.
2014-01-01
Music has pain-relieving effects, but its mechanisms remain unclear. We sought to verify previously studied analgesic components and further elucidate the underpinnings of music analgesia. Using a well-characterized conditioning-enhanced placebo model, we examined whether boosting expectations would enhance or interfere with analgesia from strongly preferred music. A two-session experiment was performed with 48 healthy, pain experiment-naïve participants. In a first cohort, 36 were randomized into 3 treatment groups, including music enhanced with positive expectancy, non-musical sound enhanced with positive expectancy, and no expectancy enhancement. A separate replication cohort of 12 participants received only expectancy-enhanced music following the main experiment to verify the results of expectancy-manipulation on music. Primary outcome measures included the change in subjective pain ratings to calibrated experimental noxious heat stimuli, as well as changes in treatment expectations. Without conditioning, expectations were strongly in favor of music compared to non-musical sound. While measured expectations were enhanced by conditioning, this failed to affect either music or sound analgesia significantly. Strongly preferred music on its own was as pain relieving as conditioning-enhanced strongly preferred music, and more analgesic than enhanced sound. Our results demonstrate the pain-relieving power of personal music even over enhanced expectations. Trial Information Clinicaltrials.gov NCT01835275. PMID:25211164
Najafi Ghezeljeh, Tahereh; Mohades Ardebili, Fatimah; Rafii, Forough; Haghani, Hamid
2016-01-01
This study aimed to investigate the effect of music on background pain, anxiety, and relaxation levels in burn patients. In this pretest-posttest randomized controlled clinical trial, 100 hospitalized burn patients were selected through convenience sampling. Subjects were randomly assigned to music and control groups. Data related to demographic and clinical characteristics, analgesics, and physiologic measures were collected with researcher-made tools. A visual analog scale was used to determine pain, anxiety, and relaxation levels before and after the intervention on 3 consecutive days. Patients' preferred music was offered once a day for 3 days. The control group received only routine care. Data were analyzed using SPSS-PC (V. 20.0). According to paired t-tests, there were significant differences between mean scores of pain (P < .001), anxiety (P < .001), and relaxation (P < .001) levels before and after the intervention in the music group. Independent t-tests indicated a significant difference between the mean scores of changes in pain, anxiety, and relaxation levels before and after the intervention in the music and control groups (P < .001). No differences were detected in the mean scores of physiologic measures between groups before and after the music intervention. Music is an inexpensive, appropriate, and safe intervention for burn patients with background pain and anxiety at rest. To produce more effective comfort for patients, it is necessary to compare different types and durations of music intervention to find the best approach.
Improving left spatial neglect through music scale playing.
Bernardi, Nicolò Francesco; Cioffi, Maria Cristina; Ronchi, Roberta; Maravita, Angelo; Bricolo, Emanuela; Zigiotto, Luca; Perucca, Laura; Vallar, Giuseppe
2017-03-01
The study assessed whether the auditory reference provided by a music scale could improve spatial exploration of a standard musical instrument keyboard in right-brain-damaged patients with left spatial neglect. As performing music scales involves the production of predictable successive pitches, the expectation of the subsequent note may help patients explore a larger extension of space on the affected left side while producing music scales from right to left. Eleven right-brain-damaged stroke patients with left spatial neglect, 12 patients without neglect, and 12 age-matched healthy participants played descending scales on a music keyboard. In a counterbalanced design, the participants' exploratory performance was assessed while producing scales under three feedback conditions provided by the keyboard: congruent sound, no sound, or random sound. The number of keys played and the timing of key presses were recorded. Spatial exploration by patients with left neglect was superior with congruent sound feedback, compared to both the silence and random-sound conditions. Both the congruent and incongruent sound conditions were associated with a greater deceleration in all groups. The frame provided by the music scale improves exploration of the left side of space, contralateral to the damaged right hemisphere, in patients with left neglect. Performing a scale with congruent sounds may, to some extent, trigger preserved auditory and spatial multisensory representations of successive sounds, thus influencing the time course of space scanning and ultimately resulting in more extensive spatial exploration. These findings also offer new perspectives for the rehabilitation of the disorder. © 2015 The British Psychological Society.
Math and Music: Harmonious Connections.
ERIC Educational Resources Information Center
Garland, Trudi Hammel; Kahn, Charity Vaughan
Mathematics can be used to analyze musical rhythms, to study the sound waves that produce musical notes, to explain why instruments are tuned, and to compose music. This book explores the relationship between mathematics and music through proportions, patterns, Fibonacci numbers or the Golden Ratio, geometric transformations, trigonometric…
Experimenting with brass musical instruments
NASA Astrophysics Data System (ADS)
Lo Presto, Michael C.
2003-07-01
With the aid of microcomputer hardware and software for the introductory physics laboratory, I have developed several experiments dealing with the properties of brass musical instruments that could be used when covering sound anywhere from an introductory physics laboratory to a course in musical acoustics, or even independent studies. The results of these experiments demonstrate in a quantitative fashion the effects of the mouthpiece and bell on the frequencies of the sound waves and thus the musical pitches produced. Most introductory sources only discuss these effects qualitatively.
An abstract approach to music.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.
1999-04-19
In this article we have outlined a formal framework for an abstract approach to music and music composition. The model is formulated in terms of objects that have attributes, obey relationships, and are subject to certain well-defined operations. The motivation for this approach uses traditional terms and concepts of music theory, but the approach itself is formal and uses the language of mathematics. The universal object is an audio wave; partials, sounds, and compositions are special objects, which are placed in a hierarchical order based on time scales. The objects have both static and dynamic attributes. When we realize a composition, we assign values to each of its attributes: a (scalar) value to a static attribute, an envelope and a size to a dynamic attribute. A composition is then a trajectory in the space of aural events, and the complex audio wave is its formal representation. Sounds are fibers in the space of aural events, from which the composer weaves the trajectory of a composition. Each sound object in turn is made up of partials, which are the elementary building blocks of any music composition. The partials evolve on the fastest time scale in the hierarchy of partials, sounds, and compositions. The ideas outlined in this article are being implemented in a digital instrument for additive sound synthesis and in software for music composition. A demonstration of some preliminary results has been submitted by the authors for presentation at the conference.
Marie, Céline; Kujala, Teija; Besson, Mireille
2012-04-01
The aim of this experiment was two-fold. Our first goal was to determine whether linguistic expertise influences the pre-attentive [as reflected by the Mismatch Negativity - (MMN)] and the attentive processing (as reflected by behavioural discrimination accuracy) of non-speech, harmonic sounds. The second was to directly compare the effects of linguistic and musical expertise. To this end, we compared non-musician native speakers of a quantity language, Finnish, in which duration is a phonemically contrastive cue, with French musicians and French non-musicians. Results revealed that pre-attentive and attentive processing of duration deviants was enhanced in Finn non-musicians and French musicians compared to French non-musicians. By contrast, MMN in French musicians was larger than in both Finns and French non-musicians for frequency deviants, whereas no between-group differences were found for intensity deviants. By showing similar effects of linguistic and musical expertise, these results argue in favor of common processing of duration in music and speech. Copyright © 2010 Elsevier Srl. All rights reserved.
Perception of music dynamics in concert hall acoustics.
Pätynen, Jukka; Lokki, Tapio
2016-11-01
Dynamics is one of the principal means of expressivity in Western classical music. Still, preceding research on room acoustics has mostly neglected the contribution of music dynamics to the acoustic perception. This study investigates how the different concert hall acoustics influence the perception of varying music dynamics. An anechoic orchestra signal, containing a step in music dynamics, was rendered in the measured acoustics of six concert halls at three seats in each. Spatial sound was reproduced through a loudspeaker array. By paired comparison, naive subjects selected the stimuli that they considered to change more during the music. Furthermore, the subjects described their foremost perceptual criteria for each selection. The most distinct perceptual factors differentiating the rendering of music dynamics between halls include the dynamic range, and varying width of sound and reverberance. The results confirm the hypothesis that the concert halls render the performed music dynamics differently, and with various perceptual aspects. The analysis against objective room acoustic parameters suggests that the perceived dynamic contrasts are pronounced by acoustics that provide stronger sound and more binaural incoherence by a lateral sound field. Concert halls that enhance the dynamics have been found earlier to elicit high subjective preference.
Music perception: sounds lost in space.
Stewart, Lauren; Walsh, Vincent
2007-10-23
A recent study of spatial processing in amusia makes a controversial claim that such musical deficits may be understood in terms of a problem in the representation of space. If such a link is demonstrated to be causal, it would challenge the prevailing view that deficits in amusia are specific to the musical or even the auditory domain.
Attitudes of college music students towards noise in youth culture.
Chesky, Kris; Pair, Marla; Lanford, Scott; Yoshimura, Eri
2009-01-01
The effectiveness of a hearing loss prevention program within a college may be dependent on attitudes among students majoring in music. The purpose of this study was to assess the attitudes of music majors toward noise and to compare them to students not majoring in music. Participants (N = 467) filled out a questionnaire designed to assess attitudes toward noise in youth culture and attitudes toward influencing their sound environment. Results showed that students majoring in music have a healthier attitude toward sound compared to students not majoring in music. Findings also showed that music majors are more aware and attentive to noise in general, likely to perceive sound that may be risky to hearing as something negative, and are more likely to carry out behaviors to decrease personal exposure to loud sounds. Due to these differences, music majors may be more likely than other students to respond to and benefit from a hearing loss prevention program.
Nizamie, Shamsul Haque; Tikka, Sai Krishna
2014-01-01
Vocal and/or instrumental sounds combined in such a way as to produce beauty of form, harmony and expression of emotion is music. Brain, mind and music are remarkably related to each other and music has got a strong impact on psychiatry. With the advent of music therapy, as an efficient form of alternative therapy in treating major psychiatric conditions, this impact has been further strengthened. In this review, we deliberate upon the historical aspects of the relationship between psychiatry and music, neural processing underlying music, music's relation to classical psychology and psychopathology and scientific evidence base for music therapy in major psychiatric disorders. We highlight the role of Indian forms of music and Indian contribution to music therapy. PMID:24891698
ERIC Educational Resources Information Center
Rink, Otho P.
To investigate the effects of background music on perception and retention of a dramatic television presentation's cognitive content, 107 English literature students were randomly assigned to one of five background treatments for a play. Four of the videotaped presentations included background music; Shostakovich's Symphony No. 6; Japanese jazz;…
Tonal Language Background and Detecting Pitch Contour in Spoken and Musical Items
ERIC Educational Resources Information Center
Stevens, Catherine J.; Keller, Peter E.; Tyler, Michael D.
2013-01-01
An experiment investigated the effect of tonal language background on discrimination of pitch contour in short spoken and musical items. It was hypothesized that extensive exposure to a tonal language attunes perception of pitch contour. Accuracy and reaction times of adult participants from tonal (Thai) and non-tonal (Australian English) language…
Niedecken, D
1991-02-01
In presenting the case of a 12-15-year-old boy with severe learning difficulties and antisocial tendencies, the author reflects upon the process of musical enculturation in music therapy. The deployment of symbolic meaning through the therapeutic use of sound and music is described, from music as a self-object up to the point where music is fully acknowledged as a cultural object. It is shown how this process goes hand in hand with the unfolding and working through of the transference relationship.
Durai, Mithila; Kobayashi, Kei; Searchfield, Grant D
2018-05-28
To evaluate the feasibility of predictable or unpredictable amplitude-modulated sounds for tinnitus therapy. The study consisted of two parts: (1) an adaptation experiment, in which loudness-level matches and 10-point rating scales for loudness and distress were obtained at a silent baseline and at the end of three counterbalanced 30-min exposures (silence, predictable, and unpredictable); and (2) a qualitative 2-week sound-therapy feasibility trial, in which participants took home a personal music player (PMP). Part 1 involved 23 individuals with chronic tinnitus and Part 2 seven individuals randomly selected from Part 1. Self-reported tinnitus loudness and annoyance were significantly lower than baseline ratings after acute unpredictable sound exposure. Tinnitus annoyance ratings were also significantly lower than baseline, but the effect was small. The feasibility trial identified that participant preferences for sounds varied. Three participants did not obtain any benefit from either sound. Three participants preferred unpredictable over predictable sounds. Some participants had difficulty using the PMP, and the average self-reported hours of use were low (<1 h/day). Unpredictable surf-like sounds played through a PMP are a feasible tinnitus treatment. Further work is required to improve the acceptance of the sound and the ease of PMP use.
Music Videos: The Look of the Sound
ERIC Educational Resources Information Center
Aufderheide, Pat
1986-01-01
Asserts that music videos, rooted in mass marketing culture, are reshaping the language of advertising, affecting the flow of information. Raises question about the society that creates and receives music videos. (MS)
High school music classes enhance the neural processing of speech.
Tierney, Adam; Krizman, Jennifer; Skoe, Erika; Johnston, Kathleen; Kraus, Nina
2013-01-01
Should music be a priority in public education? One argument for teaching music in school is that private music instruction relates to enhanced language abilities and neural function. However, the directionality of this relationship is unclear and it is unknown whether school-based music training can produce these enhancements. Here we show that 2 years of group music classes in high school enhance the neural encoding of speech. To tease apart the relationships between music and neural function, we tested high school students participating in either music or fitness-based training. These groups were matched at the onset of training on neural timing, reading ability, and IQ. Auditory brainstem responses were collected to a synthesized speech sound presented in background noise. After 2 years of training, the neural responses of the music training group were earlier than at pre-training, while the neural timing of students in the fitness training group was unchanged. These results represent the strongest evidence to date that in-school music education can cause enhanced speech encoding. The neural benefits of musical training are, therefore, not limited to expensive private instruction early in childhood but can be elicited by cost-effective group instruction during adolescence.
Interaction between DRD2 variation and sound environment on mood and emotion-related brain activity.
Quarto, T; Fasano, M C; Taurisano, P; Fazio, L; Antonucci, L A; Gelao, B; Romano, R; Mancini, M; Porcelli, A; Masellis, R; Pallesen, K J; Bertolino, A; Blasi, G; Brattico, E
2017-01-26
Sounds, like music and noise, are capable of reliably affecting individuals' mood and emotions. However, these effects are highly variable across individuals. A putative source of variability is genetic background. Here we explored the interaction between a functional polymorphism of the dopamine D2 receptor gene (DRD2 rs1076560, G>T, previously associated with the relative expression of D2S/L isoforms) and sound environment on mood and emotion-related brain activity. Thirty-eight healthy subjects were genotyped for DRD2 rs1076560 (G/G=26; G/T=12) and underwent functional magnetic resonance imaging (fMRI) during performance of an implicit emotion-processing task while listening to music or noise. Individual variation in mood induction was assessed before and after the task. Results showed mood improvement after music exposure in DRD2GG subjects and mood deterioration after noise exposure in GT subjects. Moreover, the music, as opposed to noise environment, decreased the striatal activity of GT subjects as well as the prefrontal activity of GG subjects while processing emotional faces. These findings suggest that genetic variability of dopamine receptors affects sound environment modulations of mood and emotion processing. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Frese, Millie K., Ed.
1999-01-01
This theme issue of "The Goldfinch" focuses on music as an art using sound in time to express ideas and emotions and contains articles featuring appreciations of some of Iowa's renowned musical artists. The first article gives an overview of music in Iowa's history. The next article describes Antonin Dvorak's summer sojourn in Spillville…
Atyeo, J; Sanderson, P M
2015-07-01
The melodic alarm sound set for medical electrical equipment that was recommended in the International Electrotechnical Commission's IEC 60601-1-8 standard has proven difficult for clinicians to learn and remember, especially clinicians with little prior formal music training. An alarm sound set proposed by Patterson and Edworthy in 1986 might improve performance for such participants. In this study, 31 critical and acute care nurses with less than one year of formal music training identified alarm sounds while they calculated drug dosages. Sixteen nurses used the IEC and 15 used the Patterson-Edworthy alarm sound set. The mean (SD) percentage of alarms correctly identified by nurses was 51.3 (25.6)% for the IEC alarm set and 72.1 (18.8)% for the Patterson-Edworthy alarms (p = 0.016). Nurses using the Patterson-Edworthy alarm sound set reported that it was easier to distinguish between alarm sounds than did nurses using the IEC alarm sound set (p = 0.015). Principles used to construct the Patterson-Edworthy alarm sounds should be adopted for future alarm sound sets. © 2015 The Association of Anaesthetists of Great Britain and Ireland.
Music-evoked emotions in schizophrenia.
Abe, Daijyu; Arai, Makoto; Itokawa, Masanari
2017-07-01
Previous studies have reported that people with schizophrenia have impaired musical abilities. Here we developed a simple music-based assay to assess patients' ability to associate a minor chord with sadness, and further characterized correlations between impaired musical responses and psychiatric symptoms. We exposed participants sequentially to two sets of sound stimuli: first a C-major progression and chord, and second a C-minor progression and chord. Participants were asked which stimulus they associated with sadness: the first set, the second set, or neither. The severity of psychiatric symptoms was assessed using the Positive and Negative Syndrome Scale (PANSS). Study participants were 29 patients diagnosed with schizophrenia and 29 healthy volunteers matched in age, gender, and musical background. 37.9% (95% confidence interval [CI]: 19.1-56.7) of patients with schizophrenia identified the minor chord set as sad, compared with 97.9% (95% CI: 89.5-103.6) of controls. Four patients were diagnosed with treatment-resistant schizophrenia, and all four failed to associate the minor chord with sadness. Patients who did not recognize minor chords as sad had significantly higher scores on all PANSS subscales. A simple test thus allows music-evoked emotions to be assessed in schizophrenia patients and may reveal relationships between music-evoked emotions and psychiatric symptoms. Copyright © 2016. Published by Elsevier B.V.
Pitch features of environmental sounds
NASA Astrophysics Data System (ADS)
Yang, Ming; Kang, Jian
2016-07-01
A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability of pitch features that are often used in music analysis and their algorithms to environmental sounds. Based on the existing alternative pitch algorithms for simulating the perception of the auditory system and simplified algorithms for practical applications in the areas of music and speech, the applicable algorithms have been determined, considering common types of sound in everyday soundscapes. Considering a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, different pitch characteristics of various environmental sounds have been shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, generally speaking, both water and wind sounds have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
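The pitch algorithms the paper evaluates are more elaborate, but the core idea behind many of them can be sketched with a plain autocorrelation estimator. This is a generic illustration under assumed parameter values, not the authors' implementation.

```python
import math

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=2000.0):
    """Autocorrelation pitch estimate: the lag with the strongest
    self-similarity within the plausible period range gives the pitch."""
    n = len(signal)
    lag_min = max(1, int(sample_rate / fmax))
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, lag_max + 1):
        corr = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# A 440 Hz sine at an 8 kHz sampling rate is estimated near 440 Hz;
# broadband sounds such as wind or water lack a stable peak, which is
# one way a low "pitch strength" manifests.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * i / sr) for i in range(1024)]
print(estimate_pitch(tone, sr))
```

The ratio between the best correlation peak and the signal's energy is a common proxy for the pitch-strength parameter the paper discusses: near 1 for birdsong-like tones, near 0 for water and wind.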
Playing Music for a Smarter Ear: Cognitive, Perceptual and Neurobiological Evidence
Strait, Dana; Kraus, Nina
2012-01-01
Human hearing depends on a combination of cognitive and sensory processes that function by means of an interactive circuitry of bottom-up and top-down neural pathways, extending from the cochlea to the cortex and back again. Given that similar neural pathways are recruited to process sounds related to both music and language, it is not surprising that the auditory expertise gained over years of consistent music practice fine-tunes the human auditory system in a comprehensive fashion, strengthening neurobiological and cognitive underpinnings of both music and speech processing. In this review we argue not only that common neural mechanisms for speech and music exist, but that experience in music leads to enhancements in sensory and cognitive contributors to speech processing. Of specific interest is the potential for music training to bolster neural mechanisms that undergird language-related skills, such as reading and hearing speech in background noise, which are critical to academic progress, emotional health, and vocational success. PMID:22993456
Music Researchers' Musical Engagement
ERIC Educational Resources Information Center
Wollner, Clemens; Ginsborg, Jane; Williamon, Aaron
2011-01-01
There is an increasing awareness of the importance of reflexivity across various disciplines, which encourages researchers to scrutinize their research perspectives. In order to contextualize and reflect upon research in music, this study explores the musical background, current level of musical engagement and the listening habits of music…
Young Children's Perceptions of the Dimensions of Sound.
ERIC Educational Resources Information Center
McMahon, Olive
School children frequently fail to adequately understand terms associated with musical pitch although research shows that even infants with normal hearing can perceptually discriminate fine pitch variations. This study investigated children's perceptions of dimensions of sound by focusing on their choice of musical sounds and relevant…
ERIC Educational Resources Information Center
Fassbender, Eric; Richards, Deborah; Bilgin, Ayse; Thompson, William Forde; Heiden, Wolfgang
2012-01-01
Game technology has been widely used for educational applications, however, despite the common use of background music in games, its effect on learning has been largely unexplored. This paper discusses how music played in the background of a computer-animated history lesson affected participants' memory for facts. A virtual history lesson was…
Music Teachers and Music Therapists: Helping Children Together.
ERIC Educational Resources Information Center
Patterson, Allyson
2003-01-01
Provides background information on music therapy. Discusses how music therapy works in the public school setting and offers advice to music teachers. Explores music therapy and the Individuals with Disabilities Education Act, addressing the benefits of having access to music therapists. (CMK)
Experimenting with musical intervals
NASA Astrophysics Data System (ADS)
Lo Presto, Michael C.
2003-07-01
When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
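The repetition frequency the abstract describes can be computed directly when both fork frequencies are integer multiples of a common fundamental: that fundamental is their greatest common divisor. A minimal sketch (the function name is illustrative):

```python
from math import gcd

def repetition_frequency(f1_hz, f2_hz):
    """Fundamental (Hz) of the harmonic series containing both frequencies.

    Valid when both frequencies are integer multiples of a common
    fundamental, as with tuning forks tuned to a just interval.
    """
    return gcd(int(f1_hz), int(f2_hz))

# 440 Hz and 660 Hz (a perfect fifth) are the 2nd and 3rd harmonics of
# 220 Hz, so the combined waveform repeats 220 times per second.
print(repetition_frequency(440, 660))  # -> 220
```

Graphing the sum sin(2*pi*440*t) + sin(2*pi*660*t), as the experiment suggests, shows a waveform repeating every 1/220 s, matching this prediction.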
Musical Interfaces: Visualization and Reconstruction of Music with a Microfluidic Two-Phase Flow
Mak, Sze Yi; Li, Zida; Frere, Arnaud; Chan, Tat Chuen; Shum, Ho Cheung
2014-01-01
Sound waves in fluids are difficult to detect because the very minute sound-induced fluid motion is hard to visualize. In this paper, we demonstrate the first direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interfaces respond robustly to sound of different frequencies and amplitudes, with time resolution sufficiently precise for the recording of musical notes and even their subsequent reconstruction with high fidelity. Our work shows the possibility of sensing and transmitting vibrations as tiny as those induced by sound. This robust control of the interfacial dynamics provides a platform for investigating the mechanical properties of microstructures and for studying frequency-dependent phenomena, for example, in biological systems. PMID:25327509
Relaxing music counters heightened consolidation of emotional memory.
Rickard, Nikki S; Wong, Wendy Wing; Velik, Lauren
2012-02-01
Emotional events tend to be retained more strongly than other everyday occurrences, a phenomenon partially regulated by the neuromodulatory effects of arousal. Two experiments demonstrated the use of relaxing music as a means of reducing arousal levels, thereby challenging heightened long-term recall of an emotional story. In Experiment 1, participants (N=84) viewed a slideshow, during which they listened to either an emotional or neutral narration, and were exposed to relaxing or no music. Retention was tested 1 week later via a forced choice recognition test. Retention for both the emotional content (Phase 2 of the story) and material presented immediately after the emotional content (Phase 3) was enhanced, when compared with retention for the neutral story. Relaxing music prevented the enhancement for material presented after the emotional content (Phase 3). Experiment 2 (N=159) provided further support to the neuromodulatory effect of music by post-event presentation of both relaxing music and non-relaxing auditory stimuli (arousing music/background sound). Free recall of the story was assessed immediately afterwards and 1 week later. Relaxing music significantly reduced recall of the emotional story (Phase 2). The findings provide further insight into the capacity of relaxing music to attenuate the strength of emotional memory, offering support for the therapeutic use of music for such purposes. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Capstick, J. W.
2013-01-01
1. The nature of sound; 2. Elasticity and vibrations; 3. Transverse waves; 4. Longitudinal waves; 5. Velocity of longitudinal waves; 6. Reflection and refraction. Doppler's principle; 7. Interference. Beats. Combination tones; 8. Resonance and forced vibrations; 9. Quality of musical notes; 10. Organ pipes; 11. Rods. Plates. Bells; 12. Acoustical measurements; 13. The phonograph, microphone and telephone; 14. Consonance; 15. Definition of intervals. Scales. Temperament; 16. Musical instruments; 17. Application of acoustical principles to military purposes; Questions; Answers to questions; Index.
Moran, Michelle; Rousset, Alexandra; Looi, Valerie
2016-01-01
To explore the music appreciation of prelingually deaf adults using cochlear implants (CIs). Cohort study. Adult CI recipients were recruited based on hearing history and asked to complete the University of Canterbury Music Listening Questionnaire (UCMLQ) to assess each individual's music listening and appreciation. Results were compared to previous responses to the UCMLQ from a large cohort of postlingually deaf CI recipients. Fifteen prelingually deaf and fifteen postlingually deaf adult cochlear implant recipients. No significant differences were found between the prelingual and postlingual participants for amount of music listening or music listening enjoyment with their CI. Sound quality of common instruments was favourable for both groups, with no significant difference in the pleasantness/naturalness of instrument sounds between the groups. Prelingually deaf CI recipients rated themselves as significantly less able to follow a melody line and identify instrument styles compared to their postlingual peers. The results suggest that the pre- and postlingually deaf CI recipients demonstrate equivalent levels of music appreciation. This finding is of clinical importance, as CI clinicians should be actively encouraging all of their recipients to explore music listening as a part of their rehabilitation.
Phonological Awareness and Musical Aptitude.
ERIC Educational Resources Information Center
Peynircioglu, Zehra F.; Durgunoglu, Aydyn Y.; Oney-Kusefoglu, Banu
2002-01-01
Examines the relationship between phonological awareness and musical aptitude in pre-school Turkish and American children. Finds that children in the high musical aptitude group did much better on all tasks than those in the low musical aptitude group, showing that success in manipulating linguistic sounds was related to awareness of distinct…
A Comparative Analysis of the Universal Elements of Music and the Fetal Environment
Teie, David
2016-01-01
Although the idea that pulse in music may be related to human pulse is ancient and has recently been promoted by researchers (Parncutt, 2006; Snowdon and Teie, 2010), there has been no ordered delineation of the characteristics of music that are based on the sounds of the womb. I describe features of music that are based on sounds that are present in the womb: tempo of pulse (pulse is understood as the regular, underlying beat that defines the meter), amplitude contour of pulse, meter, musical notes, melodic frequency range, continuity, syllabic contour, melodic rhythm, melodic accents, phrase length, and phrase contour. There are a number of features of prenatal development that allow for the formation of long-term memories of the sounds of the womb in the areas of the brain that are responsible for emotions. Taken together, these features and the similarities between the sounds of the womb and the elemental building blocks of music allow for a postulation that the fetal acoustic environment may provide the bases for the fundamental musical elements that are found in the music of all cultures. This hypothesis is supported by a one-to-one matching of the universal features of music with the sounds of the womb: (1) all of the regularly heard sounds that are present in the fetal environment are represented in the music of every culture, and (2) all of the features of music that are present in the music of all cultures can be traced to the fetal environment. PMID:27555828
Transformations: Technology and the Music Industry.
ERIC Educational Resources Information Center
Peters, G. David
2001-01-01
Focuses on the companies and organizations of the Music Industry Conference (MIC). Addresses topics such as: changes in companies due to technology, audio compact discs, the music instrument digital interface (MIDI) , digital sound recording, and the MIC on-line music instruction programs offered. (CMK)
ERIC Educational Resources Information Center
Rajan, Rekha S.
2010-01-01
Providing opportunity for musical exploration is essential to any early childhood program. Through music making, children are actively engaged with their senses: they listen to the complex sounds around them, move their bodies to the rhythms, and touch and feel the textures and shapes of the instruments. The inimitable strength of the Montessori…
Küssner, Mats B
2017-01-01
The question of whether background music is able to enhance cognitive task performance is of interest to scholars, educators, and stakeholders in business alike. Studies have shown that background music can have beneficial, detrimental or no effects on cognitive task performance. Extraversion (and its postulated underlying cause, cortical arousal) is regarded as an important factor influencing the outcome of such studies. According to Eysenck's theory of personality, extraverts' cortical arousal at rest is lower compared to that of introverts. Scholars have thus hypothesized that extraverts should benefit from background music in cognitive tasks, whereas introverts' performance should decline with music in the background. Reviewing studies that have considered extraversion as a mediator of the effect of background music on cognitive task performance, it is demonstrated that there is as much evidence in favor as there is against Eysenck's theory of personality. Further, revisiting Eysenck's concept of cortical arousal (which has traditionally been assessed by activity in the EEG alpha band) and reviewing literature on the link between extraversion and cortical arousal, it is revealed that there is conflicting evidence. Due to Eysenck's focus on alpha power, scholars have largely neglected higher frequency bands in the EEG signal as indicators of cortical arousal. Based on recent findings, it is suggested that beta power might not only be an indicator of alertness and attention but also a predictor of cognitive task performance. In conclusion, it is proposed that focused music listening prior to cognitive tasks might be a more efficient way to boost performance than listening to background music during cognitive tasks.
Milovanov, Riia; Huotilainen, Minna; Esquef, Paulo A A; Alku, Paavo; Välimäki, Vesa; Tervaniemi, Mari
2009-08-28
We examined 10- to 12-year-old elementary school children's ability to preattentively process sound durations in music and speech stimuli. In total, 40 children had either advanced foreign-language production skills together with higher musical aptitude, or less advanced results in both the musicality and linguistic tests. Event-related potential (ERP) recordings of the mismatch negativity (MMN) show that duration changes in musical sounds are more prominently and accurately processed than changes in speech sounds. Moreover, children with advanced pronunciation and musicality skills displayed enhanced MMNs to duration changes in both speech and musical sounds. Thus, our study provides further evidence for the claim that musical aptitude and linguistic skills are interconnected, and that the musical features of the stimuli could have a preponderant role in preattentive duration processing.
Development of Prototype of Whistling Sound Counter based on Piezoelectric Bone Conduction
NASA Astrophysics Data System (ADS)
Mori, Mikio; Ogihara, Mitsuhiro; Kyuu, Ten; Taniguchi, Shuji; Kato, Shozo; Araki, Chikahiro
Recently, some professional whistlers have set up music schools that teach musical whistling. As in singing, the whistled tone should not break, even when it is sustained for more than 3 min. To develop this skill, it is advisable to practice the "Pii" sound, whistling it 100 times in succession at the same pitch. When practicing alone, however, a whistler finds it difficult to count his/her own whistling sounds. In this paper, we propose a whistling sound counter based on piezoelectric bone conduction. The system consists of five parts; the gain of its amplifier section is variable, as is the center frequency (f0) of its band-pass filter (BPF) section. We developed a prototype of the system and tested it by simultaneously counting the whistling sounds of nine people. The proposed system performed well in a noisy environment. We also propose an examination system for awarding grades in musical whistling, which administers the musical-whistling license examination on a personal computer; it can be used to administer the 5th grade exam for musical whistling.
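A much-simplified sketch of the counting step alone, without the bone-conduction pickup, amplifier, and band-pass filter stages of the actual prototype; it assumes each whistle registers as an amplitude burst separated by silence, and all names and thresholds are illustrative:

```python
import math

def count_whistles(samples, sample_rate, threshold=0.1, min_gap_s=0.05):
    """Count separate whistle bursts by thresholding the amplitude envelope.

    A new whistle is counted each time the signal rises above `threshold`
    after having stayed below it for at least `min_gap_s` seconds. (The
    real counter band-pass filters around the whistle pitch first, which
    is what makes it robust to background noise.)
    """
    min_gap = int(min_gap_s * sample_rate)
    count, below_run = 0, min_gap  # treat the start as preceded by silence
    for s in samples:
        if abs(s) >= threshold:
            if below_run >= min_gap:
                count += 1
            below_run = 0
        else:
            below_run += 1
    return count

# Three synthetic 1 kHz bursts separated by 0.1 s of silence.
sr = 8000
burst = [0.5 * math.sin(2 * math.pi * 1000 * t / sr) for t in range(800)]
silence = [0.0] * 800
signal = burst + silence + burst + silence + burst
count = count_whistles(signal, sr)
print(count)  # -> 3
```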
Music and Computers: Symbiotic Learning.
ERIC Educational Resources Information Center
Crenshaw, John H.
Many individuals in middle school, high school, and university settings have an interest in both music and computers. This paper seeks to direct that interest by presenting a series of computer programming projects. The 53 projects fall under two categories: musical scales and musical sound production. Each group of projects is preceded by a short…
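For the musical-scales category of projects, a natural first exercise is generating the frequencies of an equal-tempered scale. The sketch below is an illustration in that spirit, not one of the paper's 53 projects:

```python
def equal_tempered_scale(tonic_hz=440.0, steps=12):
    """Frequencies of an equal-tempered chromatic scale over one octave.

    Each semitone multiplies the frequency by 2**(1/12), so the 12th
    step lands exactly one octave (2x) above the tonic.
    """
    return [tonic_hz * 2 ** (n / 12) for n in range(steps + 1)]

scale = equal_tempered_scale(440.0)   # chromatic scale starting on A4
print(round(scale[12], 2))            # -> 880.0  (the octave, A5)
# scale[7] is about 659.26 Hz, the equal-tempered fifth (E5)
```

A companion sound-production project could then synthesize each frequency as a sine wave, linking the two project categories.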
The use of music on Barney & Friends: implications for music therapy practice and research.
McGuire, K M
2001-01-01
This descriptive study examined the music content of 88 episodes from the PBS television show Barney & Friends, which aired from September 1992 to September 1998, in an attempt to quantify musical examples and presentations that may be considered introductory music experiences for preschoolers. Using many of the procedures identified by Wolfe and Stambaugh (1993) in their study on the music of Sesame Street, 25% of Barney & Friends' 88 episodes were analyzed by using the computer observation program SCRIBE in determining: (a) the temporal use of music; (b) performance medium; and (c) intention of music use. Furthermore, each structural prompt presentation (n = 749) from all 88 episodes was examined for: (a) tempo; (b) vocal range; (c) music style; (d) word clarity; (e) repetition; (f) vocal modeling; and (g) movement. Results revealed that the show contained more music (92.2%) than nonmusic (7.8%), with the majority of this music containing instrumental sounds (61%). The function of this music was distributed equally between structural prompt music (48%) and background music (48%). The majority of the structural prompt music contained newly composed material (52%), while 33% consisted of previously composed material. Fifteen percent contained a combination of newly composed and previously composed material. The most common tempo range for presentations on the show was 80-100 bpm, while vocal ranges of a 9th, 8th, 6th, and 7th were predominant and most often sung by children's voices. The adult male voice was also common, with 84% of all adult vocals being male. The tessitura category with the greatest number of appearances was middle C to C above (n = 133), with the majority of the presentations (n = 435, 73%) extending singers' voices over the register lift of B above middle C. Children's music and music of the American heritage were the most common style categories observed, and these two categories combined on 260 (35%) presentations. The use of choreographed
Encountering Complexity: Native Musics in the Curriculum.
ERIC Educational Resources Information Center
Boyea, Andrea
1999-01-01
Describes Native American musics, focusing on issues such as music and the experience of time, metaphor and metaphorical aspects, and spirituality and sounds from nature. Discusses Native American metaphysics and its reflection in the musics. States that an effective curriculum would provide a new receptivity to Native American musics. (CMK)
Choice and Effects of Instrument Sound in Aural Training
ERIC Educational Resources Information Center
Loh, Christian Sebastian
2007-01-01
A musical note produced through the vibration of a single string is psychoacoustically simpler/purer than that produced via multiple-strings vibration. Does the psychoacoustics of instrument sound have any effect on learning outcomes in music instruction? This study investigated the effect of two psychoacoustically distinct instrument sounds on…
Musical Hypnosis: Sound and Selfhood from Mesmerism to Brainwashing
Kennaway, James
2012-01-01
Music has long been associated with trance states, but very little has been written about the modern western discussion of music as a form of hypnosis or ‘brainwashing’. However, from Mesmer's use of the glass armonica to the supposed dangers of subliminal messages in heavy metal, the idea that music can overwhelm listeners' self-control has been a recurrent theme. In particular, the concepts of automatic response and conditioned reflex have been the basis for a model of physiological psychology in which the self has been depicted as vulnerable to external stimuli such as music. This article will examine the discourse of hypnotic music from animal magnetism and the experimental hypnosis of the nineteenth century to the brainwashing panics since the Cold War, looking at the relationship between concerns about hypnotic music and the politics of the self and sexuality.
Effects of culture on musical pitch perception.
Wong, Patrick C M; Ciocca, Valter; Chan, Alice H D; Ha, Louisa Y Y; Tan, Li-Hai; Peretz, Isabelle
2012-01-01
The strong association between music and speech has been supported by recent research focusing on musicians' superior abilities in second language learning and neural encoding of foreign speech sounds. However, evidence for a double association--the influence of linguistic background on music pitch processing and disorders--remains elusive. Because languages differ in their usage of elements (e.g., pitch) that are also essential for music, a unique opportunity for examining such language-to-music associations comes from a cross-cultural (linguistic) comparison of congenital amusia, a neurogenetic disorder affecting the music (pitch and rhythm) processing of about 5% of the Western population. In the present study, two populations (Hong Kong and Canada) were compared. One spoke a tone language in which differences in voice pitch correspond to differences in word meaning (in Hong Kong Cantonese, /si/ means 'teacher' and 'to try' when spoken in a high and mid pitch pattern, respectively). Using the On-line Identification Test of Congenital Amusia, we found Cantonese speakers as a group tend to show enhanced pitch perception ability compared to speakers of Canadian French and English (non-tone languages). This enhanced ability occurs in the absence of differences in rhythmic perception and persists even after relevant factors such as musical background and age were controlled. Following a common definition of amusia (5% of the population), we found Hong Kong pitch amusics also show enhanced pitch abilities relative to their Canadian counterparts. These findings not only provide critical evidence for a double association of music and speech, but also argue for the reconceptualization of communicative disorders within a cultural framework. Along with recent studies documenting cultural differences in visual perception, our auditory evidence challenges the common assumption of universality of basic mental processes and speaks to the domain generality of culture
Linking prenatal experience to the emerging musical mind.
Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E
2013-09-03
The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.
Black Music: Sound and Feeling for Black Liberation
ERIC Educational Resources Information Center
McClendon, William H.
1976-01-01
Focuses on contemporary black music and the assortment of persons who produce it noting that black music is one area where black people provide their definitions and make their own judgements. (Author/AM)
van Vugt, F T; Kafczyk, T; Kuhn, W; Rollnik, J D; Tillmann, B; Altenmüller, E
2016-01-01
Learning to play musical instruments such as the piano was previously shown to benefit post-stroke motor rehabilitation. Previous work hypothesised that the mechanism of this rehabilitation is that patients use auditory feedback to correct their movements and therefore show motor learning. We tested this hypothesis by manipulating the auditory feedback timing in a way that should disrupt such error-based learning. We contrasted a patient group undergoing music-supported therapy on a piano that emits sounds immediately (as in previous studies) with a group whose sounds are presented after a jittered delay. The delay was not noticeable to patients. Thirty-four patients in early stroke rehabilitation with moderate motor impairment and no previous musical background learned to play the piano using simple finger exercises and familiar children's songs. Rehabilitation outcome was not impaired in the jitter group relative to the normal group. Conversely, some clinical tests suggest that the jitter group outperformed the normal group. Auditory feedback-based motor learning is therefore not the beneficial mechanism of music-supported therapy, and immediate auditory feedback therapy may be suboptimal. Jittered delay may increase the efficacy of the proposed therapy and allow patients to fully benefit from the motivational factors of music training. Our study shows a novel way to test hypotheses concerning music training in a single-blinded way, which is an important improvement over existing unblinded tests of music interventions.
NASA Astrophysics Data System (ADS)
Ohuchi, Yoshito; Nakazono, Yoichi
2014-06-01
We have developed a water musical instrument that generates sound from water drops falling within resonance tubes; the instrument offers listeners the healing effect inherent in the sound of water. The sound produced by falling water drops arises from air-bubble vibrations. To investigate the impact of water depth on these vibrations, we conducted experiments at varying values of water pressure and nozzle shape, with nozzle diameters from 2 to 4 mm. We found that the air-bubble vibration frequency does not change at water depths of 50 mm or greater; between 35 and 40 mm, however, the frequency decreases, and at depths of 30 mm or below it increases. In addition, we discovered that the time taken for air-bubble vibration to start after the water drops begin falling is constant at water depths of 40 mm or greater, but longer at depths below 40 mm.
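Air-bubble vibration of the kind described above is commonly modeled by the Minnaert resonance. The abstract does not name a model, so the following sketch is an assumption based on that standard formula, with illustrative parameter values:

```python
import math

def minnaert_frequency(radius_m, depth_m=0.0,
                       gamma=1.4, rho=1000.0, p_atm=101325.0, g=9.81):
    """Resonance frequency (Hz) of an air bubble in water (Minnaert model).

    f = (1 / (2*pi*R)) * sqrt(3*gamma*P / rho), where P is the static
    pressure at the bubble, increasing with depth by rho*g*h.
    """
    pressure = p_atm + rho * g * depth_m
    return math.sqrt(3 * gamma * pressure / rho) / (2 * math.pi * radius_m)

# A 1.5 mm radius bubble near the surface rings at roughly 2.2 kHz.
f_surface = minnaert_frequency(0.0015)
f_40mm = minnaert_frequency(0.0015, depth_m=0.040)
```

Under this model the extra hydrostatic pressure at 40 mm depth shifts the frequency by well under 1%, so the depth dependence reported above would have to arise from other factors (e.g. the size of the entrained bubble changing with depth) rather than static pressure alone.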
Human-based percussion and self-similarity detection in electroacoustic music
NASA Astrophysics Data System (ADS)
Mills, John Anderson, III
Electroacoustic music is music that uses electronic technology for the compositional manipulation of sound, and is a unique genre of music for many reasons. Analyzing electroacoustic music requires special measures, some of which are integrated into the design of a preliminary percussion analysis tool set for electroacoustic music. This tool set is designed to incorporate the human processing of music and sound. Models of the human auditory periphery are used as a front end to the analysis algorithms. The audio properties of percussivity and self-similarity are chosen as the focus because these properties are computable and informative. A collection of human judgments about percussion was undertaken to acquire clearly specified, sound-event dimensions that humans use as a percussive cue. A total of 29 participants was asked to make judgments about the percussivity of 360 pairs of synthesized snare-drum sounds. The grouped results indicate that of the dimensions tested rise time is the strongest cue for percussivity. String resonance also has a strong effect, but because of the complex nature of string resonance, it is not a fundamental dimension of a sound event. Gross spectral filtering also has an effect on the judgment of percussivity but the effect is weaker than for rise time and string resonance. Gross spectral filtering also has less effect when the stronger cue of rise time is modified simultaneously. A percussivity-profile algorithm (PPA) is designed to identify those instants in pieces of music that humans also would identify as percussive. The PPA is implemented using a time-domain, channel-based approach and psychoacoustic models. The input parameters are tuned to maximize performance at matching participants' choices in the percussion-judgment collection. After the PPA is tuned, the PPA then is used to analyze pieces of electroacoustic music. Real electroacoustic music introduces new challenges for the PPA, though those same challenges might affect
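Rise time, reported above as the strongest percussivity cue, can be illustrated with a simple envelope measurement. This is a simplified stand-in for that one cue, not the PPA itself, and the names and thresholds are illustrative:

```python
def rise_time(envelope, sample_rate, lo=0.1, hi=0.9):
    """Time (s) for an amplitude envelope to climb from lo to hi of its peak.

    Short rise times (sharp attacks) are the hallmark of percussive
    sound events; long rise times sound smooth or bowed.
    """
    peak = max(envelope)
    t_lo = next(i for i, v in enumerate(envelope) if v >= lo * peak)
    t_hi = next(i for i, v in enumerate(envelope) if v >= hi * peak)
    return (t_hi - t_lo) / sample_rate

# A sharp attack: a linear ramp over 10 samples at a 1 kHz envelope rate.
env = [min(i / 10, 1.0) for i in range(100)]
attack = rise_time(env, 1000)
print(attack)  # -> 0.008
```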
Music training alters the course of adolescent auditory development.
Tierney, Adam T; Krizman, Jennifer; Kraus, Nina
2015-08-11
Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.
Musical anhedonia: selective loss of emotional experience in listening to music.
Satoh, Masayuki; Nakase, Taizen; Nagata, Ken; Tomimoto, Hidekazu
2011-10-01
Recent case studies have suggested that emotion perception and emotional experience of music have independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is, musical anhedonia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even to pieces he had listened to with pleasure before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, or in expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music can be selectively impaired without any disturbance of other musical or neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.
Gasenzer, E R; Neugebauer, E A M
2014-12-01
The purpose of this essay is to provide a historical overview of how music has dealt with the emotion and sensation of pain, as well as an overview of the more recent medical research into the relationship of music and pain. Since the beginnings of western music, humans have put their emotions into musical sounds. During the baroque era, composers developed musical styles that expressed human emotions and our experiences of nature. In some compositions, such as operas, we find musical representations of pain. During Romanticism, artists began to intrude into the soul of their audience. New expressive harmonies and styles touch the soul and the consciousness of the listener. With the inception of atonality, dissonant sounds were experienced as a physical pain. The physiology of deep brain structures (such as the thalamus, hypothalamus, or limbic system) and the physiology of the acoustic pathway process consonant and dissonant sound and musical perceptions in ways that are similar to the perception of pain. In the thalamus and in the limbic system, music and pain meet. The relationship between music and pain is a wide-open research field, with such interesting questions as the role of dopamine in the perception of consonant or dissonant music, or the processing of pain during music listening. Musicology has not yet embarked on a general investigation of how musical compositions express pain and how that has developed or changed over the centuries. Music therapy, neuro-musicology, and performing arts medicine are scientific fields that offer many ideas for medical and musical research projects. © Georg Thieme Verlag KG Stuttgart · New York.
ERIC Educational Resources Information Center
Matthews, Wendy K.; Koner, Karen
2017-01-01
The focus of this exploratory study was to examine the current trends of K-12 music educators in the United States regarding their (a) professional background, (b) classroom teaching responsibilities, and (c) job satisfaction. Participants included seven thousand four hundred and sixty-three (N = 7,463) currently employed music teachers who were…
Johnson, Julene K; Chow, Maggie L
2016-01-01
Music is a complex acoustic signal that relies on a number of different brain and cognitive processes to create the sensation of hearing. Changes in hearing function are generally not a major focus of concern for persons with a majority of neurodegenerative diseases associated with dementia, such as Alzheimer disease (AD). However, changes in the processing of sounds may be an early, and possibly preclinical, feature of AD and other neurodegenerative diseases. The aim of this chapter is to review the current state of knowledge concerning hearing and music perception in persons who have a dementia as a result of a neurodegenerative disease. The review focuses on both peripheral and central auditory processing in common neurodegenerative diseases, with a particular focus on the processing of music and other non-verbal sounds. The chapter also reviews music interventions used for persons with neurodegenerative diseases. PMID:25726296
Music Perception with Cochlear Implants: A Review
McDermott, Hugh J.
2004-01-01
The acceptance of cochlear implantation as an effective and safe treatment for deafness has increased steadily over the past quarter century. The earliest devices were the first implanted prostheses found to be successful in compensating partially for lost sensory function by direct electrical stimulation of nerves. Initially, the main intention was to provide limited auditory sensations to people with profound or total sensorineural hearing impairment in both ears. Although the first cochlear implants aimed to provide patients with little more than awareness of environmental sounds and some cues to assist visual speech-reading, the technology has advanced rapidly. Currently, most people with modern cochlear implant systems can understand speech using the device alone, at least in favorable listening conditions. In recent years, an increasing research effort has been directed towards implant users’ perception of nonspeech sounds, especially music. This paper reviews that research, discusses the published experimental results in terms of both psychophysical observations and device function, and concludes with some practical suggestions about how perception of music might be enhanced for implant recipients in the future. The most significant findings of past research are: (1) On average, implant users perceive rhythm about as well as listeners with normal hearing; (2) Even with technically sophisticated multiple-channel sound processors, recognition of melodies, especially without rhythmic or verbal cues, is poor, with performance at little better than chance levels for many implant users; (3) Perception of timbre, which is usually evaluated by experimental procedures that require subjects to identify musical instrument sounds, is generally unsatisfactory; (4) Implant users tend to rate the quality of musical sounds as less pleasant than listeners with normal hearing; (5) Auditory training programs that have been devised specifically to provide implant users with
ERIC Educational Resources Information Center
Matsunobu, Koji
2011-01-01
Ethnomusicologists and music educators are in broad agreement that what makes each cultural expression of music unique are differences, not commonalities, and that these should be understood in culturally sensitive ways. Relevant to the debate was the emphasis on the socio-cultural context of music making over the traditional "sound-only"…
Replacing the Orchestra? – The Discernibility of Sample Library and Live Orchestra Sounds
Wolf, Anna; Platz, Friedrich; Mons, Jan
2016-01-01
Recently, musical sounds from pre-recorded orchestra sample libraries (OSL) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. The entire sample of listeners (N = 602) on average identified the correct sound source 72.5% of the time. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, achieved only 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons. PMID:27382932
Schools of music and conservatories and hearing loss prevention.
Chesky, Kris
2011-03-01
Music students are not being taught that music is a sound source capable of harming hearing. Ensemble directors of public school and college bands, orchestras, and choirs are unaware of, and unprepared to recognize and manage, risk from excessive sound exposures. Schools of music and conservatories around the world, and the organizations that accredit them, need to embrace the idea that schools of music are best suited to facilitate change, conduct research, create and impart knowledge, institute competency, and most importantly, cultivate a culture of responsibility and accountability throughout the music discipline. By drawing attention to actions pursued at and through the College of Music at the University of North Texas, this paper aims to encourage change and to assist others in efforts to reach the best conditions for preventing irreversible hearing disorders associated with music.
Emotions evoked by the sound of music: characterization, classification, and measurement.
Zentner, Marcel; Grandjean, Didier; Scherer, Klaus R
2008-08-01
One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n=354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n=801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n=238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale.
Application of a Musical Whistling Certificate Examination System as a Group Examination
NASA Astrophysics Data System (ADS)
Mori, Mikio; Ogihara, Mitsuhiro; Sugahara, Shin-Ichi; Taniguchi, Shuji; Kato, Shozo; Araki, Chikahiro
Recently, some professional whistlers have set up music schools to teach musical whistling. However, so far, there is no licensed examination for musical whistling. In this paper, we propose an examination system for evaluating musical whistling. The system conducts an examination in musical whistling on a personal computer (PC). It can be used to award four grades, from the second to the fifth. These grades are designed according to the standards adopted by the school for musical whistling established by the Japanese professional whistler Moku-San. It is expected that group examinations using this system will be held in the examination centers where other general certification examinations are held. Thus, the influence of the whistle sound on the PC microphone normally used should be considered. For this purpose, we examined the feasibility of using a bone-conductive microphone for a musical whistling certificate examination system. This paper shows that the proposed system, in which bone-transmitted sounds are considered, gives good performance in a noisy environment, as demonstrated in a group examination of musical whistling using bone-transmitted sounds. The timing of a candidate's whistling tends not to match because the applause sound output from the PC was inaudible to persons older than 60 years.
Sound Health: Music Gets You Moving and More
... also be used to help young people with behavior disorders learn ways to manage their emotions. Robb’s research focuses on developing and testing music therapy interventions for children and teens with cancer and their families. In one study, music therapists helped young people undergoing high-risk ...
ERIC Educational Resources Information Center
Crawford, Renée
2017-01-01
This article reports the findings of a case study that investigated the impact of music education on students in an F-12 school in Victoria, Australia that is considered as having a high percentage of young people with a refugee background. Key findings from this research indicated that music education had a positive impact on this group of young…
Modular and Adaptive Control of Sound Processing
NASA Astrophysics Data System (ADS)
van Nort, Douglas
This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Oftentimes a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. Each of these reflects a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis
Changing the Tune: Listeners Like Music that Expresses a Contrasting Emotion
Schellenberg, E. Glenn; Corrigall, Kathleen A.; Ladinig, Olivia; Huron, David
2012-01-01
Theories of esthetic appreciation propose that (1) a stimulus is liked because it is expected or familiar, (2) a stimulus is liked most when it is neither too familiar nor too novel, or (3) a novel stimulus is liked because it elicits an intensified emotional response. We tested the third hypothesis by examining liking for music as a function of whether the emotion it expressed contrasted with the emotion expressed by music heard previously. Stimuli were 30-s happy- or sad-sounding excerpts from recordings of classical piano music. On each trial, listeners heard a different excerpt and made liking and emotion-intensity ratings. The emotional character of consecutive excerpts was repeated with varying frequencies, followed by an excerpt that expressed a contrasting emotion. As the number of presentations of the background emotion increased, liking and intensity ratings became lower compared to those for the contrasting emotion. Consequently, when the emotional character of the music was relatively novel, listeners’ responses intensified and their appreciation increased. PMID:23269918
Does Music Positively Impact Preterm Infant Outcomes?
OʼToole, Alexa; Francis, Kim; Pugsley, Lori
2017-06-01
The hospital environment leaves preterm infants (PTIs) exposed to various stressors that can disrupt their growth and development. Developmental interventions such as music may be an important strategy to mitigate PTIs' stress. This brief evaluates current evidence regarding the impact of music therapy on outcomes for PTIs. The question guiding this brief is "Do various types of music therapy positively affect physiologic indicators, feeding behaviors/length of stay (LOS), and pain management outcomes for PTIs?" CINAHL/MEDLINE Complete and PubMed databases were searched using keywords preterm infants, premature infants, preterm baby, premature baby, NICU baby, music, and music therapy. The search was limited to the past 5 years and to English-language studies evaluating the effects of music therapy on physiological indicators, feeding, pain outcomes, and length of stay. The search yielded 12 studies addressing these concerns. Music therapy was shown to positively affect physiologic indicators, feeding, length of stay, and pain outcomes for PTIs. In addition, music decreased parental stress. Thoughtful consideration should be given regarding the value of diverse types of music and parental involvement when incorporating music into an individualized plan of care. Furthermore, the development of guidelines with a focus on ambient sound reduction is an important strategy when adding music as an intervention. Further research is needed to investigate ambient sound levels in conjunction with musical interventions. In addition, the impact of various types of music, differences in gender, reduction of stress, pain for infants, and the parental role in music requires further evaluation.
Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.
Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P
2005-05-01
The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1 the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus specific plasticity and indicate that background conditions can strongly influence cortical plasticity.
Evaluating musical instruments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, D. Murray
Scientific measurements of sound generation and radiation by musical instruments are surprisingly hard to correlate with the subtle and complex judgments of instrumental quality made by expert musicians.
O'Callaghan, Clare C; McDermott, Fiona; Hudson, Peter; Zalcberg, John R
2013-02-01
This study examines music's relevance, including preloss music therapy, for 8 informal caregivers of people who died from cancer. The design was informed by constructivist grounded theory and included semistructured interviews. Bereaved caregivers were supported or occasionally challenged as their musical lives enabled a connection with the deceased. Music was often still used to improve mood and sometimes used to confront grief. Specific music, however, was sometimes avoided to minimize sadness. Continuing bonds theory's focus on connecting with the deceased through memory and imagery engagement may expand to encompass musical memories, reworking the meaning of familiar music, and discovering new music related to the deceased. Preloss music involvement, including music therapy, between dying patients and families can help in bereavement.
ERIC Educational Resources Information Center
Anyanwu, Emeka G.
2015-01-01
Notable challenges, such as mental distress, boredom, negative moods, and attitudes, have been associated with learning in the cadaver dissection laboratory (CDL). The ability of background music (BM) to enhance the cognitive abilities of students is well documented. The present study was designed to investigate the impact of BM in the CDL and on…
Neurobiology of Everyday Communication: What Have We Learned From Music?
Kraus, Nina; White-Schwoch, Travis
2016-06-09
Sound is an invisible but powerful force that is central to everyday life. Studies in the neurobiology of everyday communication seek to elucidate the neural mechanisms underlying sound processing, their stability, their plasticity, and their links to language abilities and disabilities. This sound processing lies at the nexus of cognitive, sensorimotor, and reward networks. Music provides a powerful experimental model to understand these biological foundations of communication, especially with regard to auditory learning. We review studies of music training that employ a biological approach to reveal the integrity of sound processing in the brain, the bearing these mechanisms have on everyday communication, and how these processes are shaped by experience. Together, these experiments illustrate that music works in synergistic partnerships with language skills and the ability to make sense of speech in complex, everyday listening environments. The active, repeated engagement with sound demanded by music making augments the neural processing of speech, eventually cascading to listening and language. This generalization from music to everyday communication illustrates both that these auditory brain mechanisms have a profound potential for plasticity and that sound processing is biologically intertwined with listening and language skills. A new wave of studies has pushed neuroscience beyond the traditional laboratory by revealing the effects of community music training in underserved populations. These community-based studies reinforce laboratory work and highlight how the auditory system achieves a remarkable balance between stability and flexibility in processing speech. Moreover, these community studies have the potential to inform health care, education, and social policy by lending a neurobiological perspective to their efficacy. © The Author(s) 2016.
Wu, Dan; Li, Chao-Yi; Yao, De-Zhong
2009-01-01
Background There is growing interest in the relation between the brain and music. The appealing similarity between brainwaves and the rhythms of music has motivated many scientists to seek a connection between them. A variety of translation rules have been utilized to convert brainwaves into music, most of them based mainly on spectral features of EEG. Methodology/Principal Findings In this study, audibly recognizable scale-free music was deduced from individual Electroencephalogram (EEG) waveforms. The translation rules include the direct mapping from the period of an EEG waveform to the duration of a note, the logarithmic mapping of the change of average power of EEG to music intensity according to Fechner's law, and a scale-free based mapping from the amplitude of EEG to music pitch according to the power law. To show the actual effect, we applied the deduced sonification rules to EEG segments recorded during rapid-eye-movement sleep (REM) and slow-wave sleep (SWS). The resulting music is vivid and differs between the two mental states; the melody during REM sleep sounds fast and lively, whereas that in SWS sleep is slow and tranquil. 60 volunteers evaluated 25 music pieces, 10 from REM, 10 from SWS, and 5 from white noise (WN); 74.3% experienced a happy emotion from the REM music and felt bored and drowsy when listening to the SWS music, and the average identification accuracy across all the music pieces was 86.8% (κ = 0.800, P<0.001). We also applied the method to EEG data from eyes-closed, eyes-open, and epileptic EEG, and the results showed that these mental states can be identified by listeners. Conclusions/Significance The sonification rules may identify the mental states of the brain, which provides a real-time strategy for monitoring brain activities and is potentially useful for neurofeedback therapy. PMID:19526057
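The three translation rules described in the abstract can be sketched as a single mapping function. This is a minimal illustration under stated assumptions: the function name, the scaling constant `k`, the exponent `alpha`, and the MIDI-style pitch encoding are hypothetical choices, not values from the paper.

```python
import math

def eeg_to_note(period_s, avg_power, amplitude, k=10.0, alpha=0.5):
    """Map one EEG waveform segment to one musical note per the three rules.

    period_s  : period of the EEG waveform (seconds)
    avg_power : average power of the EEG segment
    amplitude : amplitude of the EEG waveform (normalized)
    k, alpha  : hypothetical scaling constant and power-law exponent
    """
    # Rule 1: direct mapping -- the note lasts as long as the waveform's period.
    duration = period_s
    # Rule 2: Fechner's law -- intensity grows with the logarithm of power.
    intensity = k * math.log10(max(avg_power, 1e-12))
    # Rule 3: power-law (scale-free) mapping from amplitude to pitch,
    # expressed here as a MIDI-like note number around middle C (60).
    pitch = 60 + round(12 * (amplitude ** alpha - 1))
    return duration, intensity, pitch
```

A stream of such (duration, intensity, pitch) triples, one per detected waveform, would then be rendered as a melody; the monotone mappings ensure louder EEG segments sound louder and larger-amplitude waveforms sound higher.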
Beranek, Leo L; Nishihara, Noriko
2014-01-01
The Eyring/Sabine equations assume that in a large irregular room a sound wave travels in straight lines from one surface to another, that the surfaces have an average sound absorption coefficient αav, and that the mean-free-path between reflections is 4 V/Stot where V is the volume of the room and Stot is the total area of all of its surfaces. No account is taken of diffusivity of the surfaces. The 4 V/Stot relation was originally based on experimental determinations made by Knudsen (Architectural Acoustics, 1932, pp. 132-141). This paper sets out to test the 4 V/Stot relation experimentally for a wide variety of unoccupied concert and chamber music halls with seating capacities from 200 to 5000, using the measured sound strengths Gmid and reverberation times RT60,mid. Computer simulations of the sound fields for nine of these rooms (of varying shapes) were also made to determine the mean-free-paths by that method. The study shows that 4 V/Stot is an acceptable relation for mean-free-paths in the Sabine/Eyring equations except for halls of unusual shape. Also demonstrated is the proper method for calibrating the dodecahedral sound source used for measuring the sound strength G, i.e., the reverberation chamber method.
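The 4 V/Stot relation tested above is straightforward to evaluate numerically. The sketch below applies it to a hypothetical shoebox-shaped hall; the dimensions and the speed-of-sound value are invented for illustration, not taken from the paper's measured halls.

```python
def mean_free_path(volume_m3, surface_m2):
    """Sabine/Eyring mean-free-path between reflections: 4 V / S_tot."""
    return 4.0 * volume_m3 / surface_m2

# Hypothetical shoebox hall: 40 m long, 25 m wide, 18 m high.
L, W, H = 40.0, 25.0, 18.0
V = L * W * H                        # room volume (m^3)
S = 2 * (L * W + L * H + W * H)      # total surface area of all six walls (m^2)
mfp = mean_free_path(V, S)           # distance between successive reflections (m)
reflections_per_s = 343.0 / mfp      # average reflection rate at c = 343 m/s
```

For these dimensions the mean free path comes out near 16.6 m, i.e. roughly 21 reflections per second, which is the kind of quantity the Eyring/Sabine reverberation-time equations rely on.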
What do monkeys' music choices mean?
Lamont, Alexandra M
2005-08-01
McDermott and Hauser have recently shown that although monkeys show some types of preferences for sound, preferences for music are found only in humans. This suggests that music might be a relatively recent adaptation in human evolution. Here, I focus on the research methods used by McDermott and Hauser, and consider the findings in relation to infancy research and music psychology.
Cannabis Dampens the Effects of Music in Brain Regions Sensitive to Reward and Emotion
Pope, Rebecca A; Wall, Matthew B; Bisby, James A; Luijten, Maartje; Hindocha, Chandni; Mokrysz, Claire; Lawn, Will; Moss, Abigail; Bloomfield, Michael A P; Morgan, Celia J A; Nutt, David J; Curran, H Valerie
2018-01-01
Abstract Background Despite the current shift towards permissive cannabis policies, few studies have investigated the pleasurable effects users seek. Here, we investigate the effects of cannabis on listening to music, a rewarding activity that frequently occurs in the context of recreational cannabis use. We additionally tested how these effects are influenced by cannabidiol, which may offset cannabis-related harms. Methods Across 3 sessions, 16 cannabis users inhaled cannabis with cannabidiol, cannabis without cannabidiol, and placebo. We compared their response to music relative to control excerpts of scrambled sound during functional Magnetic Resonance Imaging within regions identified in a meta-analysis of music-evoked reward and emotion. All results were False Discovery Rate corrected (P<.05). Results Compared with placebo, cannabis without cannabidiol dampened response to music in bilateral auditory cortex (right: P=.005, left: P=.008), right hippocampus/parahippocampal gyrus (P=.025), right amygdala (P=.025), and right ventral striatum (P=.033). Across all sessions, the effects of music in this ventral striatal region correlated with pleasure ratings (P=.002) and increased functional connectivity with auditory cortex (right: P<.001, left: P<.001), supporting its involvement in music reward. Functional connectivity between right ventral striatum and auditory cortex was increased by cannabidiol (right: P=.003, left: P=.030), and cannabis with cannabidiol did not differ from placebo on any functional Magnetic Resonance Imaging measures. Both types of cannabis increased ratings of wanting to listen to music (P<.002) and enhanced sound perception (P<.001). Conclusions Cannabis dampens the effects of music in brain regions sensitive to reward and emotion. These effects were offset by a key cannabis constituent, cannabidiol. PMID:29025134
Music and language perception: expectations, structural integration, and cognitive sequencing.
Tillmann, Barbara
2012-10-01
Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central in music perception (i.e., which sounds are most likely to come next and at what moment should they occur?). This paper focuses on similarities in music and language cognition research, showing that music cognition research provides insight into the understanding of not only music processing but also language processing and the processing of other structured stimuli. The hypothesis of shared resources between music and language processing and of domain-general dynamic attention has motivated the development of research to test music as a means to stimulate sensory, cognitive, and motor processes. Copyright © 2012 Cognitive Science Society, Inc.
Influence of the steady background turbulence level on second sound dynamics in He II
NASA Astrophysics Data System (ADS)
Dalban-Canassy, M.; Hilton, D. K.; Sciver, S. W. Van
2007-01-01
We report complementary results to our previous publication [Dalban-Canassy M, Hilton DK, Van Sciver SW. Influence of the steady background turbulence level on second sound dynamics in He II. Adv Cryo Eng 2006;51:371-8], both of which are aimed at determining the influence of background turbulence on the breakpoint energy of second sound pulses in He II. The apparatus consists of a channel 175 mm long and 242 mm² in cross section immersed in a saturated bath of He II at 1.7 K. A heater at the bottom end generates both background turbulence, through a low-level steady heat flux (up to qs = 2.6 kW/m²), and high-intensity square second sound pulses (qp = 100 or 200 kW/m²) of variable duration Δt0 (up to 1 ms). Two superconducting filament sensors, located 25.4 mm and 127 mm above the heater, measure the temperature profiles of the traveling pulses. We present here an analysis of the measurements gathered on the top sensor, and compare them to similar results for the bottom sensor [1]. The strong dependence of the breakpoint energy on the background heat flux previously illustrated is also observed on the top sensor. The present work shows that the ratio of energy received at the top sensor to that at the bottom sensor diminishes with increasing background heat flux.
Music in film and animation: experimental semiotics applied to visual, sound and musical structures
NASA Astrophysics Data System (ADS)
Kendall, Roger A.
2010-02-01
The relationship of music to film has only recently received the attention of experimental psychologists and quantificational musicologists. This paper outlines theory, semiotical analysis, and experimental results using relations among variables of temporally organized visuals and music. 1. A comparison and contrast is developed among the ideas in semiotics and experimental research, including historical and recent developments. 2. Musicological Exploration: The resulting multidimensional structures of associative meanings, iconic meanings, and embodied meanings are applied to the analysis and interpretation of a range of film with music. 3. Experimental Verification: A series of experiments testing the perceptual fit of musical and visual patterns layered together in animations determined goodness of fit between all pattern combinations, results of which confirmed aspects of the theory. However, exceptions were found when the complexity of the stratified stimuli resulted in cognitive overload.
Leftward Lateralization of Auditory Cortex Underlies Holistic Sound Perception in Williams Syndrome
Bendszus, Martin; Schneider, Peter
2010-01-01
Background Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Methodology/Principal Findings Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. Conclusions/Significance There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties. PMID:20808792
Park, H K; Bradley, J S
2009-09-01
Subjective ratings of the audibility, annoyance, and loudness of music and speech sounds transmitted through 20 different simulated walls were used to identify better single number ratings of airborne sound insulation. The first part of this research considered standard measures such as the sound transmission class the weighted sound reduction index (R(w)) and variations of these measures [H. K. Park and J. S. Bradley, J. Acoust. Soc. Am. 126, 208-219 (2009)]. This paper considers a number of other measures including signal-to-noise ratios related to the intelligibility of speech and measures related to the loudness of sounds. An exploration of the importance of the included frequencies showed that the optimum ranges of included frequencies were different for speech and music sounds. Measures related to speech intelligibility were useful indicators of responses to speech sounds but were not as successful for music sounds. A-weighted level differences, signal-to-noise ratios and an A-weighted sound transmission loss measure were good predictors of responses when the included frequencies were optimized for each type of sound. The addition of new spectrum adaptation terms to R(w) values were found to be the most practical approach for achieving more accurate predictions of subjective ratings of transmitted speech and music sounds.
Digital Sound Synthesis Algorithms: a Tutorial Introduction and Comparison of Methods
NASA Astrophysics Data System (ADS)
Lee, J. Robert
The objectives of the dissertation are to provide both a compendium of sound-synthesis methods with detailed descriptions and sound examples, as well as a comparison of the relative merits of each method based on ease of use, observed sound quality, execution time, and data storage requirements. The methods are classified under the general headings of wavetable-lookup synthesis, additive synthesis, subtractive synthesis, nonlinear methods, and physical modelling. The nonlinear methods comprise a large group that ranges from the well-known frequency-modulation synthesis to waveshaping. The final category explores computer modelling of real musical instruments and includes numerical and analytical solutions to the classical wave equation of motion, along with some of the more sophisticated time-domain models that are possible through the prudent combination of simpler synthesis techniques. The dissertation is intended to be understandable by a musician who is mathematically literate but who does not necessarily have a background in digital signal processing. With this limitation in mind, a brief and somewhat intuitive description of digital sampling theory is provided in the introduction. Other topics such as filter theory are discussed as the need arises. By employing each of the synthesis methods to produce the same type of sound, interesting comparisons can be made. For example, a struck string sound, such as that typical of a piano, can be produced by algorithms in each of the synthesis classifications. Many sounds, however, are peculiar to a single algorithm and must be examined independently. Psychoacoustic studies were conducted as an aid in the comparison of the sound quality of several implementations of the synthesis algorithms. Other psychoacoustic experiments were conducted to supplement the established notions of which timbral issues are important in the re-synthesis of the sounds of acoustic musical instruments.
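Of the nonlinear methods surveyed in such work, frequency-modulation synthesis is the best known; a minimal Chowning-style sketch follows. The carrier/modulator frequencies, modulation index, and sample rate are arbitrary illustrative choices, not values taken from the dissertation.

```python
import math

def fm_synth(fc, fm, index, duration, sr=44100):
    """Simple FM synthesis: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
    The modulation index I controls how rich the sideband spectrum is."""
    n = int(duration * sr)
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]

# A bell-like tone: inharmonic carrier/modulator ratio, moderate index.
samples = fm_synth(fc=200.0, fm=280.0, index=5.0, duration=0.1)
```

An integer carrier-to-modulator ratio yields harmonic (pitched) spectra, while an irrational ratio such as the 200:280 one here produces the inharmonic partials characteristic of bells.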
The sound of cooperation: Musical influences on cooperative behavior.
Kniffin, Kevin M; Yan, Jubo; Wansink, Brian; Schulze, William D
2017-03-01
Music as an environmental aspect of professional workplaces has been closely studied with respect to consumer behavior while sparse attention has been given to its relevance for employee behavior. In this article, we focus on the influence of music upon cooperative behavior within decision-making groups. Based on results from two extended 20-round public goods experiments, we find that happy music significantly and positively influences cooperative behavior. We also find a significant positive association between mood and cooperative behavior. Consequently, while our studies provide partial support for the relevance of affect in relation to cooperation within groups, we also show an independently important function of happy music that fits with a theory of synchronous and rhythmic activity as a social lubricant. More generally, our findings indicate that music and perhaps other atmospheric variables that are designed to prime consumer behavior might have comparably important effects for employees and consequently warrant closer investigation. Copyright © 2016 The Authors Journal of Organizational Behavior Published by John Wiley & Sons Ltd.
The role of physics in shaping music
NASA Astrophysics Data System (ADS)
Townsend, Peter
2015-07-01
Physics and technology have played a major role in shaping the development, performance, interpretation and composition of music for many centuries. Since the twentieth century, electronics and communications have provided recording and broadcasting that give access to worldwide music and performers of many musical genres. Early scientific influence came via improved or totally new instruments, plus larger and better concert halls. Instrument examples range from developments of violins and pianos to keyed and valved woodwind and brass that offer chromatic performance. New sounds appeared through the invention of entirely new instruments, such as the saxophone or the Theremin, through to the modern electronic influence on keyboards and synthesisers. Electronic variants of guitars are effectively new instruments that have spawned totally original musical styles. All such advances have encouraged more virtuosic performance, larger halls and a wider range of audiences, and a consequent demand on, and ability of, composers to meet the new challenges. Despite this immense impact, the role of physics and technology over the last few centuries has mostly been ignored, although it was often greater than any links to arts or culture. Recorded and broadcast music has raised our expectations of performance and opened gateways to purely electronically generated sounds from the now familiar electronic keyboards and synthesisers. This brief review traces some of the highlights in musical evolution that were enabled by physics and technology, and their impact on the musical scene. The pattern from the past is clear, and so some of the probable advances in the very near future are also predicted. Many are significant, as they will impinge on our appreciation of both current and past music, as well as compositional styles. Mention is made of the difference in sound between live and recorded music, and the reasons why none of us ever have precisely the same musical experience twice, even from the same
Neurophysiological Effects of Trait Empathy in Music Listening
Wallmark, Zachary; Deblieck, Choi; Iacoboni, Marco
2018-01-01
The social cognitive basis of music processing has long been noted, and recent research has shown that trait empathy is linked to musical preferences and listening style. Does empathy modulate neural responses to musical sounds? We designed two functional magnetic resonance imaging (fMRI) experiments to address this question. In Experiment 1, subjects listened to brief isolated musical timbres while being scanned. In Experiment 2, subjects listened to excerpts of music in four conditions (familiar liked (FL)/disliked and unfamiliar liked (UL)/disliked). For both types of musical stimuli, emotional and cognitive forms of trait empathy modulated activity in sensorimotor and cognitive areas: in the first experiment, empathy was primarily correlated with activity in supplementary motor area (SMA), inferior frontal gyrus (IFG) and insula; in Experiment 2, empathy was mainly correlated with activity in prefrontal, temporo-parietal and reward areas. Taken together, these findings reveal the interactions between bottom-up and top-down mechanisms of empathy in response to musical sounds, in line with recent findings from other cognitive domains. PMID:29681804
Learning about the Dynamic Sun through Sounds
NASA Astrophysics Data System (ADS)
Quinn, M.; Peticolas, L. M.; Luhmann, J.; MacCallum, J.
2008-06-01
Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn it into sound to demonstrate what the data tells us. We present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study the coronal mass ejections (CMEs) from the Corona. One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it to make music. We demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We demonstrate a "walk across the Sun" created for this exhibit so people can hear the features on solar images. We show how pixel intensity translates into pitches from selectable scales with selectable musical scale size and octave locations. We also share our successes and lessons learned.
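The pixel-to-pitch mapping described for the "walk across the Sun" exhibit can be sketched as follows. The scale intervals, octave placement, and MIDI-style note numbering are illustrative assumptions about how such a sonification might work, not the project's actual code.

```python
def pixel_to_pitch(intensity, scale=(0, 2, 4, 5, 7, 9, 11),
                   base_note=48, octaves=3):
    """Map an 8-bit pixel intensity (0-255) onto a MIDI note drawn
    from a selectable musical scale spanning a chosen octave range."""
    steps = len(scale) * octaves                 # total selectable pitches
    idx = min(int(intensity / 256 * steps), steps - 1)
    octave, degree = divmod(idx, len(scale))     # which octave, which step
    return base_note + 12 * octave + scale[degree]

# Brighter solar-image pixels map to higher notes of a major scale.
low, high = pixel_to_pitch(0), pixel_to_pitch(255)
```

Swapping the `scale` tuple (e.g. a pentatonic (0, 2, 4, 7, 9)) or changing `octaves` reproduces the "selectable musical scale size and octave locations" the abstract mentions.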
Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers
Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari
2017-01-01
Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers either in the auditory brainstem response or in behavioral tasks, but they do show an enhanced pitch discrimination compared to Finnish speakers with less musical experience and show greater duration modulation in a complex task. These results are consistent with a ceiling effect set for certain sound features which corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real world musical situation. These results have implications for research into the specificity of plasticity in the auditory system as well as to the effects of interaction of specific language features with musical experiences. PMID:28450829
NASA Astrophysics Data System (ADS)
Ramsey, Gordon P.
2015-10-01
The uniting of two seemingly disparate subjects in the classroom provides an interesting motivation for learning. Students are interested in how these subjects can possibly be integrated into related ideas. Such is the mixture of physics and music. Both are based upon mathematics, which becomes the interlocking theme. The connecting physical properties of sound and music are waves and harmonics. The introduction of instruments, including the voice, to the musical discussion allows the introduction of more advanced physical concepts such as energy, force, pressure, fluid dynamics, and properties of materials. Suggestions on how to teach physics concepts in the context of music at many levels are presented in this paper.
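The wave/harmonic connection that links the two subjects can be shown in a one-line classroom example: an ideal vibrating string (or open pipe) supports frequencies at integer multiples of its fundamental. The fundamental of 110 Hz below is an arbitrary example, not a value from the paper.

```python
def harmonic_series(f0, n=8):
    # Frequencies of the first n harmonics of an ideal string or pipe
    # open at both ends: integer multiples of the fundamental f0.
    return [k * f0 for k in range(1, n + 1)]

# A note at 110 Hz (the A two octaves below concert A440):
harmonics = harmonic_series(110.0)
```

The relative strengths of these harmonics, shaped by the instrument's materials and geometry, are what give each instrument its characteristic timbre, which is where the more advanced topics of force, pressure, and material properties enter the discussion.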
The Moot Audition: Preparing Music Performers as Expert Listeners
ERIC Educational Resources Information Center
Mitchell, Helen; Benedict, Roger
2017-01-01
Listening is regarded as the most fundamental contact with music performers, but this is challenged by a growing body of evidence which suggests that sight is as important as sound in evaluating music performers. Music students learn traditional performance skills for the music profession, but do not learn to think critically about preparation and…
Reinforcing and discriminative stimulus properties of music in goldfish.
Shinozuka, Kazutaka; Ono, Haruka; Watanabe, Shigeru
2013-10-01
This paper investigated whether music has reinforcing and discriminative stimulus properties in goldfish. Experiment 1 examined the discriminative stimulus properties of music. The subjects were successfully trained to discriminate between two pieces of music--Toccata and Fugue in D minor (BWV 565) by J. S. Bach and The Rite of Spring by I. Stravinsky. Experiment 2 examined the reinforcing properties of sounds, including BWV 565 and The Rite of Spring. We developed an apparatus for measuring spontaneous sound preference in goldfish. Music or noise stimuli were presented depending on the subject's position in the aquarium, and the time spent in each area was measured. The results indicated that the goldfish did not show consistent preferences for music, although they showed significant avoidance of noise stimuli. These results suggest that music has discriminative but not reinforcing stimulus properties in goldfish. Copyright © 2013 Elsevier B.V. All rights reserved.
Emotion rendering in music: range and characteristic values of seven musical variables.
Bresin, Roberto; Friberg, Anders
2011-10-01
Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by systematically varying those variables. However, most studies have used a predetermined small number of levels for each variable, and the selection of these levels has often been arbitrary. The main aim of this research work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate the values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) to communicate 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order to each performer, for a total of 5 × 4 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions the performers intended to express. A two-way repeated-measures analysis of variance (ANOVA) with factors emotion and score was conducted on the participants' values separately for each of the seven musical factors. There are two main results. The first is that the musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of the five musical variables tempo, sound level, articulation, register, and instrument. These values proved to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than
Music as Active Information Resource for Players in Video Games
ERIC Educational Resources Information Center
Nagorsnick, Marian; Martens, Alke
2015-01-01
In modern video games, music can come in different shapes: it can be developed on a very high compositional level, with sophisticated sound elements like in professional film music; it can be developed on a very coarse level, underlying special situations (like danger or attack); it can also be automatically generated by sound engines. However, in…
ERIC Educational Resources Information Center
Stephens, Pam
2007-01-01
In this article, the author explores the digital artwork of Brian Evans, a composer-artist who creates visualizations of sound. Through the years Evans' love for music and visual art led him to explore ways to work concurrently with image and sound. Digital technology proved to be such a means. Digital technology is based upon the transcription of…
The influence of musical experience on lateralisation of auditory processing.
Spajdel, Marián; Jariabková, Katarína; Riecanský, Igor
2007-11-01
The influence of musical experience on free-recall dichotic listening to environmental sounds, two-tone sequences, and consonant-vowel (CV) syllables was investigated. A total of 60 healthy right-handed participants were divided into two groups according to their active musical competence ("musicians" and "non-musicians"). In both groups, we found a left ear advantage (LEA) for nonverbal stimuli (environmental sounds and two-tone sequences) and a right ear advantage (REA) for CV syllables. Dichotic listening to environmental sounds was uninfluenced by musical experience. The total accuracy of recall for two-tone sequences was higher in musicians than in non-musicians but the lateralisation was similar in both groups. For CV syllables a lower REA was found in male but not female musicians in comparison to non-musicians. The results indicate a specific sex-dependent effect of musical experience on lateralisation of phonological auditory processing.
ERIC Educational Resources Information Center
Faronii-Butler, Kishasha O.
2013-01-01
This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…
Musical Expertise and the Ability to Imagine Loudness
Bishop, Laura; Bailes, Freya; Dean, Roger T.
2013-01-01
Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant’s imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability. PMID:23460791
ERIC Educational Resources Information Center
Division of Instruction; Division of Special Education
A brochure has been prepared to give teachers of exceptional children some information about many instruments which can be used in the classroom. It is noted that exceptional children, in common with their normal classmates, have a love of beautiful music and intriguing sounds. Many of them have specific musical talents, and most of them have been…
Tervaniemi, Mari; Sannemann, Christian; Noyranen, Maiju; Salonen, Johanna; Pihko, Elina
2011-08-01
The brain basis behind musical competence in its various forms is not yet known. To determine the pattern of hemispheric lateralization during sound-change discrimination, we recorded the magnetic counterpart of the electrical mismatch negativity (MMNm) responses in professional musicians, musical participants (with high scores in the musicality tests but without professional training in music) and non-musicians. While watching a silenced video, they were presented with short sounds with frequency and duration deviants and C major chords with C minor chords as deviants. MMNm to chord deviants was stronger in both musicians and musical participants than in non-musicians, particularly in their left hemisphere. No group differences were obtained in the MMNm strength in the right hemisphere in any of the conditions or in the left hemisphere in the case of frequency or duration deviants. Thus, in addition to professional training in music, musical aptitude (combined with lower-level musical training) is also reflected in brain functioning related to sound discrimination. The present magnetoencephalographic evidence therefore indicates that the sound discrimination abilities may be differentially distributed in the brain in musically competent and naïve participants, especially in a musical context established by chord stimuli: the higher forms of musical competence engage both auditory cortices in an integrative manner. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
The Relations between Astronomy and Music in Medieval Armenia
NASA Astrophysics Data System (ADS)
Vardumyan, Arpi
2015-07-01
In the Middle Ages, astronomy and music were counted among the four sciences, together with mathematics and geometry. From ancient times, philosophers held that harmony lies at the basis of the world's creation. The Earth stood at the centre of the Universe, with the seven planets, the Sun and the Moon among them, revolving around it. Harmony also underlay music, its seven sounds corresponding to the seven planets. It was thought that this harmonic rotation produced a universal cosmic music, unattainable to the human ear only because the ear is accustomed to it. Medieval practitioners of music therapy believed that, to heal a person, his or her astrological data had first to be clarified, in order to determine in which musical mode the melody should sound to treat him or her. Comparing music with astrology, they considered music the easier to practise, because the celestial luminaries are much higher and farther from people.
Rhythmic engagement with music in infancy
Zentner, Marcel; Eerola, Tuomas
2010-01-01
Humans have a unique ability to coordinate their motor movements to an external auditory stimulus, as in music-induced foot tapping or dancing. This behavior currently engages the attention of scholars across a number of disciplines. However, very little is known about its earliest manifestations. The aim of the current research was to examine whether preverbal infants engage in rhythmic behavior to music. To this end, we carried out two experiments in which we tested 120 infants (aged 5–24 months). Infants were exposed to various excerpts of musical and rhythmic stimuli, including isochronous drumbeats. Control stimuli consisted of adult- and infant-directed speech. Infants’ rhythmic movements were assessed by multiple methods involving manual coding from video excerpts and innovative 3D motion-capture technology. The results show that (i) infants engage in significantly more rhythmic movement to music and other rhythmically regular sounds than to speech; (ii) infants exhibit tempo flexibility to some extent (e.g., faster auditory tempo is associated with faster movement tempo); and (iii) the degree of rhythmic coordination with music is positively related to displays of positive affect. The findings are suggestive of a predisposition for rhythmic movement in response to music and other metrically regular sounds. PMID:20231438
Live interactive computer music performance practice
NASA Astrophysics Data System (ADS)
Wessel, David
2002-05-01
A live-performance musical instrument can be assembled around current laptop computer technology. One adds a controller such as a keyboard or other gestural input device, a sound diffusion system, some form of connectivity processor(s) providing for audio I/O and gestural controller input, and reactive real-time native signal-processing software. A system consisting of a hand-gesture controller; software for gesture analysis and mapping, machine listening, composition, and sound synthesis; and a controllable radiation-pattern loudspeaker is described. Interactivity begins in the setup, wherein the speaker-room combination is tuned with an LMS procedure. This system was designed for improvisation. It is argued that software suitable for carrying out an improvised musical dialog with another performer poses special challenges. The processes underlying the generation of musical material must be very adaptable, capable of rapid changes in musical direction. Machine listening techniques are used to help the performer adapt to new contexts. Machine learning can play an important role in the development of such systems. In the end, as with any musical instrument, human skill is essential. Practice is required not only for the development of musically appropriate human motor programs but for the adaptation of the computer-based instrument as well.
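The LMS speaker-room tuning mentioned above is a standard least-mean-squares adaptive-filter procedure: a filter's weights are nudged toward minimizing the error between its output and a desired signal. A minimal system-identification sketch under toy assumptions; the "room" response and parameters are invented, not Wessel's system:

```python
import random

def lms_identify(x, d, taps, mu):
    """Estimate an unknown FIR response with the LMS rule w += mu * e * x."""
    w = [0.0] * taps
    for n in range(taps, len(x)):
        frame = x[n - taps + 1 : n + 1][::-1]       # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, frame))
        e = d[n] - y                                # error vs. desired signal
        w = [wi + mu * e * xi for wi, xi in zip(w, frame)]
    return w

random.seed(1)
h = [0.5, -0.3, 0.2]                  # toy "room" impulse response
x = [random.uniform(-1, 1) for _ in range(5000)]          # probe signal
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]                              # room output
w = lms_identify(x, d, taps=3, mu=0.05)
```

After adaptation, `w` approximates `h`; in a real tuning setup the inverse of such an estimate would be applied to flatten the speaker-room response.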
Music and epilepsy: a critical review.
Maguire, Melissa Jane
2012-06-01
The effect of music on patients with epileptic seizures is complex and at present poorly understood. Clinical studies suggest that the processing of music within the human brain involves numerous cortical areas, extending beyond Heschl's gyrus and working within connected networks. These networks could be recruited during a seizure manifesting as musical phenomena. Similarly, if certain areas within the network are hyperexcitable, then there is a potential that particular sounds or certain music could act as epileptogenic triggers. This occurs in the case of musicogenic epilepsy, whereby seizures are triggered by music. Although it appears that this condition is rare, the exact prevalence is unknown, as often patients do not implicate music as an epileptogenic trigger and routine electroencephalography does not use sound in seizure provocation. Music therapy for refractory epilepsy remains controversial, and further research is needed to explore the potential anticonvulsant role of music. Dopaminergic system modulation and the ambivalent action of cognitive and sensory input in ictogenesis may provide possible theories for the dichotomous proconvulsant and anticonvulsant role of music in epilepsy. The effect of antiepileptic drugs and surgery on musicality should not be underestimated. Altered pitch perception in relation to carbamazepine is rare, but health care professionals should discuss this risk or consider alternative medication particularly if the patient is a professional musician or native-born Japanese. Studies observing the effect of epilepsy surgery on musicality suggest a risk with right temporal lobectomy, although the extent of this risk and correlation to size and area of resection need further delineation. This potential risk may bring into question whether tests on musical perception and memory should form part of the preoperative neuropsychological workup for patients embarking on surgery, particularly that of the right temporal lobe. Wiley
NASA Astrophysics Data System (ADS)
O'Donnell, Michael J.; Bisnovatyi, Ilia
2000-11-01
Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer
Music Students' Perceptions of Experiential Learning at the Moot Audition
ERIC Educational Resources Information Center
Mitchell, Helen F.
2018-01-01
The music industry is built on a system of expert evaluation focused on sound, but the foundations are challenged by recent research, which suggests that sight trumps sound. This presents a challenge to music educators, who train the next generation of expert performers and listeners. The aim of this study is to investigate students' perceptions…
Carbon soundings: greenhouse gas emissions of the UK music industry
NASA Astrophysics Data System (ADS)
Bottrill, C.; Liverman, D.; Boykoff, M.
2010-01-01
Over the past decade, questions regarding how to reduce human contributions to climate change have become more commonplace and non-nation state actors—such as businesses, non-government organizations, celebrities—have increasingly become involved in climate change mitigation and adaptation initiatives. For these dynamic and rapidly expanding spaces, this letter provides an accounting of the methods and findings from a 2007 assessment of greenhouse gas (GHG) emissions in the UK music industry. The study estimates that overall GHG emissions associated with the UK music market are approximately 540 000 t CO2e per annum. Music recording and publishing accounted for 26% of these emissions (138 000 t CO2e per annum), while three-quarters (74%) derived from activities associated with live music performances (400 000 t CO2e per annum). These results have prompted a group of music industry business leaders to design campaigns to reduce the GHG emissions of their supply chains. The study has also provided a basis for ongoing in-depth research on CD packaging, audience travel, and artist touring as well as the development of a voluntary accreditation scheme for reducing GHG emissions from activities of the UK music industry.
The developmental origins of musicality.
Trehub, Sandra E
2003-07-01
The study of musical abilities and activities in infancy has the potential to shed light on musical biases or dispositions that are rooted in nature rather than nurture. The available evidence indicates that infants are sensitive to a number of sound features that are fundamental to music across cultures. Their discrimination of pitch and timing differences and their perception of equivalence classes are similar, in many respects, to those of listeners who have had many years of exposure to music. Whether these perceptual skills are unique to human listeners is not known. What is unique is the intense human interest in music, which is evident from the early days of life. Also unique is the importance of music in social contexts. Current ideas about musical timing and interpersonal synchrony are considered here, along with proposals for future research.
Music Structure Analysis from Acoustic Signals
NASA Astrophysics Data System (ADS)
Dannenberg, Roger B.; Goto, Masataka
Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g., strings, vocal, or drums. Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture.
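Boundary detection by observing marked changes in locally averaged texture, as described above, can be sketched as a novelty score: the distance between mean feature vectors in windows before and after each frame. The feature vectors below are illustrative, not the authors' features:

```python
def boundary_scores(features, half=2):
    """Novelty score per frame: distance between the mean feature
    vector just before and just after that frame."""
    scores = []
    for t in range(len(features)):
        left = features[max(0, t - half):t]
        right = features[t:t + half]
        if not left or not right:
            scores.append(0.0)          # no context at the edges
            continue
        ml = [sum(col) / len(left) for col in zip(*left)]
        mr = [sum(col) / len(right) for col in zip(*right)]
        scores.append(sum((a - b) ** 2 for a, b in zip(ml, mr)) ** 0.5)
    return scores

# Two flat "textures" with a change at frame 4: the score peaks there.
feats = [[1.0, 0.0]] * 4 + [[0.0, 1.0]] * 4
scores = boundary_scores(feats)
```

In practice the features would be spectral summaries (e.g. averaged spectra or chroma), and peaks in the score would be picked as candidate segment boundaries.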
3Mo: A Model for Music-Based Biofeedback
Maes, Pieter-Jan; Buhmann, Jeska; Leman, Marc
2016-01-01
In the domain of sports and motor rehabilitation, it is of major importance to regulate and control physiological processes and physical motion in most optimal ways. For that purpose, real-time auditory feedback of physiological and physical information based on sound signals, often termed “sonification,” has been proven particularly useful. However, the use of music in biofeedback systems has been much less explored. In the current article, we assert that the use of music, and musical principles, can have a major added value, on top of mere sound signals, to the benefit of psychological and physical optimization of sports and motor rehabilitation tasks. In this article, we present the 3Mo model to describe three main functions of music that contribute to these benefits. These functions relate the power of music to Motivate, and to Monitor and Modify physiological and physical processes. The model brings together concepts and theories related to human sensorimotor interaction with music, and specifies the underlying psychological and physiological principles. This 3Mo model is intended to provide a conceptual framework that guides future research on musical biofeedback systems in the domain of sports and motor rehabilitation. PMID:27994535
Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Huotilainen, Minna
2015-03-01
Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications. © 2015 New York Academy of Sciences.
Inside-in, alternative paradigms for sound spatialization
NASA Astrophysics Data System (ADS)
Bahn, Curtis; Moore, Stephan
2003-04-01
Arrays of widely spaced mono-directional loudspeakers (P.A.-style stereo configurations or ``outside-in'' surround-sound systems) have long provided the dominant paradigms for electronic sound diffusion. So prevalent are these models that alternatives have largely been ignored and electronic sound, regardless of musical aesthetic, has come to be inseparably associated with single-channel speakers, or headphones. We recognize the value of these familiar paradigms, but believe that electronic sound can and should have many alternative, idiosyncratic voices. Through the design and construction of unique sound diffusion structures, one can reinvent the nature of electronic sound; when allied with new sensor technologies, these structures offer alternative modes of interaction with techniques of sonic computation. This paper describes several recent applications of spherical speakers (multichannel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays (SenSAs: combinations of various sensor devices with outward-radiating multi-channel speaker arrays). This presentation introduces the development of four generations of spherical speakers-over a hundred individual speakers of various configurations-and their use in many different musical situations including live performance, recording, and sound installation. We describe the design and construction of these systems, and, more generally, the new ``voices'' they give to electronic sound.
Mapping Phonetic Features for Voice-Driven Sound Synthesis
NASA Astrophysics Data System (ADS)
Janer, Jordi; Maestre, Esteban
In applications where the human voice controls the synthesis of musical instruments sounds, phonetics convey musical information that might be related to the sound of the imitated musical instrument. Our initial hypothesis is that phonetics are user- and instrument-dependent, but they remain constant for a single subject and instrument. We propose a user-adapted system, where mappings from voice features to synthesis parameters depend on how subjects sing musical articulations, i.e. note to note transitions. The system consists of two components. First, a voice signal segmentation module that automatically determines note-to-note transitions. Second, a classifier that determines the type of musical articulation for each transition based on a set of phonetic features. For validating our hypothesis, we run an experiment where subjects imitated real instrument recordings with their voice. Performance recordings consisted of short phrases of saxophone and violin performed in three grades of musical articulation labeled as: staccato, normal, legato. The results of a supervised training classifier (user-dependent) are compared to a classifier based on heuristic rules (user-independent). Finally, from the previous results we show how to control the articulation in a sample-concatenation synthesizer by selecting the most appropriate samples.
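The user-independent classifier based on heuristic rules, mentioned above, can be sketched as simple thresholds on phonetic features of a note-to-note transition. Both the feature names and the threshold values here are invented for illustration, not taken from the paper:

```python
def classify_articulation(gap_ms, voiced_ratio):
    """Toy rule-based articulation classifier for one transition.
    gap_ms: silence between notes (hypothetical feature);
    voiced_ratio: fraction of the transition that is voiced
    (hypothetical feature). Thresholds are illustrative only."""
    if gap_ms > 80 and voiced_ratio < 0.3:
        return "staccato"   # long gap, mostly unvoiced
    if gap_ms < 20 and voiced_ratio > 0.7:
        return "legato"     # nearly continuous, mostly voiced
    return "normal"
```

A supervised, user-dependent classifier would replace these fixed rules with thresholds learned from each subject's own sung imitations.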
"A sound track of your life": music in contemporary UK funerals.
Adamson, Sue; Holloway, Margaret
2012-01-01
This article considers the role that music plays in contemporary UK funerals and the meaning that the funeral music has for bereaved families. It is based on findings from a recently completed study of 46 funerals funded by the UK Arts and Humanities Research Council. Music contributes to the public ceremony and the personal existential quest of the bereaved. It is important to both the content and process of the contemporary funeral, an event of deep cultural significance in our response as individuals and communities to death and the loss of a significant relationship. There is evidence that for many people, the music chosen and used also evokes and conveys their spirituality. Spirituality may not be intrinsic to the music but spiritual experience may result from the meaning that the music has for that particular person.
The basis of musical consonance as revealed by congenital amusia
Cousineau, Marion; McDermott, Josh H.; Peretz, Isabelle
2012-01-01
Some combinations of musical notes sound pleasing and are termed “consonant,” but others sound unpleasant and are termed “dissonant.” The distinction between consonance and dissonance plays a central role in Western music, and its origins have posed one of the oldest and most debated problems in perception. In modern times, dissonance has been widely believed to be the product of “beating”: interference between frequency components in the cochlea that has been believed to be more pronounced in dissonant than consonant sounds. However, harmonic frequency relations, a higher-order sound attribute closely related to pitch perception, has also been proposed to account for consonance. To tease apart theories of musical consonance, we tested sound preferences in individuals with congenital amusia, a neurogenetic disorder characterized by abnormal pitch perception. We assessed amusics’ preferences for musical chords as well as for the isolated acoustic properties of beating and harmonicity. In contrast to control subjects, amusic listeners showed no preference for consonance, rating the pleasantness of consonant chords no higher than that of dissonant chords. Amusics also failed to exhibit the normally observed preference for harmonic over inharmonic tones, nor could they discriminate such tones from each other. Despite these abnormalities, amusics exhibited normal preferences and discrimination for stimuli with and without beating. This dissociation indicates that, contrary to classic theories, beating is unlikely to underlie consonance. Our results instead suggest the need to integrate harmonicity as a foundation of music preferences, and illustrate how amusia may be used to investigate normal auditory function. PMID:23150582
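The beating discussed above arises when two components of nearly equal frequency are summed: by the identity sin a + sin b = 2 sin((a+b)/2) cos((a−b)/2), the pair behaves as a carrier at the mean frequency with an amplitude envelope heard as |f1 − f2| loudness fluctuations per second. A small sketch; the stimulus parameters are illustrative:

```python
import math

def beat_signal(f1, f2, fs, dur):
    """Sum of two equal-amplitude sines; close frequencies beat
    at the difference frequency |f1 - f2|."""
    n = int(fs * dur)
    return [math.sin(2 * math.pi * f1 * t / fs) +
            math.sin(2 * math.pi * f2 * t / fs) for t in range(n)]

# 440 Hz + 444 Hz: a 442 Hz carrier whose loudness beats 4 times/s.
mix = beat_signal(440.0, 444.0, fs=8000, dur=1.0)
```

Tracking the peak amplitude in short windows makes the envelope visible: it swings between roughly twice the component amplitude and near silence.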
Breaking the Sound Barrier with a Hummingbird's Index to Musical Themes.
ERIC Educational Resources Information Center
Bauer, Harry C.
1978-01-01
This review of Denys Parsons' "Directory of Tunes and Musical Themes" describes its simple but effective method of identifying musical compositions. Comparisons are made with other prominent musical reference works, particularly those of Harold Barlow and Sam Morgenstern. (JD)
Music and speech distractors disrupt sensorimotor synchronization: effects of musical training.
Białuńska, Anita; Dalla Bella, Simone
2017-12-01
Humans display a natural tendency to move to the beat of music, more than to the rhythm of any other auditory stimulus. We typically move with music, but rarely with speech. This proclivity is apparent early during development and can be further developed over the years via joint dancing, singing, or instrument playing. Synchronization of movement to the beat can thus improve with age, but also with musical experience. In a previous study, we found that music perturbed synchronization with a metronome more than speech fragments; music superiority disappeared when distractors shared isochrony and the same meter (Dalla Bella et al., PLoS One 8(8):e71945, 2013). Here, we examined if the interfering effect of music and speech distractors in a synchronization task is influenced by musical training. Musicians and non-musicians synchronized by producing finger force pulses to the sounds of a metronome while music and speech distractors were presented at one of various phase relationships with respect to the target. Distractors were familiar musical excerpts and fragments of children poetry comparable in terms of beat/stress isochrony. Music perturbed synchronization with the metronome more than speech did in both groups. However, the difference in synchronization error between music and speech distractors was smaller for musicians than for non-musicians, especially when the peak force of movement is reached. These findings point to a link between musical training and timing of sensorimotor synchronization when reacting to music and speech distractors.
NASA Astrophysics Data System (ADS)
Gurnett, Donald
2009-11-01
The popular concept of space is that it is a vacuum, with nothing of interest between the stars, planets, moons and other astronomical objects. In fact most of space is permeated by plasma, sometimes quite dense, as in the solar corona and planetary ionospheres, and sometimes quite tenuous, as in planetary radiation belts. Even less well known is that these space plasmas support and produce an astonishingly large variety of waves, the "sounds of space." In this talk I will give you a tour of these space sounds, starting with the very early discovery of "whistlers" nearly a century ago, and proceeding through my nearly fifty years of research on space plasma waves using spacecraft-borne instrumentation. In addition to being of scientific interest, some of these sounds can even be described as "musical," and have served as the basis for various musical compositions, including a production called "Sun Rings," written by the well-known composer Terry Riley, that has been performed by the Kronos Quartet to audiences all around the world.
Presence of music while eating: Effects on energy intake, eating rate and appetite sensations.
Mamalaki, Eirini; Zachari, Konstantina; Karfopoulou, Eleni; Zervas, Efthimios; Yannakoulia, Mary
2017-01-01
The role of music in energy and dietary intake of humans is poorly understood. The purpose of the present laboratory study was to examine the effect of background music, its presence and its intensity, on energy intake, eating rate and appetite feelings. The study had a randomized crossover design. Twenty-six normal weight and overweight/obese men participated in random order in three trials: the control trial (no music was playing), the 60 dB and the 90 dB music trials, while an ad libitum lunch was consumed. Visual analogue scales for hunger, fullness/satiety, as well as desire to eat were administered to the participants. Energy intake at the ad libitum lunch did not differ between trials, even when covariates were taken into account. There were no statistically significant differences between trials on meal characteristics, such as meal duration, number of servings, number of bites eaten and on appetite indices. Future studies are needed to replicate these results and investigate the effect of different types of music and/or sound. Copyright © 2016 Elsevier Inc. All rights reserved.
CREATION OF MUSIC WITH FIBER REINFORCED CONCRETE
NASA Astrophysics Data System (ADS)
Kato, Hayato; Takeuchi, Masaki; Ogura, Naoyuki; Kitahara, Yukiko; Okamoto, Takahisa
This research focuses on fiber reinforced concrete (FRC) and its performance in producing musical tones. The possibility of future musical instruments made of this concrete is discussed. Recently, the technical properties of FRC have been improved, and differences in production, such as the unit weight of binding material and the volume of fiber in the structure, hardly affect the resulting acoustics. However, the board thickness of an FRC instrument is directly related to the variety of musical tone. The musical effects of FRC were compared with those of wood in wind instruments. The sound pressure level was affected by the material, and the difference becomes especially marked at high frequencies. These differences had a great influence on the spectrum analysis of the tone in the wind instruments and on the sensory test. The results of the sensory test show dominant performance in brightness, beauty and power for the FRC instruments compared with those made of wood.
Music therapy, emotions and the heart: a pilot study.
Raglio, Alfredo; Oasi, Osmano; Gianotti, Marta; Bellandi, Daniele; Manzoni, Veronica; Goulene, Karine; Imbriani, Chiara; Badiale, Marco Stramba
2012-01-01
The autonomic nervous system plays an important role in the control of cardiac function. It has been suggested that sound and music may have effects on the autonomic control of the heart inducing emotions, concomitantly with the activation of specific brain areas, i.e. the limbic area, and they may exert potential beneficial effects. This study is a prerequisite and defines a methodology to assess the relation between changes in cardiac physiological parameters such as heart rate, QT interval and their variability and the psychological responses to music therapy sessions. We assessed the cardiac physiological parameters and psychological responses to a music therapy session. ECG Holter recordings were performed before, during and after a music therapy session in 8 healthy individuals. The different behaviors of the music therapist and of the subjects have been analyzed with a specific music therapy assessment (Music Therapy Checklist). After the session mean heart rate decreased (p = 0.05), high frequency of heart rate variability tended to be higher and QTc variability tended to be lower. During the music therapy session "affect attunements" have been found in all subjects but one. A significant emotional activation was associated to a higher dynamicity and variations of sound-music interactions. Our results may represent the rational basis for larger studies in different clinical conditions.
Human neuromagnetic steady-state responses to amplitude-modulated tones, speech, and music.
Lamminmäki, Satu; Parkkonen, Lauri; Hari, Riitta
2014-01-01
Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears' inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs. MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales. The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli. SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth. The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas
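Amplitude modulation at a given depth, as used for these stimuli, imposes a sinusoidal envelope on a carrier signal; at 100% depth the envelope dips to zero. A sketch using one common normalization convention, not necessarily the authors' exact stimulus construction:

```python
import math

def amplitude_modulate(signal, fs, fm=41.1, depth=1.0):
    """Impose a sinusoidal envelope of the given modulation depth.
    At depth 1.0 (100%) the envelope reaches zero; at depth 0.0
    the signal is unchanged. Normalized so the peak envelope is 1."""
    out = []
    for n, s in enumerate(signal):
        env = (1.0 + depth * math.sin(2 * math.pi * fm * n / fs)) / (1.0 + depth)
        out.append(s * env)
    return out

fs = 8000
carrier = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]  # 1 s tone
modulated = amplitude_modulate(carrier, fs, fm=41.1, depth=0.5)      # 50% depth
```

Averaging the recorded response in phase with the 41.1 Hz envelope, as the study does, isolates activity locked to this modulation frequency.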
Milovanov, Riia; Huotilainen, Minna; Välimäki, Vesa; Esquef, Paulo A A; Tervaniemi, Mari
2008-02-15
The main focus of this study was to examine the relationship between musical aptitude and second language pronunciation skills. We investigated whether children with superior performance in foreign language production represent musical sound features more readily in the preattentive level of neural processing compared with children with less-advanced production skills. Sound processing accuracy was examined in elementary school children by means of event-related potential (ERP) recordings and behavioral measures. Children with good linguistic skills had better musical skills as measured by the Seashore musicality test than children with less accurate linguistic skills. The ERP data accompany the results of the behavioral tests: children with good linguistic skills showed more pronounced sound-change evoked activation with the music stimuli than children with less accurate linguistic skills. Taken together, the results imply that musical and linguistic skills could partly be based on shared neural mechanisms.
Effects of music listening on depressed women in Taiwan.
Lai, Y M
1999-01-01
This study investigated the physiological and psychological effects of music listening on depressed women in Taiwan. Through the use of a pretest-posttest, control group, experimental design, the heart rate, respiratory rate, blood pressure, and immediate mood states before and after a music/sound intervention were measured in 30 women. Quantitative data were analyzed descriptively and with t tests. A qualitative questionnaire was administered to participants to elicit information related to the subjective experience of music/sound listening. Significant posttest differences were found in experimental group participants' heart rates, respiratory rates, blood pressure, and tranquil mood states. Significant posttest differences also were found in control group participants' heart rates and tranquil mood states. The results support the use of music listening as a body-mind healing modality for depressed women.
Music and the mind: the magical power of sound.
Paulson, Steve; Bharucha, Jamshed; Iyer, Vijay; Limb, Charles; Tomaino, Concetta
2013-11-01
Music has been a wonderful tool to investigate the interconnection between brain science, psychology, and human experience. Moderated by Steve Paulson, executive producer and host of To the Best of Our Knowledge, cognitive neuroscientist and musician Jamshed Bharucha, music therapy pioneer Concetta Tomaino, jazz pianist Vijay Iyer, and physician musician Charles Limb discuss the neurological basis of creativity and aesthetic judgment and the capacity of music to elicit specific emotions and to heal the body. The following is an edited transcript of the discussion that occurred December 12, 2012, 7:00-8:15 PM, at the New York Academy of Sciences in New York City. © 2013 New York Academy of Sciences.
Music Influences Hedonic and Taste Ratings in Beer
Reinoso Carvalho, Felipe; Velasco, Carlos; van Ee, Raymond; Leboeuf, Yves; Spence, Charles
2016-01-01
The research presented here focuses on the influence of background music on the beer-tasting experience. An experiment is reported in which different groups of customers tasted a beer under three different conditions (N = 231). The control group was presented with an unlabeled beer, the second group with a labeled beer, and the third group with a labeled beer together with a customized sonic cue (a short clip from an existing song). In general, the beer-tasting experience was rated as more enjoyable with music than when the tasting was conducted in silence. In particular, those who were familiar with the band that had composed the song liked the beer more after tasting it while listening to the song than those who knew the band but only saw the label while tasting. These results support the idea that customized sound-tasting experiences can complement the process of developing novel beverage (and presumably also food) events. We suggest that involving musicians and researchers alongside brewers in the process of beer development offers an interesting model for future development. Finally, we discuss the role of attention in sound-tasting experiences, and the importance that a positive hedonic reaction toward a song can have for the ensuing tasting experience. PMID:27199862
Improvisation and the Aural Tradition in Afro-American Music
ERIC Educational Resources Information Center
Brown, Marion
1973-01-01
From collective improvisation in Afro-American music arose liberated improvisations designed to exhibit personal virtuosity. But that was all that changed; spontaneity and personal sound remained the most interesting components of the music. (Author/RJ)
ERIC Educational Resources Information Center
Gillis, Leslie Myers
2013-01-01
The widespread popular music-based modern worship movement begun in the 1960's brought the styles and sounds of popular music into worship as churches sought to increase cultural connection in their worship. The worship transformation brought significant challenges. Church musicians trained in traditional skills had to adapt and incorporate skills…
Image/Music/Voice: Song Dubbing in Hollywood Musicals.
ERIC Educational Resources Information Center
Siefert, Marsha
1995-01-01
Uses the practice of song dubbing in the Hollywood film musical to explore the implications and consequences of the singing voice for imaging practices in the 1930s through 1960s. Discusses the ideological, technological, and socioeconomic basis for song dubbing. Discusses gender, race, and ethnicity patterns of image-sound practices. (SR)
Towards parameter-free classification of sound effects in movies
NASA Astrophysics Data System (ADS)
Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.
2005-08-01
The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation by detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gun shots. In this work, we utilize low-level audio features, including MFCCs and energy, to identify sound effects. Previous work has shown that the hidden Markov model (HMM) works well for speech/audio signals. However, this technique requires careful model design and parameter selection. In this work, we introduce a framework that avoids this necessity and works well with semi- and non-parametric learning algorithms.
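The MFCC-plus-energy front end mentioned above can be sketched compactly. The paper does not publish code; the parameter values and helper names here are assumptions, and the implementation is a deliberately minimal textbook version (framing, mel filterbank, DCT) rather than a production extractor:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Very small MFCC + log-energy extractor (illustrative only)."""
    # frame and window the signal
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    # power spectrum of each frame
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # triangular mel filterbank
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for j in range(1, n_mels + 1):
        l, c, r = bins[j - 1], bins[j], bins[j + 1]
        if c > l:
            fbank[j - 1, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fbank[j - 1, c:r] = (r - np.arange(c, r)) / (r - c)
    logmel = np.log(spec @ fbank.T + 1e-10)
    # DCT-II to decorrelate the log-mel energies
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    ceps = logmel @ dct.T
    # per-frame log-energy as an extra feature column
    energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    return np.hstack([ceps, energy[:, None]])

# illustrative call on 1 s of noise at 16 kHz
fs = 16000
rng = np.random.default_rng(0)
feats = mfcc(rng.standard_normal(fs), fs)
```

Each row of `feats` (one per frame) could then be fed to an HMM or to the semi-/non-parametric learners the paper argues for.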
The National Standards and Medieval Music in Middle School Choral and General Music.
ERIC Educational Resources Information Center
Hawkins, Patrick; Beegle, Amy
2003-01-01
Discusses how medieval music can be utilized in the choral and general music classroom to teach middle school students and to address the National Standards for Music Education. Provides background information on medieval music, ideas for lessons, and a glossary of key terms. (CMK)
Signal-to-background-ratio preferences of normal-hearing listeners as a function of music
NASA Astrophysics Data System (ADS)
Barrett, Jillian G.
2005-04-01
The primary purpose of speech is to convey a message. Many factors affect the listener's overall reception, several of which have little to do with the linguistic content itself, but rather with the delivery (e.g., prosody, intonation patterns, pragmatics, paralinguistic cues). Music, however, may convey a message either with or without linguistic content. In instances in which music has lyrics, one cannot assume verbal content will take precedence over sonic properties. Lyric emphasis over other aspects of music cannot be assumed. Singing introduces distortion of the vowel-consonant temporal ratio of speech, emphasizing vowels and de-emphasizing consonants. The phonemic production alterations of singing make it difficult for even those with normal hearing to understand the singer. This investigation was designed to identify singer-to-background-ratio (SBR) preferences for normal-hearing adult listeners (as opposed to SBR levels maximizing speech discrimination ability). Stimuli were derived from three different original songs, each produced in two different genres and sung by six different singers. Singer and genre were the two primary contributors to significant differences in SBR preferences, though results clearly indicate genre, style, and singer interact in different combinations for each song, each singer, and each subject in an unpredictable manner.
NASA Astrophysics Data System (ADS)
Lim, Chen Kim; Tan, Kian Lam; Yusran, Hazwanni; Suppramaniam, Vicknesh
2017-10-01
Visual language or visual representation has been used in the past few years to express knowledge graphically. One important graphical element is the fractal, and the L-System is a mathematics-based grammatical model for modelling cell development and plant topology. From the plant model, L-Systems can be interpreted as music sound and score. In this paper, LSound, a Visual Language Programming (VLP) framework, has been developed to model plants as music sound and to generate music scores, and vice versa. The objectives of this research are threefold: (i) to expand the grammar dictionary of L-Systems music based on visual programming; (ii) to design and produce a user-friendly, icon-based visual language framework for L-Systems musical score generation that helps beginners in the musical field; and (iii) to generate music scores from plant models and vice versa using the L-Systems method. This research follows a four-phase methodology in which the plant is first modelled, the music is then interpreted, the music sound is output through MIDI, and finally the score is generated. LSound is technically compared to other existing applications in terms of its capability of modelling the plant, rendering the music, and generating the sound. It has been found that LSound is a flexible framework in which the plant can be easily altered through arrow-based programming and the music score can be altered through music symbols and notes. This work encourages non-experts to understand L-Systems and music hand-in-hand.
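A toy version of the L-System-to-music idea can be sketched as follows. The rewriting rule and the symbol-to-note mapping below are illustrative inventions; LSound's actual grammar dictionary and MIDI rendering are richer:

```python
def expand(axiom, rules, iterations):
    """Rewrite an L-system string: apply all rules in parallel each pass."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Toy bracketed L-system: 'F' draws a branch segment, '+'/'-' turn,
# '[' / ']' push and pop the turtle state.
rules = {"F": "F[+F]F[-F]F"}
derivation = expand("F", rules, 2)

# Interpret drawing symbols as pitches (a hypothetical mapping):
note_map = {"F": 60, "+": 62, "-": 59}  # MIDI note numbers
melody = [note_map[ch] for ch in derivation if ch in note_map]
```

The same machinery runs in reverse in principle: a note sequence can be mapped back to symbols and parsed against the grammar, which is the "score to plant" direction the paper describes.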
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment
NASA Astrophysics Data System (ADS)
Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil
In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception capability. In order to evaluate the elevation performance of the proposed method, subjective listening tests were conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. The tests show that the degrees of elevation perceived with the proposed method are around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
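The spectral notch filtering mentioned above can be illustrated with a standard second-order (biquad) notch; the coefficient formulas follow the widely used RBJ audio EQ cookbook form, which is my choice of realization, as the paper does not specify its filter design, and the 8 kHz center frequency is only an example of the high-frequency pinna-notch region associated with elevation cues:

```python
import numpy as np

def notch_coeffs(f0, fs, q=5.0):
    """Biquad notch coefficients (RBJ audio EQ cookbook form)."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

def biquad_filter(b, a, x):
    """Direct-form II transposed biquad, sample by sample."""
    y = np.zeros_like(x)
    z1 = z2 = 0.0
    for n, xn in enumerate(x):
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y[n] = yn
    return y

# Notch out an 8 kHz tone (illustrative elevation-cue band)
fs = 44100
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * 8000 * t)
b, a = notch_coeffs(8000, fs)
y = biquad_filter(b, a, x)
```

Because the filter's zeros sit on the unit circle at the notch frequency, a steady tone at exactly 8 kHz is driven toward zero after the initial transient.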
The effect of maternal presence on premature infant response to recorded music.
Dearn, Trish; Shoemark, Helen
2014-01-01
To determine the effect of maternal presence on the physiological and behavioral status of the preterm infant when exposed to recorded music versus ambient sound. Repeated-measures randomized controlled trial. Special care nursery (SCN) in a tertiary perinatal center. Clinically stable preterm infants (n = 22) born at > 28 weeks gestation and enrolled at > 32 weeks gestation, and their mothers. Infants were exposed to lullaby music (6 minutes of ambient sound alternating with 2 × 6 minutes of recorded lullaby music) at a volume within the recommended sound level for the SCN. The mothers in the experimental group were present for the first 12 minutes (baseline and first music period), whereas the mothers in the control group were absent throughout. There was no discernible infant response to music and therefore no significant impact of maternal presence on infants' response to music over time. However, during the mothers' presence (first 12 minutes), the infants exhibited significantly higher oxygen saturation than during their absence (p = .024) and less time spent in quiet sleep after their departure, though the latter was not significant. Infants may have been unable to detect the music against the ambient soundscape. Regardless of exposure to music, the infants' physiological and behavioral regulation was affected by the presence and departure of the mothers. © 2014 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses.
NASA Astrophysics Data System (ADS)
Ideguchi, Tsuyoshi; Yoshida, Ryujyu; Ooshima, Keita
We examined how test subjects' impressions of music changed when artificial vibrations were incorporated as constituent elements of a musical composition. In this study, test subjects listened to several music samples in which different types of artificial vibration had been incorporated and then subjectively evaluated any resulting changes to their impressions of the music. The following results were obtained: i) Even if rhythm vibration is added to a silent component of a musical composition, it can effectively enhance musical fitness. This could be readily accomplished when actual sounds that had been synchronized with the vibration components were provided beforehand. ii) The music could be listened to more comfortably by adding not only natural vibration extracted from percussion instruments but also artificial vibration as tactile stimulation with intentional timing. Furthermore, it was found that the test subjects' impression of the music was affected by the characteristics of the artificial vibration. iii) Adding vibration to high-frequency areas can offer an effective and practical way of enhancing the appeal of a musical composition. iv) Movement sensations of sound and vibration could be experienced when the strengths of the sound and vibration were modified in turn. These results suggest that the intentional application of artificial vibration could amplify a listener's sensitivity to the music.
Perceptually Salient Regions of the Modulation Power Spectrum for Musical Instrument Identification.
Thoret, Etienne; Depalle, Philippe; McAdams, Stephen
2017-01-01
The ability of a listener to recognize sound sources, and in particular musical instruments from the sounds they produce, raises the question of determining the acoustical information used to achieve such a task. It is now well known that the shapes of the temporal and spectral envelopes are crucial to the recognition of a musical instrument. More recently, Modulation Power Spectra (MPS) have been shown to be a representation that potentially explains the perception of musical instrument sounds. Nevertheless, the question of which specific regions of this representation characterize a musical instrument is still open. An identification task was applied to two subsets of musical instruments: tuba, trombone, cello, saxophone, and clarinet on the one hand, and marimba, vibraphone, guitar, harp, and viola pizzicato on the other. The sounds were processed with filtered spectrotemporal modulations with 2D Gaussian windows. The most relevant regions of this representation for instrument identification were determined for each instrument and reveal the regions essential for their identification. The method used here is based on a "molecular approach," the so-called bubbles method. Globally, the instruments were correctly identified and the lower values of spectrotemporal modulations are the most important regions of the MPS for recognizing instruments. Interestingly, instruments that were confused with each other led to non-overlapping regions and were confused when they were filtered in the most salient region of the other instrument. These results suggest that musical instrument timbres are characterized by specific spectrotemporal modulations, information which could contribute to music information retrieval tasks such as automatic source recognition.
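As a rough sketch of the representation (not the auditory-model implementation used in the study), a Modulation Power Spectrum can be approximated as the magnitude of the 2-D Fourier transform of a log-magnitude spectrogram; its two axes then correspond to temporal and spectral modulation rates. Parameter choices below are illustrative:

```python
import numpy as np

def modulation_power_spectrum(signal, fs, n_fft=256, hop=64):
    """Rough MPS sketch: 2-D FFT of the log-magnitude spectrogram.

    Axes of the result index temporal modulation (along frames) and
    spectral modulation (along frequency bins). This simplifies the
    auditory-filterbank versions used in perceptual studies.
    """
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * np.hanning(n_fft)
                       for i in range(n_frames)])
    spec = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-10)
    # remove the DC offset, transform, and center zero modulation
    mps = np.abs(np.fft.fftshift(np.fft.fft2(spec - spec.mean())))
    return mps

# illustrative call on 1 s of noise at 16 kHz
rng = np.random.default_rng(1)
mps = modulation_power_spectrum(rng.standard_normal(16000), 16000)
```

The "bubbles" procedure in the paper can then be thought of as multiplying this plane by 2-D Gaussian windows and resynthesizing, so that only selected modulation regions survive in the stimulus.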
Sound-induced Interfacial Dynamics in a Microfluidic Two-phase Flow
NASA Astrophysics Data System (ADS)
Mak, Sze Yi; Shum, Ho Cheung
2014-11-01
Retrieving sound waves by fluidic means is challenging due to the difficulty of visualizing the very minute sound-induced fluid motion. This work studies the interfacial response of multiphase systems to fluctuations in the flow. We demonstrate a direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with ultra-low interfacial tension. The interface shows a passive response to sound of different frequencies with sufficiently precise time resolution, enabling the recording of musical notes and even subsequent reconstruction with high fidelity. This suggests that sensing and transmitting vibrations as tiny as those induced by sound could be realized in low-interfacial-tension systems. The robust control of the interfacial dynamics could be adopted for droplet and complex-fiber generation.
Musical Cognition at Birth: A Qualitative Study
ERIC Educational Resources Information Center
Hefer, Michal; Weintraub, Zalman; Cohen, Veronika
2009-01-01
This paper describes research on newborns' responses to music. Video observation and electroencephalogram (EEG) were collected to see whether newborns' responses to random sounds differed from their responses to music. The data collected were subjected to both qualitative and quantitative analysis. This paper will focus on the qualitative study,…
Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E
2011-04-29
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.
The music of earthquakes and Earthquake Quartet #1
Michael, Andrew J.
2013-01-01
Earthquake Quartet #1, my composition for voice, trombone, cello, and seismograms, is the intersection of listening to earthquakes as a seismologist and performing music as a trombonist. Along the way, I realized there is a close relationship between what I do as a scientist and what I do as a musician. A musician controls the source of the sound and the path it travels through their instrument in order to make sound waves that we hear as music. An earthquake is the source of waves that travel along a path through the earth until reaching us as shaking. It is almost as if the earth is a musician and people, including seismologists, are metaphorically listening and trying to understand what the music means.
Sound Exposure of Healthcare Professionals Working with a University Marching Band.
Russell, Jeffrey A; Yamaguchi, Moegi
2018-01-01
Music-induced hearing disorders are known to result from exposure to excessive levels of music of different genres. Marching band music, with its heavy emphasis on brass and percussion, is one type that is a likely contributor to music-induced hearing disorders, although specific data on sound pressure levels of marching bands have not been widely studied. Furthermore, if marching band music does lead to music-induced hearing disorders, the musicians may not be the only individuals at risk. Support personnel such as directors, equipment managers, and performing arts healthcare providers may also be exposed to potentially damaging sound pressures. Thus, we sought to explore to what degree healthcare providers receive sound dosages above recommended limits during their work with a marching band. The purpose of this study was to determine the sound exposure of healthcare professionals (specifically, athletic trainers [ATs]) who provide on-site care to a large, well-known university marching band. We hypothesized that sound pressure levels to which these individuals were exposed would exceed the National Institute for Occupational Safety and Health (NIOSH) daily percentage allowance. Descriptive observational study. Eight ATs working with a well-known American university marching band volunteered to wear noise dosimeters. During the marching band season, ATs wore an Etymotic ER-200D dosimeter whenever working with the band at outdoor rehearsals, indoor field house rehearsals, and outdoor performances. The dosimeters recorded dose percent exposure, equivalent continuous sound levels in A-weighted decibels, and duration of exposure. For comparison, a dosimeter also was worn by an AT working in the university's performing arts medicine clinic. Participants did not alter their typical duties during any data collection sessions. Sound data were collected with the dosimeters set at the NIOSH standards of 85 dBA threshold and 3 dBA exchange rate; the NIOSH 100% daily dose is
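The NIOSH daily-dose arithmetic the study relies on is standard: at the 85 dBA criterion with a 3 dB exchange rate, every 3 dB increase halves the permissible exposure time, and the daily dose is the sum of actual over permissible durations. A minimal sketch (the exposure values in the example are illustrative, not the study's data):

```python
def niosh_allowed_hours(level_dba, criterion=85.0, exchange=3.0):
    """Permissible exposure time at a given level under the NIOSH REL
    (85 dBA criterion for 8 h, 3 dB exchange rate)."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange))

def niosh_dose(exposures):
    """exposures: list of (level_dBA, duration_hours) pairs.
    Returns the daily noise dose as a percentage (100% = limit)."""
    return 100.0 * sum(hours / niosh_allowed_hours(level)
                       for level, hours in exposures)

# e.g., a hypothetical 3 h rehearsal at 94 dBA: 94 dBA permits only
# 1 h per day, so this single session is a 300% dose
dose = niosh_dose([(94.0, 3.0)])
```

The dosimeters in the study log dose percent directly, but this is the formula their readout implements.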
Meadows, Anthony; Burns, Debra S; Perkins, Susan M
2015-01-01
Previous research has demonstrated modest benefits from music-based interventions, specifically music and imagery interventions, during cancer care. However, little attention has been paid to measuring the benefits of music-based interventions using measurement instruments specifically designed to account for the multidimensional nature of music-imagery experiences. The purpose of this study was to describe the development of, and psychometrically evaluate, the Music Therapy Self-Rating Scale (MTSRS) as a measure for cancer patients engaged in supportive music and imagery interventions. An exploratory factor analysis using baseline data from 76 patients who consented to participate in a music-based intervention study during chemotherapy. Factor analysis of 14 items revealed four domains: Awareness of Body, Emotionally Focused, Personal Resources, and Treatment Specific. Internal reliability was excellent (Cronbach alphas ranging from 0.75 to 0.88) and construct and divergent-discriminant validity supported. The MTSRS is a psychometrically sound, brief instrument that captures essential elements of patient experience during music and imagery interventions. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Music, Television, and Video: Historical and Aesthetic Considerations.
ERIC Educational Resources Information Center
Burns, Gary; Thompson, Robert
Rock videos have their antecedents in film and television images, although music in films is usually background music. Television made possible the live transmission of musical numbers with visuals. The musical television commercial is an amalgam of conventions, with background music suddenly erupting into text, unheard by the characters but…
Digitizing Sound: How Can Sound Waves be Turned into Ones and Zeros?
NASA Astrophysics Data System (ADS)
Vick, Matthew
2010-10-01
From MP3 players to cell phones to computer games, we're surrounded by a constant stream of ones and zeros. Do we really need to know how this technology works? While nobody can understand everything, digital technology is increasingly making our lives a collection of "black boxes" that we can use but have no idea how they work. Pursuing scientific literacy should propel us to open up a few of these metaphorical boxes. High school physics offers opportunities to connect the curriculum to sports, art, music, and electricity, but it also offers connections to computers and digital music. Learning activities about digitizing sounds offer wonderful opportunities for technology integration and student problem solving. I used this series of lessons in high school physics after teaching about waves and sound but before optics and total internal reflection so that the concepts could be further extended when learning about fiber optics.
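The sampling-and-quantization idea behind those lessons can be shown in a few lines: measure the wave at regular intervals, snap each measurement to one of 2^bits levels, and write each level as a binary code. The helper name and parameter choices are my own, not from the article:

```python
import numpy as np

def quantize(samples, bits=4):
    """Map samples in [-1, 1] to unsigned integer codes, then to
    fixed-width strings of ones and zeros."""
    levels = 2 ** bits
    codes = np.clip(((samples + 1.0) / 2.0 * (levels - 1)).round().astype(int),
                    0, levels - 1)
    return [format(c, f"0{bits}b") for c in codes]

# Sample a 440 Hz tone at 8 kHz and turn each sample into 4 bits
fs = 8000
t = np.arange(16) / fs
wave = np.sin(2 * np.pi * 440 * t)
bitstream = quantize(wave, bits=4)
```

Students can hear the trade-off directly: fewer bits per sample means coarser steps and audible quantization noise, which motivates the 16-bit depth of CD audio.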
ERIC Educational Resources Information Center
John, Bina Ann; Cameron, Linda; Bartel, Lee
2016-01-01
Music is a distinct form of communication that manifests naturally when children are engaged in musical play regardless of their cultural backgrounds. In an ethnically diverse, urban community music school, where the majority of children represent non-western populations, the need for creativity-focused approaches that do not assume a western…
Melodic sound enhances visual awareness of congruent musical notes, but only if you can read music.
Lee, Minyoung; Blake, Randolph; Kim, Sujin; Kim, Chai-Youn
2015-07-07
Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry.
Benacchio, Simon; Mamou-Mani, Adrien; Chomette, Baptiste; Finel, Victor
2016-03-01
The vibrational behavior of musical instruments is usually studied using physical modeling and simulations. Recently, active control has proven its efficiency to experimentally modify the dynamical behavior of musical instruments. This approach could also be used as an experimental tool to systematically study fine physical phenomena. This paper proposes to use modal active control as an alternative to sound simulation to study the complex case of the coupling between classical guitar strings and soundboard. A comparison between modal active control and sound simulation investigates the advantages, the drawbacks, and the limits of these two approaches.
Shim, Hyunyong; Lee, Seungwan; Koo, Miseung; Kim, Jinsook
2018-02-26
To prevent noise-induced hearing loss caused by listening to music with personal listening devices among young adults, this study aimed to measure the output levels of an MP3 player and to identify preferred listening levels (PLLs) depending on earphone type, music genre, and listening duration. Twenty-two normal-hearing young adults (mean age = 18.82 years, standard deviation = 0.57) participated. Each participant was asked to select his or her most preferred listening level when listening to Korean ballad or dance music with an earbud or an over-the-ear earphone for 30 or 60 minutes. One side of the earphone was connected to the participant's better ear and the other side was connected to a sound level meter via a 2 or 6 cc coupler. Depending on earphone type, music genre, and listening duration, equivalent continuous A-weighted sound levels (LAeq) and maximum time-weighted A-weighted sound levels, in dBA, were measured. Neither main nor interaction effects of the PLLs among the three factors were significant. Overall output levels of earbuds were about 10-12 dBA greater than those of over-the-ear earphones. The PLLs were 1.73 dBA greater for earbuds than for over-the-ear earphones. The average PLL for ballad was higher than for dance music. The PLLs at LAeq for both music genres were greatest at 0.5 kHz, followed by 1, 0.25, 2, 4, 0.125, and 8 kHz, in that order. The PLLs did not differ significantly when listening to Korean ballad or dance music as functions of earphone type, music genre, and listening duration. However, over-the-ear earphones seem more suitable for preventing noise-induced hearing loss when listening to music, showing lower PLLs, possibly due to isolation from background noise by covering the ears.
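The LAeq measure used in such studies is an energy average of squared sound pressure over the measurement interval, expressed in dB re 20 µPa. A minimal sketch, assuming the input is an already A-weighted pressure signal (a real meter applies the A-weighting filter first):

```python
import numpy as np

def laeq(pressure_pa, p_ref=20e-6):
    """Equivalent continuous sound level of an (already A-weighted)
    pressure signal, in dB re 20 uPa."""
    return 10.0 * np.log10(np.mean(pressure_pa ** 2) / p_ref ** 2)

def combine_laeq(levels_dba, durations):
    """Energy-average several LAeq segments of given durations
    (e.g., a 30-minute and a 60-minute listening session)."""
    durations = np.asarray(durations, dtype=float)
    weights = durations / durations.sum()
    return 10.0 * np.log10(
        np.sum(weights * 10.0 ** (np.asarray(levels_dba) / 10.0)))

# A steady 1 Pa (RMS) signal corresponds to about 94 dB
level = laeq(np.ones(1000))
```

Because the average is taken on energy, a single loud segment dominates the combined level, which is why long-duration PLL comparisons matter for hearing-loss risk.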
Experimenting with string musical instruments
NASA Astrophysics Data System (ADS)
LoPresto, Michael C.
2012-03-01
What follows are several investigations involving string musical instruments developed for and used in a Science of Sound & Light course. The experiments make use of a guitar, orchestral string instruments and data collection and graphing software. They are designed to provide students with concrete examples of how mathematical formulae, when used in physics, represent reality that can actually be observed, in this case, the operation of string musical instruments.
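The kind of formula such a course verifies with real instruments is the ideal-string relation f_n = (n / 2L) * sqrt(T / mu), connecting harmonic frequencies to string length, tension, and linear density. A worked example (the numeric values below are illustrative, not from the article):

```python
import math

def string_frequency(n, length_m, tension_n, mu_kg_per_m):
    """n-th harmonic of an ideal string: f_n = (n / 2L) * sqrt(T / mu)."""
    return (n / (2.0 * length_m)) * math.sqrt(tension_n / mu_kg_per_m)

# A guitar string tuned so its fundamental lands on A4 = 440 Hz:
L = 0.65                        # scale length in metres
mu = 0.6e-3                     # linear density, kg/m (illustrative)
T = (2 * L * 440.0) ** 2 * mu   # tension needed for f_1 = 440 Hz
f1 = string_frequency(1, L, T, mu)
f2 = string_frequency(2, L, T, mu)  # first overtone, one octave up
```

Students can check the prediction directly: halving the sounding length (fretting at the 12th fret) or quadrupling the tension should each raise the pitch by an octave.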
ERIC Educational Resources Information Center
O'Callaghan, Clare C.; McDermott, Fiona; Hudson, Peter; Zalcberg, John R.
2013-01-01
This study examines music's relevance, including preloss music therapy, for 8 informal caregivers of people who died from cancer. The design was informed by constructivist grounded theory and included semistructured interviews. Bereaved caregivers were supported or occasionally challenged as their musical lives enabled a connection with the…
The Multisensory Sound Lab: Sounds You Can See and Feel.
ERIC Educational Resources Information Center
Lederman, Norman; Hendricks, Paula
1994-01-01
A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…
Navigating the Maze of Music Rights
ERIC Educational Resources Information Center
DuBoff, Leonard D.
2007-01-01
Music copyright is one of the most complex areas of intellectual property law. To begin with, there is a copyright in notated music and a copyright in accompanying lyrics. When the piece is performed, there is a copyright in the performance that is separate and apart from the copyright in the underlying work. If a sound recording is used in…
Ekström, Seth-Reino; Borg, Erik
2011-01-01
The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.
Language Experience Affects Grouping of Musical Instrument Sounds
ERIC Educational Resources Information Center
Bhatara, Anjali; Boll-Avetisyan, Natalie; Agus, Trevor; Höhle, Barbara; Nazzi, Thierry
2016-01-01
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of…
Incentive Mechanisms for Mobile Music Distribution
NASA Astrophysics Data System (ADS)
Furini, Marco; Montangero, Manuela
The mobile digital world is seen as an important business opportunity for two main reasons: the widespread use of cellphones (more than two billion [30], most of them with sound features) and the pervasiveness of mobile technologies. As a result, the music industry and telecoms are bringing the successful Internet-based music market strategy into the mobile scenario: record labels are setting up agreements with cellphone network providers (Sprint, Verizon, Vodafone, and Orange, to name a few) to offer a music download service in the mobile scenario as well. The strategy is to use wireless channels to distribute music content in an attempt to replicate the success of the Internet-based download scenario.
ERIC Educational Resources Information Center
Cassidy, Gianna; MacDonald, Raymond A. R.
2007-01-01
The study investigated the effects of music with high arousal potential and negative affect (HA), music with low arousal potential and positive affect (LA), and everyday noise, on the cognitive task performance of introverts and extraverts. Forty participants completed five cognitive tasks: immediate recall, free recall, numerical and delayed…
Stanton, Caitlin R.; Chu, Alexandria; Collin, Jeff
2011-01-01
Background: Tobacco companies target young adults through marketing strategies that use bars and nightclubs to promote smoking. As restrictions increasingly limit promotions, music marketing has become an important vehicle for tobacco companies to shape brand image, generate brand recognition and promote tobacco. Methods: Analysis of previously secret tobacco industry documents from British American Tobacco, available at http://legacy.library.ucsf.edu. Results: In 1995, British American Tobacco (BAT) initiated a partnership with London’s Ministry of Sound (MOS) nightclub to promote Lucky Strike cigarettes to establish relevance and credibility among young adults in the UK. In 1997, BAT extended their MOS partnership to China and Taiwan to promote State Express 555. BAT sought to transfer values associated with the MOS lifestyle brand to its cigarettes. The BAT/MOS partnership illustrates the broad appeal of international brands across different regions of the world. Conclusion: Transnational tobacco companies like BAT are not only striving to stay contemporary with young adults through culturally relevant activities such as those provided by MOS but they are also looking to export their strategies to regions across the world. Partnerships like that between BAT and MOS skirt marketing restrictions recommended by the World Health Organization’s Framework Convention on Tobacco Control. The global scope and success of the MOS program emphasize the challenge for national regulations to restrict such promotions. PMID:20159772
Oldies, Music Rights, and the Digital Age
ERIC Educational Resources Information Center
McDonald, Peter
2005-01-01
The author discusses the issue of copyright, oldies, and digital preservation. He examines efforts being made to create digital sound repositories for music recorded prior to 1970 at such places as Yale, Syracuse, the New York Public Library, and the Library of Congress. These issues are explored by contrasting the music industry's concern for loss…
Language, music, syntax and the brain.
Patel, Aniruddh D
2003-07-01
The comparative study of music and language is drawing an increasing amount of research interest. Like language, music is a human universal involving perceptually discrete elements organized into hierarchically structured sequences. Music and language can thus serve as foils for each other in the study of brain mechanisms underlying complex sound processing, and comparative research can provide novel insights into the functional and neural architecture of both domains. This review focuses on syntax, using recent neuroimaging data and cognitive theory to propose a specific point of convergence between syntactic processing in language and music. This leads to testable predictions, including the prediction that syntactic comprehension problems in Broca's aphasia are not selective to language but influence music perception as well.
Scanning silence: mental imagery of complex sounds.
Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz
2005-07-15
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of the stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.
Palmiero, Massimiliano; Nori, Raffaella; Rogolino, Carmelo; D'amico, Simonetta; Piccardi, Laura
2016-08-01
Sex differences in visuospatial abilities are long debated. Men generally outperform women, especially in wayfinding or learning a route or a sequence of places. These differences might depend on women's disadvantage in underlying spatial competences, such as mental rotation, and on the strategies used, as well as on emotions and on self-belief about navigational skills, not related to actual skill-levels. In the present study, sex differences in visuospatial and navigational working memory in emotional contexts were investigated. Participants' mood was manipulated by background music (positive, negative or neutral) while performing on the Corsi Block-tapping Task (CBT) and Walking Corsi (WalCT) test. In order to assess the effectiveness of mood manipulation, participants filled in the Positive and Negative Affect Schedule before and after carrying out the visuospatial tasks. Firstly, results showed that after mood induction, only the positive affect changed, whereas the negative affect remained unchanged regardless of mood induction and sex. This finding is in line with the main effect of 'group' on all tests used: the positive music group scored significantly higher than other groups. Secondly, although men outperformed women in the CBT forward condition and in the WalCT forward and backward conditions, they scored higher than women only in the WalCT with the negative background music. This means that mood cannot fully explain sex differences in visuospatial and navigational working memory. Our results suggest that sex differences in the CBT and WalCT can be better explained by differences in spatial competences rather than by emotional contexts.
Musical Creativity in Slovenian Elementary Schools
ERIC Educational Resources Information Center
Rozman, Janja Crcinovic
2009-01-01
Background: The Slovenian music education curriculum for the first years of elementary school emphasises the following musical activities in the classroom: singing, playing instruments, listening to music, movement to music and musical creativity. In the field of musical creativity, there are two activities where students can be original and…
Performing Theory: Playing in the Music Therapy Discourse.
Kenny, Carolyn
2015-01-01
Performative writing is an art form that seeks to enliven our discourse by including the senses as a primary source of information processing. Through performative writing, one is seduced into engaging with the aesthetic. My art is music. My craft is Music Therapy. My theme is performing theory. Listen to the sound and silence of words, phrases, punctuation, syllables, format. My muses? I thank D. Soyini Madison, Ron Pelias, Philip Glass, Elliot Eisner, and Tom Barone for inspiration, and my teachers/Indigenous Elders and knowledge keepers who embraced the long tradition of oral transmission of knowledge and the healing power of sound. Stay, stay in the presence of the aesthetic. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Self-generated sounds of locomotion and ventilation and the evolution of human rhythmic abilities.
Larsson, Matz
2014-01-01
It has been suggested that the basic building blocks of music mimic sounds of moving humans, and because the brain was primed to exploit such sounds, they eventually became incorporated in human culture. However, that raises further questions. Why do genetically close, culturally well-developed apes lack musical abilities? Did our switch to bipedalism influence the origins of music? Four hypotheses are raised: (1) Human locomotion and ventilation can mask critical sounds in the environment. (2) Synchronization of locomotion reduces that problem. (3) Predictable sounds of locomotion may stimulate the evolution of synchronized behavior. (4) Bipedal gait and the associated sounds of locomotion influenced the evolution of human rhythmic abilities. Theoretical models and research data suggest that noise of locomotion and ventilation may mask critical auditory information. People often synchronize steps subconsciously. Human locomotion is likely to produce more predictable sounds than those of non-human primates. Predictable locomotion sounds may have improved our capacity of entrainment to external rhythms and to feel the beat in music. A sense of rhythm could aid the brain in distinguishing among sounds arising from discrete sources and also help individuals to synchronize their movements with one another. Synchronization of group movement may improve perception by providing periods of relative silence and by facilitating auditory processing. The adaptive value of such skills to early ancestors may have been keener detection of prey or stalkers and enhanced communication. Bipedal walking may have influenced the development of entrainment in humans and thereby the evolution of rhythmic abilities.
Affective priming effects of musical sounds on the processing of word meaning.
Steinbeis, Nikolaus; Koelsch, Stefan
2011-03-01
Recent studies have shown that music is capable of conveying semantically meaningful concepts. Several questions have subsequently arisen particularly with regard to the precise mechanisms underlying the communication of musical meaning as well as the role of specific musical features. The present article reports three studies investigating the role of affect expressed by various musical features in priming subsequent word processing at the semantic level. By means of an affective priming paradigm, it was shown that both musically trained and untrained participants evaluated emotional words congruous to the affect expressed by a preceding chord faster than words incongruous to the preceding chord. This behavioral effect was accompanied by an N400, an ERP typically linked with semantic processing, which was specifically modulated by the (mis)match between the prime and the target. This finding was shown for the musical parameter of consonance/dissonance (Experiment 1) and then extended to mode (major/minor) (Experiment 2) and timbre (Experiment 3). Seeing that the N400 is taken to reflect the processing of meaning, the present findings suggest that the emotional expression of single musical features is understood by listeners as such and is probably processed on a level akin to other affective communications (i.e., prosody or vocalizations) because it interferes with subsequent semantic processing. There were no group differences, suggesting that musical expertise does not have an influence on the processing of emotional expression in music and its semantic connotations.
Uğraş, Gülay Altun; Yıldırım, Güven; Yüksel, Serpil; Öztürkçü, Yusuf; Kuzdere, Mustafa; Öztekin, Seher Deniz
2018-05-01
The purpose of this study was to determine the effect of three different types of music on patients' preoperative anxiety. This randomized controlled trial included 180 patients who were randomly divided into four groups. While the control group did not listen to music, the experimental groups listened to natural sounds, Classical Turkish Music, or Western Music, respectively, for 30 min. The State Anxiety Inventory (STAI-S), systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR) and cortisol levels were checked. The post-music STAI-S, SBP, DBP, HR and cortisol levels of the patients in the music groups were significantly lower than their pre-music levels. All types of music decreased STAI-S, SBP, and cortisol levels; in addition, natural sounds reduced DBP, and Classical Turkish Music decreased both DBP and HR. All types of music had an effect on reducing patients' preoperative anxiety, with Classical Turkish Music being the most effective. Copyright © 2018 Elsevier Ltd. All rights reserved.
Understanding music with cochlear implants
Bruns, Lisa; Mürbe, Dirk; Hahne, Anja
2016-01-01
Direct stimulation of the auditory nerve via a Cochlear Implant (CI) enables profoundly hearing-impaired people to perceive sounds. Many CI users find language comprehension satisfactory, but music perception is generally considered difficult. However, music contains different dimensions which might be accessible in different ways. We aimed to highlight three main dimensions of music processing in CI users which rely on different processing mechanisms: (1) musical discrimination abilities, (2) access to meaning in music, and (3) subjective music appreciation. All three dimensions were investigated in two CI user groups (post- and prelingually deafened CI users, all implanted as adults) and a matched normal hearing control group. The meaning of music was studied by using event-related potentials (with the N400 component as marker) during a music-word priming task while music appreciation was gathered by a questionnaire. The results reveal a double dissociation between the three dimensions of music processing. Despite impaired discrimination abilities of both CI user groups compared to the control group, appreciation was reduced only in postlingual CI users. While musical meaning processing was restorable in postlingual CI users, as shown by a N400 effect, data of prelingual CI users lack the N400 effect and indicate previous dysfunctional concept building. PMID:27558546
Topographic EEG activations during timbre and pitch discrimination tasks using musical sounds.
Auzou, P; Eustache, F; Etevenon, P; Platel, H; Rioux, P; Lambert, J; Lechevalier, B; Zarifian, E; Baron, J C
1995-01-01
Successive auditory stimulation sequences were presented binaurally to 18 young normal volunteers. Five conditions were investigated: two reference tasks, assumed to involve passive listening to pairs of musical sounds, and three discrimination tasks, one dealing with pitch, and two with timbre (either with or without the attack). A symmetrical montage of 16 EEG channels was recorded for each subject across the different conditions. Two quantitative parameters of EEG activity were compared among the different sequences within five distinct frequency bands. As compared to a rest (no stimulation) condition, both passive listening conditions led to changes in primary auditory cortex areas. The discrimination tasks for pitch and timbre led to right hemisphere EEG changes, organized in two poles: an anterior one and a posterior one. After discussing the electrophysiological aspects of this work, these results are interpreted in terms of a network including the right temporal neocortex and the right frontal lobe to maintain the acoustical information in an auditory working memory necessary to carry out the discrimination task.
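The abstract does not specify how the quantitative EEG parameters were derived; as a hedged illustration only, per-band spectral power within five classical frequency bands can be computed from a simple periodogram roughly as follows (band edges, sampling rate, and function names are illustrative assumptions, not taken from the study):

```python
import numpy as np

# Illustrative band edges in Hz; the study's actual bands are not stated.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal: np.ndarray, fs: float) -> dict:
    """Absolute power per band from a periodogram of one EEG channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Sanity check: a pure 10 Hz sinusoid should concentrate its power
# in the alpha band (8-13 Hz).
fs = 250.0
t = np.arange(0, 2, 1 / fs)
p = band_powers(np.sin(2 * np.pi * 10 * t), fs)
```

In practice, EEG studies of this era typically compared such band-power values (and related spectral parameters) between task conditions per electrode.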
Kopacz, Malgorzata
2005-01-01
The purpose of this scientific study was to determine how personality traits, as classified by Cattell, influence preferences regarding musical elements. The subject group consisted of 145 students, male and female, chosen at random from different Polish universities. For the purpose of determining their personality traits the participants completed the 16PF Questionnaire (Cattell, Saunders, & Stice, 1957; Russel & Karol, 1993), in its Polish adaptation by Choynowski (Nowakowska, 1970). The participants' musical preferences were determined by their completing a Questionnaire of Musical Preferences (specifically created for the purposes of this research), in which respondents indicated their favorite piece of music. Next, on the basis of the Questionnaire of Musical Preferences, a list of the works of music chosen by the participants was compiled. All pieces were collected on CDs and analyzed to separate out their basic musical elements. The statistical analysis shows that some personality traits: Liveliness (Factor F), Social Boldness (Factor H), Vigilance (Factor L), Openness to Change (Factor Q1), Extraversion (a general factor) have an influence on preferences regarding musical elements. Important in the subjects' musical preferences were found to be those musical elements having stimulative value and the ability to regulate the need for stimulation. These are: tempo, rhythm in relation to metrical basis, number of melodic themes, sound voluminosity, and meter.
How do musical tonality and experience affect visual working memory?
Yang, Hua; Lu, Jing; Gong, Diankun; Yao, Dezhong
2016-01-20
The influence of music on the human brain has continued to attract increasing attention from neuroscientists and musicologists. Currently, tonal music is widely present in people's daily lives; however, atonal music has gradually become an important part of modern music. In this study, we conducted two experiments: the first tested for differences in the perceived distractibility of tonal music versus atonal music. The second experiment tested how tonal and atonal music affect visual working memory by comparing musicians and nonmusicians who were placed in contexts with background tonal music, atonal music, or silence. They were instructed to complete a delay matching memory task. The results show that musicians and nonmusicians evaluate the distractibility of tonal and atonal music differently, possibly indicating that long-term training leads to a higher auditory perception threshold among musicians. For the working memory task, musicians reacted faster than nonmusicians in all background music cases, and musicians took more time to respond in the tonal background music condition than in the other conditions. Therefore, our results suggest that for a visual memory task, background tonal music may occupy more cognitive resources than atonal music or silence for musicians, leaving fewer resources for the memory task. Moreover, the musicians outperformed the nonmusicians, possibly because of their higher sensitivity to background music; this needs to be confirmed in a further longitudinal study.
A Sound Education for All: Multicultural Issues in Music Education
ERIC Educational Resources Information Center
Johnson, Jr., Bob L.
2004-01-01
Establishing the legitimacy of the arts within the larger school curriculum is a defining issue in arts education. Within the context of this perennial challenge, this article examines two multicultural issues in music education: equal music education opportunity and the idiomatic hegemony of the Western classical tradition. Discussions of the…
Student's music exposure: Full-day personal dose measurements.
Washnik, Nilesh Jeevandas; Phillips, Susan L; Teglas, Sandra
2016-01-01
Previous studies have shown that collegiate level music students are exposed to potentially hazardous sound levels. Compared to professional musicians, collegiate level music students typically do not perform as frequently, but they are exposed to intense sounds during practice and rehearsal sessions. The purpose of the study was to determine the full-day exposure dose including individual practice and ensemble rehearsals for collegiate student musicians. Sixty-seven college students of classical music were recruited representing 17 primary instruments. Of these students, 57 completed 2 days of noise dose measurements using a Cirrus doseBadge programmed according to the National Institute for Occupational Safety and Health criterion. Sound exposure was measured for 2 days from morning to evening, ranging from 7 to 9 h. Twenty-eight out of 57 (49%) student musicians exceeded a 100% daily noise dose on at least 1 day of the two measurement days. Eleven student musicians (19%) exceeded 100% daily noise dose on both days. Fourteen students exceeded 100% dose during large ensemble rehearsals and eight students exceeded 100% dose during individual practice sessions. Approximately half of the student musicians exceeded 100% noise dose on a typical college schedule. This finding indicates that a large proportion of collegiate student musicians are at risk of developing noise-induced hearing loss due to hazardous sound levels. Considering the current finding, there is a need to conduct hearing conservation programs in all music schools, and to educate student musicians about the use and importance of hearing protection devices.
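For context, the NIOSH criterion referenced here defines 100% daily dose as 8 hours at 85 dBA, with the allowable time halving for every 3 dB increase (the 3-dB exchange rate). A minimal sketch of that computation (function names are illustrative, not from the study):

```python
def allowed_hours(level_dba: float) -> float:
    """Permissible exposure time at a given level under the NIOSH
    criterion: 8 h at 85 dBA, halved for every 3 dB increase."""
    return 8.0 / (2.0 ** ((level_dba - 85.0) / 3.0))

def daily_dose(exposures: list[tuple[float, float]]) -> float:
    """Daily noise dose in percent from (level_dBA, duration_hours)
    pairs; 100% is the full allowable daily exposure."""
    return 100.0 * sum(t / allowed_hours(level) for level, t in exposures)

# Hypothetical student day: a 2-hour ensemble rehearsal at 94 dBA plus
# a 1-hour individual practice session at 91 dBA.
dose = daily_dose([(94.0, 2.0), (91.0, 1.0)])
print(f"{dose:.0f}% of the daily allowable dose")  # prints "250% ..."
```

A dosimeter such as the doseBadge integrates this continuously rather than from a handful of fixed-level segments, but the exchange-rate arithmetic is the same.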
Subcortical processing of speech regularities underlies reading and music aptitude in children
2011-01-01
Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input.
Effects of Asymmetric Cultural Experiences on the Auditory Pathway: Evidence from Music
Wong, Patrick C. M.; Perrachione, Tyler K.; Margulis, Elizabeth Hellmuth
2009-01-01
Cultural experiences come in many different forms, such as immersion in a particular linguistic community, exposure to faces of people with different racial backgrounds, or repeated encounters with music of a particular tradition. In most circumstances, these cultural experiences are asymmetric, meaning one type of experience occurs more frequently than other types (e.g., a person raised in India will likely encounter the Indian todi scale more so than a Westerner). In this paper, we will discuss recent findings from our laboratories that reveal the impact of short- and long-term asymmetric musical experiences on how the nervous system responds to complex sounds. We will discuss experiments examining how musical experience may facilitate the learning of a tone language, how musicians develop neural circuitries that are sensitive to musical melodies played on their instrument of expertise, and how even everyday listeners who have little formal training are particularly sensitive to music of their own culture(s). An understanding of these cultural asymmetries is useful in formulating a more comprehensive model of auditory perceptual expertise that considers how experiences shape auditory skill levels. Such a model has the potential to aid in the development of rehabilitation programs for the efficacious treatment of neurologic impairments. PMID:19673772
Music-induced positive mood broadens the scope of auditory attention
Putkinen, Vesa; Makkonen, Tommi; Eerola, Tuomas
2017-01-01
Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicited responses related to early sound encoding (N1/MMN) and bottom-up attention capture (P3a) while target sounds in the attended channel elicited a response related to top-down-controlled processing of task-relevant stimuli (P3b). For the subjects in a happy mood, the N1/MMN responses to the distractor sounds were enlarged while the P3b elicited by the target sounds was diminished. Behaviorally, these subjects tended to show heightened error rates on target trials following the distractor sounds. Thus, the ERP and behavioral results indicate that the subjects in a happy mood allocated their attentional resources more diffusely across the attended and the to-be-ignored channels. Therefore, the current study extends previous research on the effects of mood on visual attention and indicates that even unfamiliar instrumental music can broaden the scope of auditory attention via its effects on mood. PMID:28460035
The functions of music and their relationship to music preference in India and Germany.
Schäfer, Thomas; Tipandjan, Arun; Sedlmeier, Peter
2012-01-01
Is the use of music in everyday life a culturally universal phenomenon? And do the functions served by music contribute to the development of music preferences regardless of the listener's cultural background? The present study explored similarities and dissimilarities in the functions of music listening and their relationship to music preferences in two countries with different cultural backgrounds: India as an example of a collectivistic society and Germany as an example of an individualistic society. Respondents were asked to what degree their favorite music serves several functions in their life. The functions were summarized in seven main groups: background entertainment, prompt for memories, diversion, emotion regulation, self-regulation, self-reflection, and social bonding. Results indicate a strong similarity of the functions of people's favorite music for Indian and German listeners. Among the Indians, all of the seven functions were rated as meaningful; among the Germans, this was the case for all functions except emotion regulation. However, a pronounced dissimilarity was found in the predictive power of the functions of music for the strength of music preference, which was much stronger for Germans than for Indians. In India, the functions of music most predictive for music preference were diversion, self-reflection, and social bonding. In Germany, the most predictive functions were emotion regulation, diversion, self-reflection, prompt for memories, and social bonding. It is concluded that potential cultural differences hardly apply to the functional use of music in everyday life, but they do so with respect to the impact of the functions on the development of music preference. The present results are consistent with the assumption that members of a collectivistic society tend to set a higher value on their social and societal integration and their connectedness to each other than do members of individualistic societies.
Lima, César F; Garrett, Carolina; Castro, São Luís
2013-01-01
Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected musical and prosodic emotions differently. This dissociation indicates that the mechanisms underlying the two domains are partly independent.
A little bit less would be great: adolescents' opinion towards music levels.
Gilles, Annick; Thuy, Inge; De Rycke, Els; Van de Heyning, Paul
2014-01-01
Many music organizations oppose restrictive noise regulations, fearing a decrease in the number of adolescents attending music events. The present study consists of two research parts evaluating, on the one hand, the youth's attitudes toward the sound levels at indoor as well as outdoor musical activities and, on the other hand, the effect of stricter noise regulations on the party behavior of adolescents and young adults. In the first research part, an interview was conducted during a music event at a youth club. A total of 41 young adults were questioned twice concerning their opinion of the intensity levels of the music: once when the sound level was 98 dB(A) LAeq,60min and once when the sound level was increased to 103 dB(A) LAeq,60min. Some additional questions concerning hearing protection (HP) use and attitudes toward stricter noise regulations were asked. In the second research part, an extended version of the questionnaire, with the addition of some questions concerning the reasons for using or not using HP at music events, was published online and completed by 749 young adults. During the interview, 51% considered a level of 103 dB(A) LAeq,60min too loud, compared with 12% at a level of 98 dB(A) LAeq,60min. For the other questions, the answers were similar across both research parts. Current sound levels at music venues were often considered too loud. More than 80% held a positive attitude toward stricter noise regulations and reported that they would not alter their party behavior if the sound levels were decreased. The main reasons given for the low use of HP were that adolescents forget to use it, consider it uncomfortable, or have never even thought about using it. These results suggest that adolescents do not demand excessive noise levels and that stricter noise regulation would not influence the party behavior of youngsters.
Temporal modulations in speech and music.
Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David
2017-10-01
Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms: the slow (0.25-32 Hz) temporal modulations in sound intensity. We compare the modulation properties of speech and music, analyzing these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 Hz and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and their neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
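The modulation-spectrum analysis described in this abstract can be illustrated with a minimal sketch: extract a slow intensity envelope from a signal, then take the spectrum of that envelope. This is not the authors' pipeline (they analyzed large corpora with dedicated filter banks); the frame length, sampling rates, and synthetic test signal below are illustrative assumptions.

```python
import math

def envelope(signal, fs, frame_len=0.02):
    # RMS energy per 20 ms frame approximates the slow intensity envelope
    n = int(fs * frame_len)
    return [math.sqrt(sum(x * x for x in signal[i:i + n]) / n)
            for i in range(0, len(signal) - n, n)]

def modulation_spectrum(env, env_fs, max_hz=20):
    # magnitude of a naive DFT of the mean-removed envelope, up to max_hz
    mean = sum(env) / len(env)
    e = [x - mean for x in env]
    N = len(e)
    spec = {}
    for k in range(1, int(max_hz * N / env_fs) + 1):
        re = sum(e[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(e[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        spec[k * env_fs / N] = math.hypot(re, im)
    return spec

# test signal: a 440 Hz tone amplitude-modulated at 5 Hz (a speech-like rate)
fs = 8000
sig = [(1 + 0.9 * math.sin(2 * math.pi * 5 * i / fs))
       * math.sin(2 * math.pi * 440 * i / fs) for i in range(2 * fs)]
env = envelope(sig, fs)                  # envelope sampled at 50 Hz
spec = modulation_spectrum(env, env_fs=50)
peak_hz = max(spec, key=spec.get)        # expected near 5 Hz
```

With this construction the modulation spectrum peaks at the 5 Hz modulator, not at the 440 Hz carrier, which is the point of analyzing the envelope rather than the waveform itself.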
Brain activation during anticipation of sound sequences.
Leaver, Amber M; Van Lare, Jennifer; Zielinski, Brandon; Halpern, Andrea R; Rauschecker, Josef P
2009-02-25
Music consists of sound sequences that require integration over time. As we become familiar with music, associations between notes, melodies, and entire symphonic movements become stronger and more complex. These associations can become so tight that, for example, hearing the end of one album track can elicit a robust image of the upcoming track while anticipating it in total silence. Here, we study this predictive "anticipatory imagery" at various stages throughout learning and investigate activity changes in corresponding neural structures using functional magnetic resonance imaging. Anticipatory imagery (in silence) for highly familiar naturalistic music was accompanied by pronounced activity in rostral prefrontal cortex (PFC) and premotor areas. Examining changes in the neural bases of anticipatory imagery during two stages of learning conditional associations between simple melodies, however, demonstrates the importance of fronto-striatal connections, consistent with a role of the basal ganglia in "training" frontal cortex (Pasupathy and Miller, 2005). Another striking change in neural resources during learning was a shift between caudal PFC earlier to rostral PFC later in learning. Our findings regarding musical anticipation and sound sequence learning are highly compatible with studies of motor sequence learning, suggesting common predictive mechanisms in both domains.
"Sounds Good to Me": Canadian Children's Perceptions of Popular Music
ERIC Educational Resources Information Center
Bosacki, Sandra; Francis-Murray, Nancy; Pollon, Dawn E.; Elliott, Anne
2006-01-01
This cross-sectional study explored the role of age and socioeconomic status (SES) in relation to children's popular musical preferences. As part of a larger, multi-method, longitudinal study on children's and adolescents self-views and media preference, the present study investigated the popular music section of a self-report questionnaire. Data…
Epilepsy and music: practical notes.
Maguire, M
2017-04-01
Music processing occurs via a complex network of activity far beyond the auditory cortices. This network may become sensitised to music or may be recruited as part of a temporal lobe seizure, manifesting as either musicogenic epilepsy or ictal musical phenomena. The idea that sound waves may directly affect brain waves has led researchers to explore music as therapy for epilepsy. There is limited and low quality evidence of an antiepileptic effect with the Mozart Sonata K.448. We do not have a pathophysiological explanation for the apparent dichotomous effect of music on seizures. However, clinicians should consider musicality when treating patients with antiepileptic medication or preparing patients for epilepsy surgery. Carbamazepine and oxcarbazepine each may cause a reversible altered appreciation of pitch. Surgical cohort studies suggest that musical memory and perception may be affected, particularly following right temporal lobe surgery, and discussion of this risk should form part of presurgical counselling. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Learning from Looking at Sound: Using Multimedia Spectrograms to Explore World Music
ERIC Educational Resources Information Center
Thibeault, Matthew D.
2011-01-01
This article details the use of multimedia spectrogram displays for visualizing and understanding music. A section on foundational considerations presents similarities and differences between Western musical scores and spectrograms, in particular the benefit in avoiding Western notation when using music from a culture where representation through…
Neural processing of musical meter in musicians and non-musicians.
Zhao, T Christina; Lam, H T Gloria; Sohi, Harkirat; Kuhl, Patricia K
2017-11-01
Musical sounds, along with speech, are the most prominent sounds in our daily lives. They are highly dynamic, yet well structured in the temporal domain in a hierarchical manner. The temporal structures enhance the predictability of musical sounds. Western music provides an excellent example: while time intervals between musical notes are highly variable, underlying beats can be realized. The beat-level temporal structure provides a sense of regular pulses. Beats can be further organized into units, giving the percept of alternating strong and weak beats (i.e. metrical structure or meter). Examining neural processing at the meter level offers a unique opportunity to understand how the human brain extracts temporal patterns, predicts future stimuli and optimizes neural resources for processing. The present study addresses two important questions regarding meter processing, using the mismatch negativity (MMN) obtained with electroencephalography (EEG): 1) how tempo (fast vs. slow) and type of metrical structure (duple: two beats per unit vs. triple: three beats per unit) affect the neural processing of metrical structure in non-musically trained individuals, and 2) how early music training modulates the neural processing of metrical structure. Metrical structures were established by patterns of consecutive strong and weak tones (Standard) with occasional violations that disrupted and reset the structure (Deviant). Twenty non-musicians listened passively to these tones while their neural activities were recorded. MMN indexed the neural sensitivity to the meter violations. Results suggested that MMNs were larger for fast tempo and for triple meter conditions. Further, 20 musically trained individuals were tested using the same methods and the results were compared to the non-musicians. While tempo and meter type similarly influenced MMNs in both groups, musicians overall exhibited significantly reduced MMNs, compared to their non-musician counterparts. Further analyses
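The MMN measure used in this study is, at its core, a difference wave: the average event-related response to Deviant tones minus the average response to Standard tones. The following is a toy sketch with a simulated ERP in place of real EEG; the epoch length, noise level, and Gaussian response shape are assumptions, not the study's parameters.

```python
import math
import random

def average_epochs(epochs):
    # point-by-point mean across trials
    return [sum(e[i] for e in epochs) / len(epochs)
            for i in range(len(epochs[0]))]

def mismatch_negativity(standards, deviants):
    # MMN difference wave: deviant ERP minus standard ERP
    s, d = average_epochs(standards), average_epochs(deviants)
    return [dv - sv for dv, sv in zip(d, s)]

random.seed(1)
fs = 250                                        # assumed EEG sampling rate (Hz)
times_ms = [i / fs * 1000 for i in range(100)]  # epoch from 0 to 396 ms

def simulated_epoch(amplitude):
    # toy ERP: a Gaussian negativity centred at 180 ms, plus sensor noise
    return [-amplitude * math.exp(-((t - 180.0) / 40.0) ** 2)
            + random.gauss(0, 0.2) for t in times_ms]

standards = [simulated_epoch(1.0) for _ in range(200)]
deviants = [simulated_epoch(3.0) for _ in range(200)]  # larger deviant response
mmn = mismatch_negativity(standards, deviants)
peak_latency_ms = times_ms[mmn.index(min(mmn))]        # expected near 180 ms
```

Averaging across trials cancels the noise, so the difference wave recovers the extra negativity that the deviants evoke; "larger MMN" in the abstract refers to the amplitude of this deflection.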
From everyday emotions to aesthetic emotions: towards a unified theory of musical emotions.
Juslin, Patrik N
2013-09-01
The sound of music may arouse profound emotions in listeners. But such experiences seem to involve a 'paradox', namely that music--an abstract form of art, which appears removed from our concerns in everyday life--can arouse emotions - biologically evolved reactions related to human survival. How are these (seemingly) non-commensurable phenomena linked together? Key is to understand the processes through which sounds are imbued with meaning. It can be argued that the survival of our ancient ancestors depended on their ability to detect patterns in sounds, derive meaning from them, and adjust their behavior accordingly. Such an ecological perspective on sound and emotion forms the basis of a recent multi-level framework that aims to explain emotional responses to music in terms of a large set of psychological mechanisms. The goal of this review is to offer an updated and expanded version of the framework that can explain both 'everyday emotions' and 'aesthetic emotions'. The revised framework--referred to as BRECVEMA--includes eight mechanisms: Brain Stem Reflex, Rhythmic Entrainment, Evaluative Conditioning, Contagion, Visual Imagery, Episodic Memory, Musical Expectancy, and Aesthetic Judgment. In this review, it is argued that all of the above mechanisms may be directed at information that occurs in a 'musical event' (i.e., a specific constellation of music, listener, and context). Of particular significance is the addition of a mechanism corresponding to aesthetic judgments of the music, to better account for typical 'appreciation emotions' such as admiration and awe. Relationships between aesthetic judgments and other mechanisms are reviewed based on the revised framework. It is suggested that the framework may contribute to a long-needed reconciliation between previous approaches that have conceptualized music listeners' responses in terms of either 'everyday emotions' or 'aesthetic emotions'. © 2013 Elsevier B.V. All rights reserved.
Benefits of listening to a recording of euphoric joint music making in polydrug abusers
Fritz, Thomas Hans; Vogt, Marius; Lederer, Annette; Schneider, Lydia; Fomicheva, Eira; Schneider, Martha; Villringer, Arno
2015-01-01
Background and Aims: Listening to music can have powerful physiological and therapeutic effects. Some essential features of the mental mechanism underlying beneficial effects of music are probably strong physiological and emotional associations with music created during the act of music making. Here we tested this hypothesis in a clinical population of polydrug abusers in rehabilitation listening to a previously performed act of physiologically and emotionally intense music making. Methods: Psychological effects of listening to self-made music that was created in a previous musical feedback intervention were assessed. In this procedure, participants produced music with exercise machines (Jymmin) which modulate musical sounds. Results: The data showed a positive effect of listening to the recording of joint music making on self-efficacy, mood, and a readiness to engage socially. Furthermore, the data showed the powerful influence of context on how the recording evoked psychological benefits. The effects of listening to the self-made music were only observable when participants listened to their own performance first; listening to a control music piece first caused effects to deteriorate. We observed a positive correlation between participants' mood and their desire to engage in social activities with their former training partners after listening to the self-made music. This shows that the observed effects of listening to the recording of the single musical feedback intervention are influenced by participants recapitulating intense pleasant social interactions during the Jymmin intervention. Conclusions: Listening to music that was the outcome of a previous musical feedback (Jymmin) intervention has beneficial psychological and probably social effects in patients who had suffered from polydrug addiction, increasing self-efficacy, mood, and a readiness to engage socially. These intervention effects, however, depend on the context in which the music recordings are
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
NASA Astrophysics Data System (ADS)
Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert
2005-12-01
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
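Of the pattern classifiers compared in this abstract, the minimum-distance classifier is the simplest to sketch: each class is represented by the mean of its training feature vectors, and a new sound is assigned to the nearest class mean. The three features and their values below are invented for illustration and are not the paper's feature definitions.

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class MinimumDistanceClassifier:
    """Stores one mean feature vector per class; predicts the nearest mean."""

    def fit(self, vectors, labels):
        self.means = {}
        for cls in set(labels):
            rows = [v for v, lab in zip(vectors, labels) if lab == cls]
            self.means[cls] = [sum(col) / len(rows) for col in zip(*rows)]
        return self

    def predict(self, vector):
        return min(self.means, key=lambda c: euclid(self.means[c], vector))

# invented (features, label) pairs:
# features = (amplitude modulation depth, harmonicity, rhythm regularity)
train = [
    ([0.8, 0.6, 0.2], "clean speech"),
    ([0.4, 0.3, 0.2], "speech in noise"),
    ([0.1, 0.1, 0.1], "noise"),
    ([0.5, 0.8, 0.9], "music"),
]
clf = MinimumDistanceClassifier().fit([v for v, _ in train],
                                      [l for _, l in train])
print(clf.predict([0.45, 0.75, 0.85]))  # prints: music
```

The more complex approaches in the paper (Bayes classifier, neural network, hidden Markov model) replace the nearest-mean decision with probabilistic or learned decision boundaries over the same kind of feature vectors.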
Park, H K; Bradley, J S
2009-07-01
This paper reports the results of an evaluation of the merits of standard airborne sound insulation measures with respect to subjective ratings of the annoyance and loudness of transmitted sounds. Subjects listened to speech and music sounds modified to represent transmission through 20 different walls with sound transmission class (STC) ratings from 34 to 58. A number of variations in the standard measures were also considered. These included variations in the 8-dB rule for the maximum allowed deficiency in the STC measure as well as variations in the standard 32-dB total allowed deficiency. Several spectrum adaptation terms were considered in combination with weighted sound reduction index (Rw) values, as well as modifications to the range of frequencies included in the standard rating contour. An STC measure without an 8-dB rule and an Rw rating with a new spectrum adaptation term were better predictors of annoyance and loudness ratings of speech sounds. Rw ratings with one of two modified Ctr spectrum adaptation terms were better predictors of annoyance and loudness ratings of transmitted music sounds. Although some measures were much better predictors of responses to one type of sound than were the standard STC and Rw values, no measure was markedly better at predicting annoyance and loudness ratings of both music and speech sounds.
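The two deficiency rules discussed here come from the STC rating procedure (ASTM E413): the reference contour is shifted upward until either the total deficiency would exceed 32 dB or, under the 8-dB rule, any single band's deficiency would exceed 8 dB. A sketch of that procedure, which also shows how dropping the 8-dB rule can raise the rating of a wall with a narrow dip; the contour values are quoted from the standard from memory and should be treated as illustrative.

```python
# STC reference contour relative to its 500 Hz value, for the sixteen
# 1/3-octave bands from 125 Hz to 4000 Hz (ASTM E413)
CONTOUR = [-16, -13, -10, -7, -4, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4]

def stc(tl, eight_db_rule=True):
    """Slide the contour upward until a deficiency rule would be violated."""
    rating = 0
    while True:
        candidate = rating + 1
        # deficiency = contour value above the measured transmission loss
        deficiencies = [max(0, (candidate + c) - t)
                        for c, t in zip(CONTOUR, tl)]
        if sum(deficiencies) > 32:
            return rating
        if eight_db_rule and max(deficiencies) > 8:
            return rating
        rating = candidate

flat_wall = [50] * 16
dipped_wall = [50] * 16
dipped_wall[9] = 40          # a 10 dB dip in the 1000 Hz band

print(stc(flat_wall))                          # 50
print(stc(dipped_wall))                        # 45: the dip caps the rating
print(stc(dipped_wall, eight_db_rule=False))   # 49: only the 32 dB total applies
```

The 4 dB gap between the last two results illustrates why the paper tests STC variants without the 8-dB rule: a single-band dip can dominate the standard rating even when the total deficiency is small.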
Musical, language, and reading abilities in early Portuguese readers
Zuk, Jennifer; Andrade, Paulo E.; Andrade, Olga V. C. A.; Gardiner, Martin; Gaab, Nadine
2013-01-01
Early language and reading abilities have been shown to correlate with a variety of musical skills and elements of music perception in children. It has also been shown that reading impaired children can show difficulties with music perception. However, it is still unclear to what extent different aspects of music perception are associated with language and reading abilities. Here we investigated the relationship between cognitive-linguistic abilities and a music discrimination task that preserves an ecologically valid musical experience. 43 Portuguese-speaking students from an elementary school in Brazil participated in this study. Children completed a comprehensive cognitive-linguistic battery of assessments. The music task was presented live in the music classroom, and children were asked to code sequences of four sounds on the guitar. Results show a strong relationship between performance on the music task and a number of linguistic variables. A principal component analysis of the cognitive-linguistic battery revealed that the strongest component (Prin1) accounted for 33% of the variance and that Prin1 was significantly related to the music task. Highest loadings on Prin1 were found for reading measures such as Reading Speed and Reading Accuracy. Interestingly, 22 children recorded responses for more than four sounds within a trial on the music task, which was classified as Superfluous Responses (SR). SR was negatively correlated with a variety of linguistic variables and showed a negative correlation with Prin1. When analyzing children with and without SR separately, only children with SR showed a significant correlation between Prin1 and the music task. Our results have implications for the use of an ecologically valid music-based screening tool for the early identification of reading disabilities in a classroom setting. PMID:23785339
Technology and Music Education in a Digitized, Disembodied, Posthuman World
ERIC Educational Resources Information Center
Thwaites, Trevor
2014-01-01
Digital forms of sound manipulation are eroding traditional methods of sound development and transmission, causing a disjuncture in the ontology of music. Sound, the ambient phenomenon, is becoming disrupted and decentred by the struggles between long established controls, beliefs and desires as well as controls from within technologized contexts.…
Emotional sounds and the brain: the neuro-affective foundations of musical appreciation.
Panksepp, Jaak; Bernatzky, Günther
2002-11-01
This article summarizes the potential role of evolved brain emotional systems in the mediation of music appreciation. A variety of examples of how music may promote behavioral change are summarized, including effects on memory, mood, and brain activity, as well as autonomic responses such as the experience of 'chills'. Studies on animals (e.g. young chicks) indicate that musical stimulation has measurable effects on their behaviors and brain chemistries, especially increased brain norepinephrine (NE) turnover. The evolutionary sources of musical sensitivity are discussed, as well as the potential medical-therapeutic implications of this knowledge.
Music enrichment programs improve the neural encoding of speech in at-risk children.
Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis
2014-09-03
Musicians are often reported to have enhanced neurophysiological functions, especially in the auditory system. Musical training is thought to improve nervous system function by focusing attention on meaningful acoustic cues, and these improvements in auditory processing cascade to language and cognitive skills. Correlational studies have reported musician enhancements in a variety of populations across the life span. In light of these reports, educators are considering the potential for co-curricular music programs to provide auditory-cognitive enrichment to children during critical developmental years. To date, however, no studies have evaluated biological changes following participation in existing, successful music education programs. We used a randomized control design to investigate whether community music participation induces a tangible change in auditory processing. The community music training was a longstanding and successful program that provides free music instruction to children from underserved backgrounds who stand at high risk for learning and social problems. Children who completed 2 years of music training had a stronger neurophysiological distinction of stop consonants, a neural mechanism linked to reading and language skills. One year of training was insufficient to elicit changes in nervous system function; beyond 1 year, however, greater amounts of instrumental music training were associated with larger gains in neural processing. We therefore provide the first direct evidence that community music programs enhance the neural processing of speech in at-risk children, suggesting that active and repeated engagement with sound changes neural function. Copyright © 2014 the authors 0270-6474/14/3411913-06$15.00/0.
Measuring effects of music, noise, and healing energy using a seed germination bioassay.
Creath, Katherine; Schwartz, Gary E
2004-02-01
To measure biologic effects of music, noise, and healing energy without human preferences or placebo effects using seed germination as an objective biomarker. A series of five experiments were performed utilizing okra and zucchini seeds germinated in acoustically shielded, thermally insulated, dark, humid growth chambers. Conditions compared were an untreated control, musical sound, pink noise, and healing energy. Healing energy was administered for 15-20 minutes every 12 hours with the intention that the treated seeds would germinate faster than the untreated seeds. The objective marker was the number of seeds sprouted out of groups of 25 seeds counted at 12-hour intervals over a 72-hour growing period. Temperature and relative humidity were monitored every 15 minutes inside the seed germination containers. A total of 14 trials were run testing a total of 4600 seeds. Musical sound had a highly statistically significant effect on the number of seeds sprouted compared to the untreated control over all five experiments for the main condition (p < 0.002) and over time (p < 0.000002). This effect was independent of temperature, seed type, position in room, specific petri dish, and person doing the scoring. Musical sound had a significant effect compared to noise and an untreated control as a function of time (p < 0.03) while there was no significant difference between seeds exposed to noise and an untreated control. Healing energy also had a significant effect compared to an untreated control (main condition, p < 0.0006) and over time (p < 0.0001) with a magnitude of effect comparable to that of musical sound. This study suggests that sound vibrations (music and noise) as well as biofields (bioelectromagnetic and healing intention) both directly affect living biologic systems, and that a seed germination bioassay has the sensitivity to enable detection of effects caused by various applied energetic conditions.
The Extramusical Effects of Music Lessons on Preschoolers
ERIC Educational Resources Information Center
deVries, Peter
2004-01-01
The aim of the present study was to investigate the extramusical effects of a music education program in one preschool classroom over a period of six weeks. The class had not previously been exposed to regular music lessons. Readily available teaching resources containing sound recordings were used. Analysis revealed six themes that addressed the…
Law, Lily N. C.; Zentner, Marcel
2012-01-01
A common approach for determining musical competence is to rely on information about individuals' extent of musical training, but relying on musicianship status fails to identify musically untrained individuals with musical skill, as well as those who, despite extensive musical training, may not be as skilled. To counteract this limitation, we developed a new test battery (Profile of Music Perception Skills; PROMS) that measures perceptual musical skills across multiple domains: tonal (melody, pitch), qualitative (timbre, tuning), temporal (rhythm, rhythm-to-melody, accent, tempo), and dynamic (loudness). The PROMS has satisfactory psychometric properties for the composite score (internal consistency and test-retest r > .85) and fair to good coefficients for the individual subtests (.56 to .85). Convergent validity was established with the relevant dimensions of Gordon's Advanced Measures of Music Audiation and Musical Aptitude Profile (melody, rhythm, tempo), the Musical Ear Test (rhythm), and sample instrumental sounds (timbre). Criterion validity was evidenced by consistently sizeable and significant relationships between test performance and external musical proficiency indicators in all three studies (.38 to .62, p < .05 to p < .01). An absence of correlations between test scores and a nonmusical auditory discrimination task supports the battery's discriminant validity (−.05, ns). The interrelationships among the various subtests could be accounted for by two higher-order factors: sequential and sensory music processing. A brief version of the PROMS is introduced as a time-efficient approximation of the full battery. PMID:23285071
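Internal consistency of the kind reported for the PROMS composite is commonly computed as Cronbach's alpha. A minimal sketch follows; the rating data are invented, and this is the generic textbook formula, not the authors' exact analysis.

```python
def cronbach_alpha(items):
    # items: one list of scores per test item, respondents in the same order
    k = len(items)

    def var(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# invented ratings: three items scored by four respondents
ratings = [[4, 5, 3, 4],
           [4, 4, 3, 5],
           [5, 5, 2, 4]]
alpha = cronbach_alpha(ratings)   # about 0.82 for these made-up data
```

Alpha approaches 1 as the items covary strongly relative to their individual variances, which is why values above .85, as reported here, are read as high internal consistency.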
Turn Off the Music! Music Impairs Visual Associative Memory Performance in Older Adults
Reaves, Sarah; Graham, Brittany; Grahn, Jessica; Rabannifard, Parissa; Duarte, Audrey
2016-01-01
Purpose of the Study: Whether we are explicitly listening to it or not, music is prevalent in our environment. Surprisingly, little is known about the effect of environmental music on concurrent cognitive functioning and whether young and older adults are differentially affected by music. Here, we investigated the impact of background music on a concurrent paired associate learning task in healthy young and older adults. Design and Methods: Young and older adults listened to music or to silence while simultaneously studying face–name pairs. Participants’ memory for the pairs was then tested while listening to either the same or different music. Participants also made subjective ratings about how distracting they found each song to be. Results: Despite the fact that all participants rated music as more distracting to their performance than silence, only older adults’ associative memory performance was impaired by music. These results are most consistent with the theory that older adults’ failure to inhibit processing of distracting task-irrelevant information, in this case background music, contributes to their memory impairments. Implications: These data have important practical implications for older adults’ ability to perform cognitively demanding tasks even in what many consider to be an unobtrusive environment. PMID:26035876
Kello, Christopher T; Bella, Simone Dalla; Médé, Butovens; Balasubramaniam, Ramesh
2017-10-01
Humans talk, sing and play music. Some species of birds and whales sing long and complex songs. All these behaviours and sounds exhibit hierarchical structure: syllables and notes are positioned within words and musical phrases, words and motives within sentences and musical phrases, and so on. We developed a new method to measure and compare hierarchical temporal structures in speech, song and music. The method identifies temporal events as peaks in the sound amplitude envelope, and quantifies event clustering across a range of timescales using Allan factor (AF) variance. AF variances were analysed and compared for over 200 different recordings from more than 16 different categories of signals, including recordings of speech in different contexts and languages, musical compositions and performances from different genres. Non-human vocalizations from two bird species and two types of marine mammals were also analysed for comparison. The resulting patterns of AF variance across timescales were distinct to each of four natural categories of complex sound: speech, popular music, classical music and complex animal vocalizations. Comparisons within and across categories indicated that nested clustering in longer timescales was more prominent when prosodic variation was greater, and when sounds came from interactions among individuals, including interactions between speakers, musicians, and even killer whales. Nested clustering also was more prominent for music compared with speech, and reflected beat structure for popular music and self-similarity across timescales for classical music. In summary, hierarchical temporal structures reflect the behavioural and social processes underlying complex vocalizations and musical performances. © 2017 The Author(s).
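The Allan factor statistic used in this study compares event counts in adjacent time windows: AF(T) = E[(N_{i+1} − N_i)^2] / (2 E[N_i]), where N_i is the number of events in the i-th window of width T. A sketch with synthetic event times in place of amplitude-envelope peaks; the window size and event patterns are illustrative assumptions.

```python
def allan_factor(event_times, window):
    # N_i = number of events in the i-th consecutive window of width `window`
    n_win = int(max(event_times) // window)
    counts = [0] * n_win
    for t in event_times:
        i = int(t // window)
        if i < n_win:
            counts[i] += 1
    # AF(T) = E[(N_{i+1} - N_i)^2] / (2 * E[N_i]); roughly 1 for a Poisson
    # process, 0 for perfectly regular events, and > 1 for clustered events
    diff_sq = [(counts[i + 1] - counts[i]) ** 2 for i in range(n_win - 1)]
    return (sum(diff_sq) / len(diff_sq)) / (2 * sum(counts) / n_win)

# perfectly regular events, one every 0.1 s: identical counts, so AF = 0
regular = [i / 10 for i in range(1000)]
# clustered events, a 20-event burst at the start of every 10 s block: AF >> 1
bursty = [k * 10 + j * 0.05 for k in range(10) for j in range(20)]
print(allan_factor(regular, 1.0))   # 0.0
print(allan_factor(bursty, 1.0))    # well above 1
```

Repeating this computation across a range of window sizes yields the AF-versus-timescale curves that the study uses to separate speech, popular music, classical music and animal vocalizations.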
The Musical Self-Concept of Chinese Music Students.
Petersen, Suse; Camp, Marc-Antoine
2016-01-01
The relationship between self-concept and societal settings has been widely investigated in several Western and Asian countries, with respect to the academic self-concept in an educational environment. Although the musical self-concept is highly relevant to musical development and performance, there is a lack of research exploring how the musical self-concept evolves in different cultural settings and societies. In particular, there have been no enquiries yet in the Chinese music education environment. This study's goal was the characterization of musical self-concept types among music students at a University in Beijing, China. The Musical Self-Concept Inquiry-including ability, emotional, physical, cognitive, and social facets-was used to assess the students' musical self-concepts (N = 97). The data analysis led to three significantly distinct clusters and corresponding musical self-concept types. The types were especially distinct, in the students' perception of their musical ambitions and abilities; their movement, rhythm and dancing affinity; and the spiritual and social aspects of music. The professional aims and perspectives, and the aspects of the students' sociodemographic background also differed between the clusters. This study is one of the first research endeavors addressing musical self-concepts in China. The empirical identification of the self-concept types offers a basis for future research on the connections between education, the development of musical achievement, and the musical self-concept in societal settings with differing understandings of the self.
Dawson, William J
2014-06-01
Recent publications indicate that musical training has effects on non-musical activities, some of which are lifelong. This study reviews recent publications collected from the Performing Arts Medicine Association bibliography. Music training, whether instrumental or vocal, produces beneficial and long-lasting changes in brain anatomy and function. Anatomic changes occur in brain areas devoted to hearing, speech, hand movements, and coordination between both sides of the brain. Functional benefits include improved sound processing and motor skills, especially in the upper extremities. Training benefits extend beyond music skills, resulting in higher IQs and school grades, greater specialized sensory and auditory memory/recall, better language memory and processing, heightened bilateral hand motor functioning, and improved integration and synchronization of sensory and motor functions. These changes last long after music training ends and can minimize or prevent age-related loss of brain cells and some mental functions. Early institution of music training and prolonged duration of training both appear to contribute to these positive changes.
The Impact of a Funded Research Program on Music Education Policy
ERIC Educational Resources Information Center
Hodges, Donald A.; Luehrsen, Mary
2010-01-01
"Sounds of Learning: The Impact of Music Education" is a research program designed to allow researchers to examine the roles of music education in the lives of school-aged children to expand the understanding of music's role in a quality education. The NAMM Foundation, the sponsoring organization, has provided more than $1,000,000 to fund research…
Musical intervention enhances infants’ neural processing of temporal structure in music and speech
Zhao, T. Christina; Kuhl, Patricia K.
2016-01-01
Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants’ neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants’ neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants’ neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants’ ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing. PMID:27114512
The Mismatch Negativity: An Indicator of Perception of Regularities in Music.
Yu, Xide; Liu, Tao; Gao, Dingguo
2015-01-01
This paper reviews music research using the Mismatch Negativity (MMN). MMN is a deviation-specific component of the auditory event-related potential (ERP), which detects a deviation between a sound and an internal representation (e.g., memory trace). Recent studies have expanded the notion and the paradigms of MMN to higher-order music processing such as those involving short melodies, harmonic chords, and musical syntax. In this vein, we first reviewed the evolution of MMN from sound to music and then compared the differences in MMN features between musicians and nonmusicians, followed by a discussion of the potential roles of the training effect and natural exposure in MMN. Since MMN can serve as an index of neural plasticity, it can be widely used in clinical and other applied areas, such as detecting music preference in newborns or assessing the integrity of the central auditory system in hearing disorders. Finally, we point out some open questions and further directions. Current music perception research using MMN has mainly focused on relatively low levels of the hierarchical structure of music perception. To fully understand the neural substrates underlying the processing of regularities in music, it is important and beneficial to combine MMN with other experimental paradigms such as the early right-anterior negativity (ERAN).
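As a rough illustration of how an MMN is extracted from an oddball recording: the MMN is the deviant-minus-standard difference wave computed from trial-averaged epochs. The sketch below is a single-channel toy version under assumed parameters (sampling rate, epoch layout); real pipelines operate on artifact-rejected multichannel data, typically via a toolbox such as MNE-Python.

```python
import numpy as np

def mismatch_response(epochs, labels, sfreq=500.0):
    """Difference wave (deviant minus standard) from an oddball run.

    epochs : array (n_trials, n_samples), single channel for simplicity
    labels : array of 'std'/'dev' strings, one per trial
    Returns (times, difference_wave). The MMN appears as a negative
    deflection roughly 100-250 ms after deviance onset.
    """
    epochs = np.asarray(epochs, dtype=float)
    labels = np.asarray(labels)
    std_avg = epochs[labels == 'std'].mean(axis=0)  # standard-tone ERP
    dev_avg = epochs[labels == 'dev'].mean(axis=0)  # deviant-tone ERP
    times = np.arange(epochs.shape[1]) / sfreq
    return times, dev_avg - std_avg
```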
Music and language: relations and disconnections.
Kraus, Nina; Slater, Jessica
2015-01-01
Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.
Wavelets in music analysis and synthesis: timbre analysis and perspectives
NASA Astrophysics Data System (ADS)
Alves Faria, Regis R.; Ruschioni, Ruggero A.; Zuffo, Joao A.
1996-10-01
Music is a vital element in the process of comprehending the world in which we live and interact. It frequently exerts a subtle but expressive influence over a society's line of evolution. Analysis and synthesis of music and musical instruments have always been associated with the forefront technologies available at each period of human history, and there is no surprise in witnessing now the use of digital technologies and sophisticated mathematical tools supporting their development. Fourier techniques have been employed for years as a tool to analyze the spectral characteristics of timbres, and to re-synthesize them from these extracted parameters. Recently, many modern implementations based on spectral modeling techniques have led to the development of new generations of music synthesizers, capable of reproducing natural sounds with high fidelity and of producing novel timbres as well. Wavelets are a promising tool for the development of new generations of music synthesizers, given their advantages over Fourier techniques in representing non-periodic and transient signals with complex fine textures, as found in music. In this paper we propose and introduce the use of wavelets, addressing their perspectives towards musical applications. The central idea is to investigate the capacity of wavelets to analyze, extract features from, and alter fine timbre components on a multiresolution time scale, so as to produce high-quality synthesized musical sounds.
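To make the multiresolution idea concrete, here is one level of the Haar discrete wavelet transform, the simplest wavelet family (chosen for brevity; the abstract does not commit to a particular wavelet). Each level splits the signal into a coarse approximation and detail coefficients; applying it recursively to the approximation yields the time-scale decomposition used for timbre analysis.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    signal : sequence of even length
    Returns (approx, detail): orthonormal Haar coefficients, so the
    signal's energy is preserved across the two bands.
    """
    x = np.asarray(signal, dtype=float)
    pairs = x.reshape(-1, 2)                     # non-overlapping pairs
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # low-pass band
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # high-pass band
    return approx, detail
```

In practice a library such as PyWavelets provides many wavelet families and full multilevel decomposition; the point here is only the structure of one analysis step.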
A Science Superior to Music: Joseph Sauveur and the Estrangement between Music and Acoustics
NASA Astrophysics Data System (ADS)
Fix, Adam
2015-09-01
The scientific revolution saw a shift from the natural philosophy of music to the science of acoustics. Joseph Sauveur (1653-1716), an early pioneer in acoustics, determined that science as understood in the eighteenth century could not address the fundamental problems of music, particularly the problem of consonance. Building on Descartes, Mersenne, and Huygens especially, Sauveur drew a sharp divide between sound and music, recognizing the former as a physical phenomenon obeying mechanical and mathematical principles and the latter as an inescapably subjective and unquantifiable perception. While acoustics grew prominent in the Académie des sciences, music largely fell out of the scientific discourse, becoming primarily practiced art rather than natural philosophy. This study illuminates what was considered proper science at the dawn of the Enlightenment and why one particular branch of natural philosophy—music—did not make the cut.
Pitch-informed solo and accompaniment separation towards its use in music education applications
NASA Astrophysics Data System (ADS)
Cano, Estefanía; Schuller, Gerald; Dittmar, Christian
2014-12-01
We present a system for the automatic separation of solo instruments and music accompaniment in polyphonic music recordings. Our approach is based on a pitch detection front-end and a tone-based spectral estimation. We assess the plausibility of using sound separation technologies to create practice material in a music education context. To better understand the sound separation quality requirements in music education, a listening test was conducted to determine the most perceptually relevant signal distortions that need to be improved. Results from the listening test show that solo and accompaniment tracks pose different quality requirements and should be optimized differently. We propose and evaluate algorithm modifications to better understand their effects on objective perceptual quality measures. Finally, we outline possible ways of optimizing our separation approach to better suit the requirements of music education applications.
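The paper's actual front-end is not reproduced here, but the core idea of pitch-informed separation can be caricatured with a binary harmonic mask over an STFT: keep bins near integer multiples of the detected fundamental for the solo estimate and assign the rest to the accompaniment. All parameter values below (harmonic count, mask width) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def harmonic_mask(stft_mag, f0_hz, sfreq, n_fft, n_harmonics=10, width_bins=2):
    """Binary spectral mask keeping bins near harmonics of a pitch.

    stft_mag : magnitude STFT, shape (n_bins, n_frames)
    f0_hz    : detected fundamental frequency for this frame/segment
    Multiply the mixture STFT by `mask` for the solo estimate and by
    (1 - mask) for the accompaniment estimate.
    """
    n_bins = stft_mag.shape[0]
    mask = np.zeros(n_bins)
    bin_hz = sfreq / n_fft  # frequency resolution per STFT bin
    for h in range(1, n_harmonics + 1):
        center = int(round(h * f0_hz / bin_hz))
        if center >= n_bins:
            break  # harmonic above Nyquist / top bin
        lo = max(center - width_bins, 0)
        mask[lo:center + width_bins + 1] = 1.0
    return mask
```

A real system, as the abstract indicates, refines this with tone-based spectral estimation per harmonic rather than a fixed-width binary mask, which is one source of the perceptual artifacts the listening test evaluates.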
A study of sound balances for the hard of hearing
NASA Astrophysics Data System (ADS)
Mathers, C. D.
Over a period of years, complaints have been received from television viewers, especially those who are hard of hearing, that background sound (e.g., audience laughter, crowd noise, mood music) is often transmitted at too high a level with respect to speech, so that information essential to the understanding of the program is lost. To consider possible solutions to the problem, a working party was set up representing both broadcasters and organizations for the hard of hearing. At early meetings, it was resolved that a series of subjective tests should be carried out to determine what reduction of background levels would be needed to provide a significant improvement in the intelligibility of television speech for viewers with hearing difficulties. The preparation of test tapes and the analysis of results are given.
Music, Mind, and Morality: Arousing the Body Politic
ERIC Educational Resources Information Center
Alperson, Philip; Carroll, Noel
2008-01-01
In this article, the authors address the conversation that, given the recent developments in the philosophy of mind, especially in terms of its cognitive turn, one task for philosophers of music might be to begin to speculate about the properties of music and organized sound that enable them to perform their various moral and cultural roles. The…
Music Education Desire(ing): Language, Literacy, and Lieder
ERIC Educational Resources Information Center
Gould, Elizabeth
2009-01-01
Issues of desire in music education are integral and anathema to the profession. Constituted of and by desire, we bodily engage music emotionally and cognitively; yet references to the body are limited to how it may be better managed in order to produce more satisfactory (desired) sounds, thus disciplining desire as we focus on the content of…
The Diversity of African Musics: Zulu Kings, Xhosa Clicks, and Gumboot Dancing in South Africa
ERIC Educational Resources Information Center
Mason, Nicola F.
2014-01-01
Multicultural curricula that explore African musics often focus on the commonalities among its musical traditions. Exploring the diversity of individual African musical traditions provides a pathway to the multiplicity of sounds, cultures, beliefs, and uses inherent within African and all multicultural musics. Deeper insights into the diversity of…
ERIC Educational Resources Information Center
Sharp, Lanette
Developed specifically for classroom teachers with a limited background in music, oral music lessons are designed to be taught in short, daily instruction segments to help students gain the most from music and transfer that knowledge to other parts of the curriculum. The lessons, a master degree project, were developed to support the Utah music…
Moreno, Sylvain; Lee, Yunjo
2014-01-01
Immediate and lasting effects of music or second-language training were examined in early childhood using event-related potentials (ERPs). ERPs were recorded for French vowels and musical notes in a passive oddball paradigm in 36 four- to six-year-old children who received either French or music training. Following training, both groups showed enhanced late discriminative negativity (LDN) in their trained condition (music group–musical notes; French group–French vowels) and reduced LDN in the untrained condition. These changes reflect improved processing of relevant (trained) sounds, and an increased capacity to suppress irrelevant (untrained) sounds. After one year, training-induced brain changes persisted and new hemispheric changes appeared. Such results provide evidence for the lasting benefit of early intervention in young children. PMID:25346534
Meilán García, Juan José; Iodice, Rosario; Carro, Juan; Sánchez, José Antonio; Palmero, Francisco; Mateos, Ana María
2012-06-01
Autobiographic memory undergoes progressive deterioration during the evolution of Alzheimer's disease (AD). The aim of this study was to analyze mechanisms which facilitate recovery of autobiographic memories. We used a repeatedly employed mechanism, music, with the addition of an emotional factor. Autobiographic memory provoked by a variety of sounds (music which was happy, sad, lacking emotion, ambient noise in a coffee bar and no sound) was analyzed in a sample of 25 patients with AD. Emotional music, especially sad music for remote memories, was found to be the most effective kind for recall of autobiographic experiences. The factor evoking the memory is not the music itself, but rather the emotion associated with it, and is useful for semantic rather than episodic memory.
Learning about the dynamic Sun through sounds
NASA Astrophysics Data System (ADS)
Peticolas, L. M.; Quinn, M.; MacCallum, J.; Luhmann, J.
2007-12-01
Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn them into sound to demonstrate what the data tell us. We will present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study the dynamic eruptions of mass from the Corona, the outermost atmosphere of the Sun. These eruptions are called coronal mass ejections (CMEs). One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it in the software to make music. We will demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We will discuss a "walk across the Sun" created for this exhibit so people can hear the features on solar images. For example, we will show how pixel intensity translates into pitches from selectable scales, with selectable musical scale size and octave locations. We will also share our successes and lessons learned. These two projects stem from the STEREO-IMPACT (In-situ Measurements of Particles and CME Transients) E/PO program and a grant from the Initiative to Develop Education through Astronomy and Space Science (IDEAS) Grant Program.
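A pixel-to-pitch mapping of the kind described (intensity quantized to notes of a selectable scale over a selectable octave range) might look like the sketch below. The scale layout and base note are assumptions for illustration, not details taken from the exhibit.

```python
# Map image pixel intensities to pitches from a chosen musical scale:
# brighter pixels map to higher notes, quantized to the scale.
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def intensity_to_freq(intensity, scale=MAJOR, octaves=2, base_midi=60):
    """Map intensity in [0, 1] to a frequency in Hz on the given scale.

    base_midi=60 is middle C (an assumed default); `octaves` controls
    the pitch range, mirroring the exhibit's selectable octave location.
    """
    # All scale degrees across the requested octave span
    degrees = [o * 12 + s for o in range(octaves) for s in scale]
    idx = min(int(intensity * len(degrees)), len(degrees) - 1)
    midi = base_midi + degrees[idx]
    return 440.0 * 2 ** ((midi - 69) / 12)  # standard MIDI-to-Hz formula
```

Walking a cursor across a solar image and calling this per pixel produces a melody whose contour follows the brightness of solar features.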
Looi, Valerie; Winter, Philip; Anderson, Ilona; Sucher, Catherine
2011-08-01
The purpose of this study was to develop a music quality rating test battery (MQRTB) and pilot test it by comparing appraisal ratings from cochlear implant (CI) recipients using the fine-structure processing (FSP) and high-definition continuous interleaved sampling (HDCIS) speech processing strategies. The development of the MQRTB involved three stages: (1) Selection of test items for the MQRTB; (2) Verification of its length and complexity with normally-hearing individuals; and (3) Pilot testing with CI recipients. Part 1 involved 65 adult listeners, Part 2 involved 10 normally-hearing adults, and Part 3 involved five adult MED-EL CI recipients. The MQRTB consisted of ten songs, with ratings made on scales assessing pleasantness, naturalness, richness, fullness, sharpness, and roughness. Results of the pilot study, which compared FSP and HDCIS for music, indicated that acclimatization to a strategy had a significant effect on ratings (p < 0.05). When acclimatized to FSP, the group rated FSP as closer to 'exactly as I want it to sound' than HDCIS (p < 0.05), and that HDCIS sounded significantly sharper and rougher than FSP. However when acclimatized to HDCIS, there were no significant differences between ratings. There was no effect of song familiarity or genre on ratings. Overall the results suggest that the use of FSP as the default strategy for MED-EL recipients would have a positive effect on music appreciation, and that the MQRTB is an effective tool for assessing music sound quality.
El Dib, Regina P; Silva, Edina MK; Morais, José F; Trevisani, Virgínia FM
2008-01-01
Background Music is ever present in our daily lives, establishing a link between humans and the arts through the senses and pleasure. Sound technicians are the link between musicians and audiences or consumers. Recently, general concern has arisen regarding occurrences of hearing loss induced by noise from excessively amplified sound-producing activities within leisure and professional environments. Sound technicians' activities expose them to the risk of hearing loss, and consequently put at risk their quality of life, the quality of the musical product and consumers' hearing. The aim of this study was to measure the prevalence of high frequency hearing loss consistent with noise exposure among sound technicians in Brazil and compare this with a control group without occupational noise exposure. Methods This was a cross-sectional study comparing 177 participants in two groups: 82 sound technicians and 95 controls (non-sound technicians). A questionnaire on music listening habits and associated complaints was administered, and data were gathered regarding the professionals' numbers of working hours per day and both groups' hearing complaints and presence of tinnitus. The participants' ear canals were visually inspected using an otoscope. Hearing assessments were performed (tonal and speech audiometry) using a portable digital AD 229 E audiometer funded by FAPESP. Results There was no statistically significant difference between the sound technicians and controls regarding age and gender. Thus, the study sample was homogenous and would be unlikely to lead to bias in the results. A statistically significant difference in hearing loss was observed between the groups: 50% among the sound technicians and 10.5% among the controls. The difference could be attributed to high sound levels. Conclusion The sound technicians presented a higher prevalence of high frequency hearing loss consistent with noise exposure than did the general population, although the possibility of residual
Sonic Kayaks: Environmental monitoring and experimental music by citizens.
Griffiths, Amber G F; Kemp, Kirsty M; Matthews, Kaffe; Garrett, Joanne K; Griffiths, David J
2017-11-01
The Sonic Kayak is a musical instrument used to investigate nature and developed during open hacklab events. The kayaks are rigged with underwater environmental sensors, which allow paddlers to hear real-time water temperature sonifications and underwater sounds, generating live music from the marine world. Sensor data is also logged every second with location, time and date, which allows for fine-scale mapping of water temperatures and underwater noise that was previously unattainable using standard research equipment. The system can be used as a citizen science data collection device, research equipment for professional scientists, or a sound art installation in its own right.
[Music, pulse, heart and sport].
Gasenzer, E R; Leischik, R
2018-02-01
Music, with its various elements, such as rhythm, sound and melody had the unique ability even in prehistoric, ancient and medieval times to have a special fascination for humans. Nowadays, it is impossible to eliminate music from our daily lives. We are accompanied by music in shopping arcades, on the radio, during sport or leisure time activities and in wellness therapy. Ritualized drumming was used in the medical sense to drive away evil spirits or to undergo holy enlightenment. Today we experience the varied effects of music on all sensory organs and we utilize its impact on cardiovascular and neurological rehabilitation, during invasive cardiovascular procedures or during physical activities, such as training or work. The results of recent studies showed positive effects of music on heart rate and in therapeutic treatment (e.g., music therapy). This article examines the impact of music on the body and the heart and takes sports medical aspects from the past and the present into consideration; however, not all forms of music and not all types of musical activity are equally suitable, as suitability depends on the type of intervention, the sports activity or form of movement and also on the underlying disease. This article discusses the influence of music on the body, pulse, heart and soul in the past and the present day.
Why Do People Like Loud Sound? A Qualitative Study
Welch, David; Fremaux, Guy
2017-01-01
Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address. PMID:28800097
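The 97 dBA Leq figure reported above is an equivalent continuous level: the duration-weighted logarithmic mean of the measured interval levels. Given a series of interval measurements, it can be computed as in this sketch (the function name and interface are illustrative assumptions):

```python
import math

def leq(levels_db, durations_s):
    """Equivalent continuous sound level (Leq) from interval levels.

    levels_db   : measured levels per interval (dB, e.g. A-weighted)
    durations_s : duration of each interval in seconds
    Leq = 10 * log10( sum(t_i * 10^(L_i / 10)) / sum(t_i) )
    """
    total_energy = sum(t * 10 ** (l / 10)
                       for l, t in zip(levels_db, durations_s))
    return 10 * math.log10(total_energy / sum(durations_s))
```

Because the average is taken on the energy scale, louder intervals dominate: an hour at 90 dBA plus an hour at 100 dBA gives an Leq of about 97.4 dBA, not 95.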
Do women prefer more complex music around ovulation?
Charlton, Benjamin D; Filippi, Piera; Fitch, W Tecumseh
2012-01-01
The evolutionary origins of music are much debated. One theory holds that the ability to produce complex musical sounds might reflect qualities that are relevant in mate choice contexts and hence, that music is functionally analogous to the sexually-selected acoustic displays of some animals. If so, women may be expected to show heightened preferences for more complex music when they are most fertile. Here, we used computer-generated musical pieces and ovulation predictor kits to test this hypothesis. Our results indicate that women prefer more complex music in general; however, we found no evidence that their preference for more complex music increased around ovulation. Consequently, our findings are not consistent with the hypothesis that a heightened preference/bias in women for more complex music around ovulation could have played a role in the evolution of music. We go on to suggest future studies that could further investigate whether sexual selection played a role in the evolution of this universal aspect of human culture.
Brain Activation During Anticipation of Sound Sequences
Leaver, Amber M.; Van Lare, Jennifer; Zielinski, Brandon; Halpern, Andrea R.; Rauschecker, Josef P.
2010-01-01
Music consists of sound sequences that require integration over time. As we become familiar with music, associations between notes, melodies, and entire symphonic movements become stronger and more complex. These associations can become so tight that, for example, hearing the end of one album track can elicit a robust image of the upcoming track while anticipating it in total silence. Here we study this predictive “anticipatory imagery” at various stages throughout learning and investigate activity changes in corresponding neural structures using functional magnetic resonance imaging (fMRI). Anticipatory imagery (in silence) for highly familiar naturalistic music was accompanied by pronounced activity in rostral prefrontal cortex (PFC) and premotor areas. Examining changes in the neural bases of anticipatory imagery during two stages of learning conditional associations between simple melodies, however, demonstrates the importance of fronto-striatal connections, consistent with a role of the basal ganglia in “training” frontal cortex (Pasupathy and Miller, 2005). Another striking change in neural resources during learning was a shift between caudal PFC earlier to rostral PFC later in learning. Our findings regarding musical anticipation and sound sequence learning are highly compatible with studies of motor sequence learning, suggesting common predictive mechanisms in both domains. PMID:19244522
Tsunoda, Koichi; Sekimoto, Sotaro; Itoh, Kenji
2016-06-01
Conclusions: The results suggested that mother-tongue Japanese (MJ) and non-mother-tongue Japanese (non-MJ) listeners differ in their pattern of brain dominance when listening to sounds from the natural world, in particular insect sounds. These results provide significant support for previous findings by Tsunoda (1970). Objectives: This study concentrates on listeners who show clear evidence of a 'speech' brain vs a 'music' brain and determines which side is most active in the processing of insect sounds, using near-infrared spectroscopy. Methods: The present study uses 2-channel Near Infrared Spectroscopy (NIRS) to provide a more direct measure of left- and right-brain activity while participants listened to each of three types of sounds: Japanese speech, Western violin music, or insect sounds. Data were obtained from 33 participants who showed laterality on opposite sides for Japanese speech and Western music. Results: A majority (80%) of the MJ participants exhibited dominance for insect sounds on the side that was dominant for language, while a majority (62%) of the non-MJ participants exhibited dominance for insect sounds on the side that was dominant for music.
Animal signals and emotion in music: coordinating affect across groups
Bryant, Gregory A.
2013-01-01
Researchers studying the emotional impact of music have not traditionally been concerned with the principled relationship between form and function in evolved animal signals. The acoustic structure of musical forms is related in important ways to emotion perception, and thus research on non-human animal vocalizations is relevant for understanding emotion in music. Musical behavior occurs in cultural contexts that include many other coordinated activities which mark group identity, and can allow people to communicate within and between social alliances. The emotional impact of music might be best understood as a proximate mechanism serving an ultimately social function. Recent work reveals intimate connections between properties of certain animal signals and evocative aspects of human music, including (1) examinations of the role of nonlinearities (e.g., broadband noise) in non-human animal vocalizations, and the analogous production and perception of these features in human music, and (2) an analysis of group musical performances and possible relationships to non-human animal chorusing and emotional contagion effects. Communicative features in music are likely due primarily to evolutionary by-products of phylogenetically older, but still intact communication systems. But in some cases, such as the coordinated rhythmic sounds produced by groups of musicians, our appreciation and emotional engagement might be driven by an adaptive social signaling system. Future empirical work should examine human musical behavior through the comparative lens of behavioral ecology and an adaptationist cognitive science. By this view, particular coordinated sound combinations generated by musicians exploit evolved perceptual response biases – many shared across species – and proliferate through cultural evolutionary processes. PMID:24427146
Effect of sound-related activities on human behaviours and acoustic comfort in urban open spaces.
Meng, Qi; Kang, Jian
2016-12-15
Human activities are important to landscape design and urban planning; however, the effect of sound-related activities on human behaviours and acoustic comfort has not been considered. The objective of this study is to explore how human behaviours and acoustic comfort in urban open spaces can be changed by sound-related activities. On-site measurements were performed at a case study site in Harbin, China, and an acoustic comfort survey was conducted simultaneously. In terms of the effect of sound activities on human behaviours, music-related activities caused 5.1-21.5% of passers-by to stand and watch the activity, while there was little effect on the number of persons who exercised during the activity. Human activities generally had little effect on the behaviour of pedestrians when only 1 to 3 persons were involved in the activities, while a strong effect on the behaviour of pedestrians was noted when more than 6 persons were involved. In terms of the effect of activities on acoustic comfort, music-related activities increased the sound level by 10.8 to 16.4 dBA, while human activities such as RS and PC increased the sound level by 9.6 to 12.8 dBA; however, they led to very different acoustic comfort. Acoustic comfort can differ with activity: for example, the acoustic comfort of persons who stood and watched was increased by music-related activities, while the acoustic comfort of persons who sat and watched was decreased by human sound-related activities. Some sound-related activities showed opposite trends in acoustic comfort between visitors and citizens. Persons with higher income preferred music sound-related activities, while those with lower income preferred human sound-related activities. Copyright © 2016 Elsevier B.V. All rights reserved.
Long-term music training modulates the recalibration of audiovisual simultaneity.
Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin
2018-07-01
To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether this prolonged recalibration of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. We asked groups of drummers, non-drummer musicians, and non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation to two fixed audiovisual asynchronies. We found that recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e. ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.
In dubio pro silentio - Even Loud Music Does Not Facilitate Strenuous Ergometer Exercise.
Kreutz, Gunter; Schorer, Jörg; Sojke, Dominik; Neugebauer, Judith; Bullack, Antje
2018-01-01
Background: Music listening is widespread in amateur sports. Ergometer exercise is one such activity that is often performed with loud music. Aim and Hypotheses: We investigated the effects of electronic music at different intensity levels on ergometer performance (physical performance, force on the pedal, pedalling frequency), perceived fatigue, and heart rate in healthy adults. We assumed that higher sound intensity levels are associated with greater ergometer performance and less perceived effort, particularly for untrained individuals. Methods: Groups of highly trained and less trained healthy males (N = 40; age = 25.25 years; SD = 3.89 years) were tested individually on an ergometer while electronic dance music was played at 0, 65, 75, and 85 dB. Participants assessed their music experience during the experiment. Results: Majorities of participants rated the music as not too loud (65%), motivating (77.50%), appropriate for this sports exercise (90%), and having the right tempo (67.50%). Participants noticed changes in the acoustical environment with increasing intensity levels, but no further effects on any of the physical or other subjective measures were found for either group. Therefore, the main hypothesis must be rejected. Discussion: These findings suggest that high loudness levels do not positively influence ergometer performance. The high acceptance of loud music and its perceived appropriateness could be based on erroneous beliefs or stereotypes. The reasons for the widespread use of loud music in fitness sports need further investigation. Reducing loudness during fitness exercise may not compromise physical performance or perceived effort.
Turn Off the Music! Music Impairs Visual Associative Memory Performance in Older Adults.
Reaves, Sarah; Graham, Brittany; Grahn, Jessica; Rabannifard, Parissa; Duarte, Audrey
2016-06-01
Whether we are explicitly listening to it or not, music is prevalent in our environment. Surprisingly, little is known about the effect of environmental music on concurrent cognitive functioning and whether young and older adults are differentially affected by music. Here, we investigated the impact of background music on a concurrent paired associate learning task in healthy young and older adults. Young and older adults listened to music or to silence while simultaneously studying face-name pairs. Participants' memory for the pairs was then tested while listening to either the same or different music. Participants also made subjective ratings about how distracting they found each song to be. Despite the fact that all participants rated music as more distracting to their performance than silence, only older adults' associative memory performance was impaired by music. These results are most consistent with the theory that older adults' failure to inhibit processing of distracting task-irrelevant information, in this case background music, contributes to their memory impairments. These data have important practical implications for older adults' ability to perform cognitively demanding tasks even in what many consider to be an unobtrusive environment. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Nonverbal auditory working memory: Can music indicate the capacity?
Jeong, Eunju; Ryu, Hokyoung
2016-06-01
Different working memory (WM) mechanisms that underlie words, tones, and timbres have been proposed in previous studies. In this regard, the present study developed a WM test with nonverbal sounds and compared it to the conventional verbal WM test. A total of twenty-five non-music-major, right-handed college students were presented with four different types of sounds (words, syllables, pitches, timbres) that varied from two to eight digits in length. Both accuracy and oxygenated hemoglobin (oxyHb) were measured. The results showed significant effects of the number of targets on accuracy and of sound type on oxyHb. A further analysis showed prefrontal asymmetry, with pitch being processed by the right hemisphere (RH) and timbre by the left hemisphere (LH). These findings suggest a potential for employing musical sounds (i.e., pitch and timbre) as complementary stimuli to conventional nonverbal WM tests, allowing additional examination of their asymmetrical roles in the prefrontal regions. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
White, Kevin N.
2007-01-01
Many students in a fourth grade classroom at Logan Elementary School are expressing numerous types of negative behaviors, are not motivated to learn, and do not stay on-task. In an effort to change these students, an action research study was conducted that implemented background music in the classroom. There were ten fourth grade students who…
ERIC Educational Resources Information Center
Kentucky State Dept. of Education, Frankfort.
This document is a statement of the basic music skills that Kentucky students should develop. This skills list does not replace any locally developed curriculum. It is intended as a guide for local school districts in Kentucky in their development of a detailed K-12 curriculum. The skills presented are considered basic to a sound education program…
Musical and poetic creativity and epilepsy.
Hesdorffer, Dale C; Trimble, Michael
2016-04-01
Associations between epilepsy and musical or poetic composition have received little attention. We reviewed the literature on links between poetic and musical skills and epilepsy, limiting this to the Western canon. While several composers were said to have had epilepsy, John Hughes concluded that none of the major classical composers thought to have had epilepsy actually had it. The only composer with epilepsy that we could find was the contemporary composer Hikari Oe, who has autism and developed epilepsy at age 15 years. In his childhood years, his mother found that he had an ability to identify bird sounds and the keys of songs, and began teaching him piano. Hikari is able to compose in his head when his seizures are not severe, but when his seizures worsen, his creativity is lost. Music critics have commented on the simplicity of his musical composition and its monotonous sound. Our failure to find evidence of musical composers with epilepsy finds parallels with poetry, where there are virtually no established poets with epilepsy. Those with seizures include Lord George Byron in the setting of terminal illness, Algernon Swinburne who had alcohol-related seizures, Charles Lloyd who had seizures and psychosis, Edward Lear who had childhood-onset seizures, and Vachel Lindsay. The possibility that Emily Dickinson had epilepsy is also discussed. It has not been possible to identify great talents with epilepsy who excel in poetic or musical composition. There are few published poets with epilepsy and no great composers. Why is this? Similarities between music and poetry include meter, tone, stress, rhythm, and form, and much poetry is sung with music. It is likely that great musical and poetic compositions demand a greater degree of concentration and memory than is possible in epilepsy, resulting in problems retaining a musical and mathematical structure over time. The lack of association between recognizable neuropsychiatric disorders and these skills is a gateway to…
The Five-String Banjo in the Music Classroom
ERIC Educational Resources Information Center
Smith, Kenneth H.
2011-01-01
The banjo is an instrument of unique image and sound. It has a long history in North America from its arrival on slave ships from North Africa to its contemporary use in jazz and popular music. Adding the instrument to the general music classroom can open new realms of timbre and new avenues of exploration into the instruments of cultures around…
A gray matter of taste: sound perception, music cognition, and Baumgarten's aesthetics.
Pannese, Alessia
2012-09-01
Music is an ancient and ubiquitous form of human expression. One important component for which music is sought after is its aesthetic value, whose appreciation has typically been associated with largely learned, culturally determined factors, such as education, exposure, and social pressure. However, neuroscientific evidence shows that the aesthetic response to music is often associated with automatic, physically- and biologically-grounded events, such as shivers, chills, increased heart rate, and motor synchronization, suggesting the existence of an underlying biological platform upon which contextual factors may act. Drawing on philosophical notions and neuroscientific evidence, I argue that, although there is no denying that social and cultural context play a substantial role in shaping the aesthetic response to music, these act upon largely universal, biological mechanisms involved with neural processing. I propose that the simultaneous presence of culturally-influenced and biologically-determined contributions to the aesthetic response to music epitomizes Baumgarten's equation of sensory perception with taste. Taking the argument one step further, I suggest that the heavily embodied aesthetic response to music bridges the cleavage between the two discrepant meanings (the one referring to sensory perception, the other to judgments of taste) traditionally attributed to the word "aesthetics" in the sciences and the humanities. Copyright © 2012 Elsevier Ltd. All rights reserved.
Non Linear Assessment of Musical Consonance
NASA Astrophysics Data System (ADS)
Trulla, Lluis Lligoña; Guiliani, Alessandro; Zimatore, Giovanna; Colosimo, Alfredo; Zbilut, Joseph P.
2005-12-01
The position of intervals and the degree of musical consonance can be objectively explained by temporal series formed by mixing two pure sounds covering an octave. This result is achieved by means of Recurrence Quantification Analysis (RQA) without considering either overtones or physiological hypotheses. The resulting prediction of consonance can be considered a novel solution to Galileo's conjecture on the nature of consonance. It constitutes an objective link between musical performance and listeners' hearing activity.
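A minimal sketch of the underlying idea (not the paper's exact procedure): mix two pure tones at a given frequency ratio, time-delay embed the series, and compute the recurrence rate, i.e. the fraction of embedded states that return close to one another. The embedding dimension, delay, threshold, base pitch, and durations below are illustrative assumptions.

```python
import numpy as np

def recurrence_rate(signal, dim=3, delay=4, eps=0.1):
    """Fraction of embedded-state pairs closer than eps (a simple %REC)."""
    n = len(signal) - (dim - 1) * delay
    # time-delay embedding: each row is (x[t], x[t+delay], ..., x[t+(dim-1)*delay])
    emb = np.stack([signal[i * delay: i * delay + n] for i in range(dim)], axis=1)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    return float(np.mean(dists < eps))

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
base = 220.0  # assumed reference pitch (A3)

def mix(ratio):
    """Two pure tones at the given frequency ratio, peak-normalized."""
    s = np.sin(2 * np.pi * base * t) + np.sin(2 * np.pi * base * ratio * t)
    return s / np.abs(s).max()

fifth = recurrence_rate(mix(3 / 2))       # consonant interval (3:2)
tritone = recurrence_rate(mix(2 ** 0.5))  # dissonant interval (irrational ratio)
print(fifth, tritone)
```

The paper reports that recurrence measures of such mixed-tone series track the traditional consonance ordering of intervals; this toy version only illustrates the mechanics, and the specific values depend strongly on the chosen parameters.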
Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion.
Schutz, Michael
2017-01-01
Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad ones. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally "happy") pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in the use of emotional cues between these domains, it raises questions about how composers "trade off" cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music, widely recognized for its artistic significance, complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.
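For context on the reported pitch difference: in 12-tone equal temperament a major second is two semitones, a frequency ratio of 2^(2/12), so "a major second higher" means roughly 12% higher in frequency. A quick check:

```python
# equal-temperament interval arithmetic: each semitone multiplies frequency by 2^(1/12)
semitone = 2 ** (1 / 12)
major_second = semitone ** 2      # a major second spans two semitones
print(round(major_second, 4))     # → 1.1225, i.e. about 12% higher in frequency
```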
Crowe, Barbara J; Rio, Robin
2004-01-01
This article reviews the use of technology in music therapy practice and research for the purpose of providing music therapy educators and clinicians with specific and accurate accounts of the types and benefits of technology being used in various settings. Additionally, this knowledge will help universities comply with National Association of Schools of Music requirements and help to standardize the education and training of music therapists in this rapidly changing area. Information was gathered through a literature review of music therapy and related professional journals and a wide variety of books and personal communications. More data were gathered in a survey requesting information on current use of technology in education and practice. This solicitation was sent to all American Music Therapy Association approved universities and clinical training directors. Technology applications in music therapy are organized according to the following categories: (a) adapted musical instruments, (b) recording technology, (c) electric/electronic musical instruments, (d) computer applications, (e) medical technology, (f) assistive technology for the disabled, and (g) technology-based music/sound healing practices. The literature reviewed covers 177 books and articles from a span of almost 40 years. Recommendations are made for incorporating technology into music therapy course work and for review and revision of AMTA competencies. The need for an all-encompassing clinical survey of the use of technology in current music therapy practice is also identified.
When the Sound Becomes the Goal. 4E Cognition and Teleomusicality in Early Infancy
Schiavio, Andrea; van der Schyff, Dylan; Kruse-Weber, Silke; Timmers, Renee
2017-01-01
In this paper we explore early musical behaviors through the lenses of the recently emerged “4E” approach to mind, which sees cognitive processes as Embodied, Embedded, Enacted, and Extended. In doing so, we draw from a range of interdisciplinary research, engaging in critical and constructive discussions with both new findings and existing positions. In particular, we refer to observational research by French pedagogue and psychologist François Delalande, who examined infants' first “sound discoveries” and individuated three different musical “conducts” inspired by the “phases of the game” originally postulated by Piaget. Elaborating on such ideas we introduce the notion of “teleomusicality,” which describes the goal-directed behaviors infants adopt to explore and play with sounds. This is distinguished from the developmentally earlier “protomusicality,” which is based on music-like utterances, movements, and emotionally relevant interactions (e.g., with primary caregivers) that do not entail a primary focus on sound itself. The development from protomusicality to teleomusicality is discussed in terms of an “attentive shift” that occurs between 6 and 10 months of age. This forms the basis of a conceptual framework for early musical development that emphasizes the emergence of exploratory, goal-directed (i.e., sound-oriented), and self-organized musical actions in infancy. In line with this, we provide a preliminary taxonomy of teleomusical processes discussing “Original Teleomusical Acts” (OTAs) and “Constituted Teleomusical Acts” (CTAs). We argue that while OTAs can be easily witnessed in infants' exploratory behaviors, CTAs involve the mastery of more specific and complex goal-directed chains of actions central to musical activity. PMID:28993745
Idrobo-Ávila, Ennio H; Loaiza-Correa, Humberto; van Noorden, Leon; Muñoz-Bolaños, Flavio G; Vargas-Cañas, Rubiel
2018-01-01
Background: For some time now, the effects of sound, noise, and music on the human body have been studied. However, despite research done over time, it is still not completely clear what influence, interaction, and effects sounds have on the human body. That is why it is necessary to conduct new research on this topic. Thus, in this paper, a systematic review is undertaken in order to integrate research related to several types of sound, both pleasant and unpleasant, specifically noise and music. In addition, it includes as much research as possible to give stakeholders a more general vision of relevant elements regarding methodologies, study subjects, stimuli, analysis, and experimental designs in general. This study has been conducted in order to make a genuine contribution to this area and perhaps to raise the quality of future research about sound and its effects on ECG signals. Methods: This review was carried out by independent researchers, through three search equations, in four different databases spanning engineering, medicine, and psychology. Inclusion and exclusion criteria were applied and studies published between 1999 and 2017 were considered. The selected documents were read and analyzed independently by each group of researchers and conclusions were subsequently agreed among all of them. Results: Despite the differences between the outcomes of the selected studies, some common factors were found among them. Thus, in noise studies where both BP and HR increased or tended to increase, it was noted that HRV (HF and LF/HF) changes with both sound and noise stimuli, whereas GSR changes with sound and musical stimuli. Furthermore, LF also showed changes with exposure to noise. Conclusion: In many cases, samples displayed a limitation in experimental design, and in diverse studies there was a lack of a control group. There was a lot of variability in the presented stimuli, providing a wide overview of the effects they could produce in humans.
The role of music and song in human communication.
Ujfalussy, J
1993-01-01
It is only on the higher level of abstraction and generalization that the two human branches of acoustic communication, speech and music, are separated from each other. Speech is primarily adjusted to the conceptual-verbal symbols and representation of an objectified, static world. In linguistic communication the main role is played by the elements of noise, the consonants. It has never been doubted that music is a kind of communication, the mediator of human relationships, but it has been a question what music wants to express. Since the Pythagoreans, some believe they find the key to interpreting its message in the common quantifiable nature of the musical medium and the cosmos. Another historical tradition considered music the direct expression of human emotions. Proponents of the doctrine of imitation derived music from the intonation of speech, and the text seems for many to be a support in "understanding" music. Music, separated from the primary source of sound phenomena and their direct sensual effect, constructed a specific communication system. It possesses an inestimable potential richness of discrete pitches and times, colours and sound intensity. The infinite potentials of successive and simultaneous combinations are suitable for constructing the audible, dynamic models of human relations and types of behaviour, internal events and interactions, different situations. European polyphony established a strictly regulated, closed syntax of musical communication which comes close to conceptual precision. Its logic is based upon the natural potentials of the kinship of pitches and the human organ of hearing. The live, mobile network of the relations thus created is regulated by a further developed quasi-binary logic. (ABSTRACT TRUNCATED AT 250 WORDS)
Georges, Patrick
2017-01-01
This paper proposes a statistical analysis that captures similarities and differences between classical music composers with the eventual aim to understand why particular composers 'sound' different even if their 'lineages' (influences network) are similar or why they 'sound' alike if their 'lineages' are different. In order to do this we use statistical methods and measures of association or similarity (based on presence/absence of traits such as specific 'ecological' characteristics and personal musical influences) that have been developed in biosystematics, scientometrics, and bibliographic coupling. This paper also represents a first step towards a more ambitious goal of developing an evolutionary model of Western classical music.
The warm, rich sound of valve guitar amplifiers
NASA Astrophysics Data System (ADS)
Keeports, David
2017-03-01
Practical solid state diodes and transistors have made glass valve technology nearly obsolete. Nevertheless, valves survive largely because electric guitar players much prefer the sound of valve amplifiers to the sound of transistor amplifiers. This paper discusses the introductory-level physics behind that preference. Overdriving an amplifier adds harmonics to an input sound. While a moderately overdriven valve amplifier produces strong even harmonics that enhance a sound, an overdriven transistor amplifier creates strong odd harmonics that can cause dissonance. The functioning of a triode valve explains its creation of even and odd harmonics. Music production software enables the examination of both the wave shape and the harmonic content of amplified sounds.
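The even-vs-odd harmonic point can be illustrated numerically. In the sketch below, symmetric hard clipping stands in (crudely) for an overdriven push-pull transistor stage, and an asymmetric tanh curve for a single-ended triode; the transfer curves, drive level, and bias are illustrative assumptions, not models of any particular amplifier.

```python
import numpy as np

fs, f0 = 48000, 440.0
t = np.arange(0, 0.5, 1 / fs)          # 0.5 s = exactly 220 periods of 440 Hz
x = np.sin(2 * np.pi * f0 * t)

# symmetric hard clipping: half-wave symmetry cancels the even harmonics
sym = np.clip(3 * x, -1, 1)

# asymmetric soft clipping (biased tanh): the asymmetry creates even harmonics
asym = np.tanh(3 * x + 0.5) - np.tanh(0.5)

def harmonic_level(signal, k):
    """Magnitude of the k-th harmonic of f0, read off the FFT of the signal."""
    spec = np.abs(np.fft.rfft(signal))
    bin_k = int(round(k * f0 * len(signal) / fs))
    return spec[bin_k]

# the 2nd harmonic is strong in the asymmetric case, near zero in the symmetric one
print(harmonic_level(sym, 2), harmonic_level(asym, 2))
```

Playing the two waveforms back to back makes the difference audible as well as visible: the biased curve adds an octave-related "warmth" that the symmetric one lacks.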
ERIC Educational Resources Information Center
Clarke, Dana
1988-01-01
The article describes a program which introduced classical music to 18 students in a residential treatment program for adolescents with a history of substance abuse. Use as background music progressed to students requesting tape copies for personal use and group attendance at a symphony rehearsal and concert. (DB)
Food approach conditioning and discrimination learning using sound cues in benthic sharks.
Vila Pouca, Catarina; Brown, Culum
2018-07-01
The marine environment is filled with biotic and abiotic sounds. Some of these sounds predict important events that influence fitness while others are unimportant. Individuals can learn specific sound cues and 'soundscapes' and use them for vital activities such as foraging, predator avoidance, communication and orientation. Most research with sounds in elasmobranchs has focused on hearing thresholds and attractiveness to sound sources, but very little is known about their abilities to learn about sounds, especially in benthic species. Here we investigated if juvenile Port Jackson sharks could learn to associate a musical stimulus with a food reward, discriminate between two distinct musical stimuli, and whether individual personality traits were linked to cognitive performance. Five out of eight sharks were successfully conditioned to associate a jazz song with a food reward delivered in a specific corner of the tank. We observed repeatable individual differences in activity and boldness in all eight sharks, but these personality traits were not linked to the learning performance assays we examined. These sharks were later trained in a discrimination task, where they had to distinguish between the same jazz and a novel classical music song, and swim to opposite corners of the tank according to the stimulus played. The sharks' performance to the jazz stimulus declined to chance levels in the discrimination task. Interestingly, some sharks developed a strong side bias to the right, which in some cases was not the correct side for the jazz stimulus.
ERIC Educational Resources Information Center
Gande, Andrea; Kruse-Weber, Silke
2017-01-01
In response to Europe's societal challenges, such as current issues about migration, the Institute of Music Education at the University of Music and Performing Arts Graz established Meet4Music (M4M), a low-threshold community music project. M4M is open to individuals from all sociocultural and musical backgrounds and ages, and provides them with…
Music is Physics. [CD-ROM]. The Science Club.
ERIC Educational Resources Information Center
1999
This CD-ROM, for ages 10-14, provides activities to answer questions such as what sound is; if we can see it; whether it travels faster through air, water, solids, or liquids; and how doctors, sailors, prospectors, architects, and engineers use sound in their work. This disc includes over 100 scientific concepts in music, acoustics, and anatomy;…
Music and the mind: a new interdisciplinary course on the science of musical experience.
Prichard, J Roxanne; Cornett-Murtada, Vanessa
2011-01-01
In this paper the instructors describe a new team-taught transdisciplinary seminar, "Music and Mind: The Science of Musical Experience." The instructors, with backgrounds in music and neuroscience, valued the interdisciplinary approach as a way to capture student interest and to reflect the inherent interconnectivity of neuroscience. The course covered foundational background information about the science of hearing and musical perception and about the phenomenology of musical creation and experience. This two-credit honors course, which attracted students from eleven majors, integrated experiential learning (active listening, journaling, conducting mini-experiments) with rigorous reflection and discussion of academic research. The course culminated in student-led discussions and presentations of final projects around hot topics in the science of music, such as the 'Mozart Effect,' music and religious experience, etc. Although this course was a two-credit seminar, it could easily be expanded to a four-credit lecture or laboratory course. Student evaluations reveal that the course was successful in meeting the learning objectives, that students were intrinsically motivated to learn more about the discipline, and that the team-taught, experiential learning approach was a success.
Music and the Mind: A New Interdisciplinary Course on the Science of Musical Experience
Prichard, J. Roxanne; Cornett-Murtada, Vanessa
2011-01-01
In this paper the instructors describe a new team-taught transdisciplinary seminar, “Music and Mind: The Science of Musical Experience.” The instructors, with backgrounds in music and neuroscience, valued the interdisciplinary approach as a way to capture student interest and to reflect the inherent interconnectivity of neuroscience. The course covered foundational background information about the science of hearing and musical perception and about the phenomenology of musical creation and experience. This two-credit honors course, which attracted students from eleven majors, integrated experiential learning (active listening, journaling, conducting mini-experiments) with rigorous reflection and discussion of academic research. The course culminated in student-led discussions and presentations of final projects around hot topics in the science of music, such as the ‘Mozart Effect,’ music and religious experience, etc. Although this course was a two-credit seminar, it could easily be expanded to a four-credit lecture or laboratory course. Student evaluations reveal that the course was successful in meeting the learning objectives, that students were intrinsically motivated to learn more about the discipline, and that the team-taught, experiential learning approach was a success. PMID:23494097
3 CFR 8389 - Proclamation 8389 of June 2, 2009. African-American Music Appreciation Month, 2009
Code of Federal Regulations, 2010 CFR
2010-01-01
... Music Appreciation Month, 2009 8389 Proclamation 8389 Presidential Documents Proclamations Proclamation 8389 of June 2, 2009 Proc. 8389 African-American Music Appreciation Month, 2009 By the President of the... sounds. They have enriched American music and captured the diversity of our Nation. During African...
Sonic Kayaks: Environmental monitoring and experimental music by citizens
Kemp, Kirsty M.; Matthews, Kaffe; Garrett, Joanne K.; Griffiths, David J.
2017-01-01
The Sonic Kayak is a musical instrument used to investigate nature and developed during open hacklab events. The kayaks are rigged with underwater environmental sensors, which allow paddlers to hear real-time water temperature sonifications and underwater sounds, generating live music from the marine world. Sensor data is also logged every second with location, time and date, which allows for fine-scale mapping of water temperatures and underwater noise that was previously unattainable using standard research equipment. The system can be used as a citizen science data collection device, research equipment for professional scientists, or a sound art installation in its own right. PMID:29190283
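A temperature-to-pitch mapping of the kind the Sonic Kayak performs can be sketched in a few lines of Python. The working range and pitch range below are illustrative assumptions, not values taken from the project:

```python
def sonify_temperature(temp_c, t_min=5.0, t_max=25.0,
                       f_min=220.0, f_max=880.0):
    """Map a water-temperature reading (deg C) to a tone frequency (Hz).

    Linear mapping over an assumed working range; readings outside the
    range are clamped. A real-time system would feed the result to a
    synthesizer voice.
    """
    t = max(t_min, min(t_max, temp_c))
    frac = (t - t_min) / (t_max - t_min)
    return f_min + frac * (f_max - f_min)

print(sonify_temperature(15.0))  # 550.0 Hz, halfway up the two-octave range
```

Mapping temperature to pitch rather than, say, loudness keeps small differences audible while paddling; the actual system also plays live hydrophone audio alongside the sensor sonification.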
In dubio pro silentio – Even Loud Music Does Not Facilitate Strenuous Ergometer Exercise
Kreutz, Gunter; Schorer, Jörg; Sojke, Dominik; Neugebauer, Judith; Bullack, Antje
2018-01-01
Background: Music listening is widespread in amateur sports. Ergometer exercise is one such activity that is often performed with loud music. Aim and Hypotheses: We investigated the effects of electronic music at different intensity levels on ergometer performance (physical performance, force on the pedal, pedaling frequency), perceived fatigue, and heart rate in healthy adults. We assumed that higher sound intensity levels are associated with greater ergometer performance and less perceived effort, particularly for untrained individuals. Methods: Groups of highly trained and less trained healthy males (N = 40; age = 25.25 years; SD = 3.89 years) were tested individually on an ergometer while electronic dance music was played at 0, 65, 75, and 85 dB. Participants assessed their music experience during the experiment. Results: Most participants rated the music as not too loud (65%), motivating (77.50%), appropriate for this sports exercise (90%), and having the right tempo (67.50%). Participants noticed changes in the acoustical environment with increasing intensity levels, but no further effects on any of the physical or other subjective measures were found for either group. Therefore, the main hypothesis must be rejected. Discussion: These findings suggest that high loudness levels do not positively influence ergometer performance. The high acceptance of loud music and its perceived appropriateness could be based on erroneous beliefs or stereotypes. The reasons for the widespread use of loud music in fitness sports need further investigation. Reducing loudness during fitness exercise may not compromise physical performance or perceived effort. PMID:29867622
Music close to one's heart: heart rate variability with music, diagnostic with e-bra and smartphone
NASA Astrophysics Data System (ADS)
Hegde, Shantala; Kumar, Prashanth S.; Rai, Pratyush; Mathur, Gyanesh N.; Varadan, Vijay K.
2012-04-01
Music is a powerful elicitor of emotions. Emotions evoked by music, through autonomic correlates, have been shown to cause significant modulation of parameters like heart rate and blood pressure. Consequently, Heart Rate Variability (HRV) analysis can be a powerful tool to explore evidence-based therapeutic functions of music and conduct empirical studies on the effect of musical emotion on heart function. However, there are limitations with current studies. HRV analysis has produced variable results for different emotions evoked via music, owing to variability in the methodology and the nature of the music chosen. Therefore, a pragmatic understanding of HRV correlates of musical emotion in individuals listening to specifically chosen music whilst carrying out day-to-day routine activities is needed. In the present study, we aim to study HRV as a single case study, using an e-bra with nano-sensors to record heart rate in real time. The e-bra, developed previously, has several salient features that make it conducive to this study: a fully integrated garment, dry electrodes for easy use, and unrestricted mobility. The study considers two experimental conditions: first, HRV will be recorded when there is no music in the background; and second, when music chosen by the researcher and by the subject is playing in the background.
Towards automatic music transcription: note extraction based on independent subspace analysis
NASA Astrophysics Data System (ADS)
Wellhausen, Jens; Hoynck, Michael
2005-01-01
Due to the increasing amount of music available electronically, the need for automatic search, retrieval, and classification systems for music is becoming ever more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis, and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.
Towards automatic music transcription: note extraction based on independent subspace analysis
NASA Astrophysics Data System (ADS)
Wellhausen, Jens; Höynck, Michael
2004-12-01
Due to the increasing amount of music available electronically, the need for automatic search, retrieval, and classification systems for music is becoming ever more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis, and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.
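The subspace idea behind the note-extraction stage can be illustrated with a plain matrix factorization of a segment's magnitude spectrogram. The sketch below (NumPy only) uses an SVD in place of the paper's full Independent Subspace Analysis and omits the note-accurate segmentation and MIDI assembly, so it is a simplification of the idea rather than the published algorithm:

```python
import numpy as np

def note_spectral_bases(audio, sr, n_notes=2, nfft=1024, hop=512):
    """Approximate the spectra of simultaneously sounding notes by
    factoring the magnitude spectrogram of an audio segment."""
    frames = []
    for start in range(0, len(audio) - nfft + 1, hop):
        win = audio[start:start + nfft] * np.hanning(nfft)
        frames.append(np.abs(np.fft.rfft(win)))
    S = np.array(frames).T                 # (freq_bins, time_frames)
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :n_notes]                  # leading spectral basis vectors

# Toy segment: two "notes" with different amplitude envelopes.
sr = 8000
t = np.arange(sr) / sr
seg = (np.exp(-3 * t) * np.sin(2 * np.pi * 440 * t)
       + (1 - np.exp(-3 * t)) * np.sin(2 * np.pi * 660 * t))
bases = note_spectral_bases(seg, sr)
peak_hz = np.argmax(np.abs(bases[:, 0])) * sr / 1024
```

An ICA step on top of such a decomposition (the "independent" part of ISA) rotates the orthogonal SVD bases toward statistically independent components, which tend to align better with individual notes.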
Mentoring Music Educators in Gospel Music Pedagogy in the Classroom
ERIC Educational Resources Information Center
Turner, Patrice Elizabeth
2009-01-01
Since the early 20th century, gospel music has become increasingly popular in the United States. The popularity is making it appealing to perform in public schools. However, many choral and general music educators did not experience the tradition during their formative years and/or have not received training or background in its instruction. …
Inexpensive Instruments for a Sound Unit
NASA Astrophysics Data System (ADS)
Brazzle, Bob
2011-04-01
My unit on sound and waves is embedded within a long-term project in which my high school students construct a musical instrument out of common materials. The unit culminates with a performance assessment: students play the first four measures of "Somewhere Over the Rainbow"—chosen because of the octave interval of the first two notes—in the key of C, and write a short paper describing the theory underlying their instrument. My students have done this project for the past three years, and it continues to evolve. This year I added new instructional materials that I developed using a freeware program called Audacity. This software is very intuitive, and my students used it to develop their musical instruments. In this paper I will describe some of the inexpensive instructional materials in my sound unit, and how they fit with my learning goals.
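The octave interval that motivates the song choice corresponds to an exact doubling of frequency in equal temperament, which students can verify against their instruments. A small check (assuming the leap is C4 up to C5; the article specifies the key of C but not the octave placement):

```python
def note_freq(midi_note):
    """Equal-tempered frequency in Hz for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12)

# "Some-where" spans one octave: C4 (MIDI 60) up to C5 (MIDI 72).
f_c4, f_c5 = note_freq(60), note_freq(72)
print(round(f_c4, 2), round(f_c5, 2))  # 261.63 523.25 -- a 2:1 ratio
```

A spectrum view such as Audacity's can confirm a homemade instrument's measured fundamental against these targets.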
Music exposure and hearing disorders: an overview.
Zhao, Fei; Manchaiah, Vinaya K C; French, David; Price, Sharon M
2010-01-01
It has been generally accepted that excessive exposure to loud music causes various hearing symptoms (e.g. tinnitus) and consequently leads to a risk of permanent hearing damage, known as noise-induced hearing loss (NIHL). Such potential risk of NIHL due to loud music exposure has been widely investigated in musicians and people working in music venues. With advancements in sound technology and rapid developments in the music industry, increasing numbers of people, particularly adolescents and young adults, are exposing themselves to music on a voluntary basis at potentially harmful levels, and over a substantial period of time, which can also cause NIHL. However, because of insufficient audiometric evidence of hearing loss caused purely by music exposure, there is still disagreement and speculation about the risk of hearing loss from music exposure alone. Many studies have suggested using advanced audiological measurements as more sensitive and efficient tools to monitor hearing status as early indicators of cochlear dysfunction. The purpose of this review is to provide further insight into the potential risk of hearing loss caused by exposure to loud music, and thus contribute to further raising awareness of music induced hearing loss.
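Exposure risk of the kind discussed in this review is often quantified against occupational limits such as the NIOSH recommended exposure limit: 8 hours at 85 dBA, with the permissible time halving for every 3 dB above that. A sketch of this standard calculation (the formula comes from the NIOSH criteria, not from the review itself):

```python
def permissible_hours(level_dba, criterion=85.0, exchange_db=3.0):
    """NIOSH-style permissible exposure duration: 8 h at the criterion
    level, halved for every `exchange_db` decibels above it."""
    return 8.0 / 2.0 ** ((level_dba - criterion) / exchange_db)

print(permissible_hours(85.0))   # 8.0 hours
print(permissible_hours(100.0))  # 0.25 hours, i.e. 15 minutes at a loud-concert level
```

By this criterion, listening through earphones at 100 dBA reaches a full daily dose in a quarter of an hour, which is why voluntary music exposure is treated alongside occupational noise.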
A computational study on outliers in world music.
Panteli, Maria; Benetos, Emmanouil; Dixon, Simon
2017-01-01
The comparative analysis of world music cultures has been the focus of several ethnomusicological studies in the last century. With the advances of Music Information Retrieval and the increased accessibility of sound archives, large-scale analysis of world music with computational tools is today feasible. We investigate music similarity in a corpus of 8200 recordings of folk and traditional music from 137 countries around the world. In particular, we aim to identify music recordings that are most distinct compared to the rest of our corpus. We refer to these recordings as 'outliers'. We use signal processing tools to extract music information from audio recordings, data mining to quantify similarity and detect outliers, and spatial statistics to account for geographical correlation. Our findings suggest that Botswana is the country with the most distinct recordings in the corpus and China is the country with the most distinct recordings when considering spatial correlation. Our analysis includes a comparison of musical attributes and styles that contribute to the 'uniqueness' of the music of each country.
A computational study on outliers in world music
Benetos, Emmanouil; Dixon, Simon
2017-01-01
The comparative analysis of world music cultures has been the focus of several ethnomusicological studies in the last century. With the advances of Music Information Retrieval and the increased accessibility of sound archives, large-scale analysis of world music with computational tools is today feasible. We investigate music similarity in a corpus of 8200 recordings of folk and traditional music from 137 countries around the world. In particular, we aim to identify music recordings that are most distinct compared to the rest of our corpus. We refer to these recordings as ‘outliers’. We use signal processing tools to extract music information from audio recordings, data mining to quantify similarity and detect outliers, and spatial statistics to account for geographical correlation. Our findings suggest that Botswana is the country with the most distinct recordings in the corpus and China is the country with the most distinct recordings when considering spatial correlation. Our analysis includes a comparison of musical attributes and styles that contribute to the ‘uniqueness’ of the music of each country. PMID:29253027
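The outlier-detection step can be illustrated with a classical Mahalanobis-distance criterion over per-recording feature vectors. This is a simplified stand-in for the paper's pipeline: the toy data and threshold below are assumptions, and the spatial-correlation modeling is omitted:

```python
import numpy as np

def mahalanobis_outliers(features, threshold=3.0):
    """Flag recordings whose feature vectors lie unusually far from the
    corpus mean. `features`: (n_recordings, n_features); returns a
    boolean mask marking outliers."""
    mu = features.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(features, rowvar=False))
    diff = features - mu
    dist = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
    return dist > threshold

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 4))   # 200 recordings, 4 features
X[0] += 10.0                              # plant one extreme recording
mask = mahalanobis_outliers(X)
print(bool(mask[0]))  # True: the planted recording is flagged
```

The Mahalanobis distance accounts for correlations between features, so a recording is "distinct" relative to how the whole corpus varies, not just far from the mean on one axis.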
Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion
Schutz, Michael
2017-01-01
Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy”) pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech. PMID:29249997
The Musical Culture of Young Adults and Its Relevance to Education for Librarianship.
ERIC Educational Resources Information Center
Stevenson, Gordon
Because of the important role music plays in the lives of young adults, the graduate education of young adult librarians should include a study of the music and the musical behavior of young adults. A formal course might include reviews of research in these areas: (1) the sound recording industry and the economic factors which determine what is…
Gender and the performance of music
Sergeant, Desmond C.; Himonides, Evangelos
2014-01-01
This study evaluates propositions that have appeared in the literature that music phenomena are gendered. Were they present in the musical “message,” gendered qualities might be imparted at any of three stages of the music–communication interchange: the process of composition, its realization into sound by the performer, or imposed by the listener in the process of perception. The research was designed to obtain empirical evidence to enable evaluation of claims of the presence of gendering at these three stages. Three research hypotheses were identified and relevant literature of music behaviors and perception reviewed. New instruments of measurement were constructed to test the three hypotheses: (i) two listening sequences each containing 35 extracts from published recordings of compositions of the classical music repertoire, (ii) four “music characteristics” scales, with polarities defined by verbal descriptors designed to assess the dynamic and emotional valence of the musical extracts featured in the listening sequences. 69 musically-trained listeners listened to the two sequences and were asked to identify the sex of the performing artist of each musical extract; a second group of 23 listeners evaluated the extracts applying the four music characteristics scales. Results did not support claims that music structures are inherently gendered, nor proposals that performers impart their own-sex-specific qualities to the music. It is concluded that gendered properties are imposed subjectively by the listener, and these are primarily related to the tempo of the music. PMID:24795663
Plastic modes of listening: affordance in constructed sound environments
NASA Astrophysics Data System (ADS)
Sjolin, Anders
This thesis is concerned with how the ecological approach to perception, together with listening modes, informs the creation of sound art installations, referred to in this thesis as constructed sound environments. The basis for the thesis has been practice-based research in which the aim of the written component has been to critically investigate the area of sound art in order to map various approaches to participating in and listening to a constructed sound environment. The main areas have been the notion of affordance as coined by James J. Gibson (1986), listening modes as coined by Pierre Schaeffer (1966) and further developed by Michel Chion (1994), aural architects as coined by Blesser and Salter (2007), and the holistic approach to understanding sound art developed by Brandon LaBelle (2006). The findings within the written part of the thesis, based on a qualitative analysis, have informed the practice, which has resulted in artefacts in the form of seven constructed sound environments that also function as case studies for further analysis. The aim of the practice has been to exemplify the methodology, strategy, and process behind the organisation and construction of sound environments. The research points towards the acknowledgment of affordance as the crucial factor in understanding a constructed sound environment. The affordance approach supports the idea that perceiving a sound environment is a top-down process in which the autonomous quality of a constructed sound environment is based upon the perception of structures in the sound material and their relationship with speaker placement and surrounding space. This enables a researcher to sidestep the conflicting poles of musical/abstract and non-musical/realistic classification of sound elements and to regard these poles as included, not separate, elements in the analysis of a constructed sound environment.
The Effect of Music on the Human Stress Response
Thoma, Myriam V.; La Marca, Roberto; Brönnimann, Rebecca; Finkel, Linda; Ehlert, Ulrike; Nater, Urs M.
2013-01-01
Background: Music listening has been suggested to beneficially impact health via stress-reducing effects. However, the existing literature presents itself with a limited number of investigations and with discrepancies in reported findings that may result from methodological shortcomings (e.g. small sample size, no valid stressor). It was the aim of the current study to address this gap in knowledge and overcome previous shortcomings by thoroughly examining music effects across endocrine, autonomic, cognitive, and emotional domains of the human stress response. Methods: Sixty healthy female volunteers (mean age = 25 years) were exposed to a standardized psychosocial stress test after having been randomly assigned to one of three different conditions prior to the stress test: 1) relaxing music (‘Miserere’, Allegri) (RM), 2) sound of rippling water (SW), and 3) rest without acoustic stimulation (R). Salivary cortisol and salivary alpha-amylase (sAA), heart rate (HR), respiratory sinus arrhythmia (RSA), subjective stress perception, and anxiety were repeatedly assessed in all subjects. We hypothesized that listening to RM prior to the stress test, compared to SW or R, would result in a decreased stress response across all measured parameters. Results: The three conditions significantly differed regarding cortisol response (p = 0.025) to the stressor, with highest concentrations in the RM and lowest in the SW condition. After the stressor, sAA (p = 0.026) baseline values were reached considerably faster in the RM group than in the R group. HR and psychological measures did not significantly differ between groups. Conclusion: Our findings indicate that music listening impacted the psychobiological stress system. Listening to music prior to a standardized stressor predominantly affected the autonomic nervous system (in terms of a faster recovery), and to a lesser degree the endocrine and psychological stress response. These findings may help better understanding the
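Time-domain HRV measures of the kind analyzed in such studies are computed directly from successive RR intervals; a minimal sketch using the standard definitions of SDNN and RMSSD (illustrative only, not the authors' analysis pipeline, which also covered RSA and endocrine markers):

```python
import math

def hrv_time_domain(rr_ms):
    """SDNN (overall variability) and RMSSD (beat-to-beat variability,
    often taken as vagally mediated) from RR intervals in milliseconds."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

sdnn, rmssd = hrv_time_domain([800, 810, 790, 805, 795])
print(round(sdnn, 2), round(rmssd, 2))  # 7.91 14.36
```

Higher RMSSD generally indicates stronger parasympathetic (recovery-related) activity, which is why such measures are used to track stress recovery after a stressor.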
Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.
Bidelman, Gavin M; Grall, Jeremy
2014-11-01
Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes. Copyright © 2014 Elsevier Inc. All rights reserved.
Idrobo-Ávila, Ennio H.; Loaiza-Correa, Humberto; van Noorden, Leon; Muñoz-Bolaños, Flavio G.; Vargas-Cañas, Rubiel
2018-01-01
Background: For some time now, the effects of sound, noise, and music on the human body have been studied. However, despite this research, it is still not completely clear what influence, interaction, and effects sounds have on the human body. That is why it is necessary to conduct new research on this topic. Thus, in this paper, a systematic review is undertaken in order to integrate research related to several types of sound, both pleasant and unpleasant, specifically noise and music. In addition, it includes as much research as possible to give stakeholders a more general vision of relevant elements regarding methodologies, study subjects, stimuli, analysis, and experimental designs in general. This study has been conducted in order to make a genuine contribution to this area and perhaps to raise the quality of future research on sound and its effects on ECG signals. Methods: This review was carried out by independent researchers, through three search equations, in four different databases covering engineering, medicine, and psychology. Inclusion and exclusion criteria were applied, and studies published between 1999 and 2017 were considered. The selected documents were read and analyzed independently by each group of researchers, and conclusions were subsequently established among all of them. Results: Despite the differences between the outcomes of the selected studies, some common factors were found among them. Thus, in noise studies where both BP and HR increased or tended to increase, it was noted that HRV (HF and LF/HF) changes with both sound and noise stimuli, whereas GSR changes with sound and musical stimuli. Furthermore, LF also showed changes with exposure to noise. Conclusion: In many cases, the samples were a limitation of the experimental design, and in diverse studies there was a lack of a control group. There was a lot of variability in the presented stimuli, providing a wide overview of the effects they could produce in humans.
Effects of music on sedation depth and sedative use during pediatric dental procedures.
Ozkalayci, Ozlem; Araz, Coskun; Cehreli, Sevi Burcak; Tirali, Resmiye Ebru; Kayhan, Zeynep
2016-11-01
The study aimed to investigate the effects of listening to music or providing sound isolation on the depth of sedation and need for sedatives in pediatric dental patients. Prospective, randomized, and controlled study. Tertiary university hospital. In total, 180 pediatric patients, American Society of Anesthesiologists physical status I and II, who were scheduled for dental procedures of tooth extraction, filling, amputation, and root treatment. Patients were categorized into 3 groups: music, isolation, and control. During the procedures, the patients in the music group listened to Vivaldi's The Four Seasons violin concertos by sound-isolating headphones, whereas the patients in the isolation group wore the headphones but did not listen to music. All patients were sedated by 0.1 mg/kg midazolam and 1 mg/kg propofol. During the procedure, an additional 0.5 mg/kg propofol was administered as required. Bispectral index was used for quantifying the depth of sedation, and total dosage of propofol was used for sedative requirements. The patients' heart rates, oxygen saturations, and Observer's Assessment of Alertness and Sedation Scale and bispectral index scores, which were monitored during the operation, were similar among the groups. In terms of the amount of propofol used, the groups were also similar. Prolonged postoperative recovery cases were found to be significantly more frequent in the control group, according to the recovery duration measurements (P = .004). Listening to music or providing sound isolation during pediatric dental interventions did not alter the sedation level, amount of medication, or hemodynamic variables significantly. This result might be due to the deep sedation levels reached during the procedures. However, listening to music and providing sound isolation might have contributed to shortening the postoperative recovery duration of the patients. Copyright © 2016 Elsevier Inc. All rights reserved.
TauG-guidance of transients in expressive musical performance.
Schogler, Benjaman; Pepping, Gert-Jan; Lee, David N
2008-08-01
The sounds in expressive musical performance, and the movements that produce them, offer insight into temporal patterns in the brain that generate expression. To gain understanding of these brain patterns, we analyzed two types of transient sounds, and the movements that produced them, during a vocal duet and a bass solo. The transient sounds studied were inter-tone f0(t)-glides (the continuous change in fundamental frequency, f0(t), when gliding from one tone to the next), and attack intensity-glides (the continuous rise in sound intensity when attacking, or initiating, a tone). The temporal patterns of the inter-tone f0(t)-glides and attack intensity-glides, and of the movements producing them, all conformed to the mathematical function tauG(t), predicted by General Tau Theory and assumed to be generated in the brain. The values of the parameters of the tauG(t) function were modulated by the performers when they modulated musical expression. Thus the tauG(t) function appears to be a fundamental of brain activity entailed in the generation of expressive temporal patterns of movement and sound.
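In General Tau Theory, the tau of a gap x(t) is its current size divided by its closure rate, tau(t) = x(t)/x'(t); the intrinsic tauG guide corresponds to a gap closed from rest under constant acceleration over a total duration T, which gives tauG(t) = (1/2)(t - T^2/t). The numerical check below is my own rendering of that constant-acceleration form, not code or notation from the paper:

```python
def tau_g(t, T):
    """Intrinsic tauG guide for a gap closed from rest under constant
    acceleration over total duration T (valid for 0 < t < T)."""
    return 0.5 * (t - T * T / t)

def tau_of_gap(t, T, a=1.0, h=1e-6):
    """tau = x/x' for the gap x(t) = (a/2)(T^2 - t^2), computed via a
    finite difference; the acceleration a cancels in the ratio."""
    x = 0.5 * a * (T * T - t * t)
    dx = (0.5 * a * (T * T - (t + h) ** 2) - x) / h
    return x / dx

T = 2.0
for t in (0.5, 1.0, 1.5):
    assert abs(tau_g(t, T) - tau_of_gap(t, T)) < 1e-3
print(tau_g(1.0, T))  # -1.5: at the current rate the gap would close in 1.5 s
```

Tau is negative while the gap is closing; fitting measured tau trajectories of glides against tauG(t) is what allows the performers' expressive timing to be characterized by a few parameters.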
The sound intensity and characteristics of variable-pitch pulse oximeters.
Yamanaka, Hiroo; Haruna, Junichi; Mashimo, Takashi; Akita, Takeshi; Kinouchi, Keiko
2008-06-01
Various studies worldwide have found that sound levels in hospitals significantly exceed the World Health Organization (WHO) guidelines, and that this noise is associated with audible signals from various medical devices. The pulse oximeter is now widely used in health care; however, the health effects associated with the noise from this equipment remain largely unclarified. Here, we analyzed the sounds of variable-pitch pulse oximeters and discussed the possible associated risks of sleep disturbance, annoyance, and hearing loss. The Nellcor N 595 and Masimo SET Radical pulse oximeters were measured for equivalent continuous A-weighted sound pressure levels (LAeq), loudness levels, and loudness. Pulse beep pitches were also identified using Fast Fourier Transform (FFT) analysis and compared with musical pitches as controls. Almost all alarm sounds and pulse beeps from the instruments tested exceeded 30 dBA, a level that may induce sleep disturbance and annoyance. Several alarm sounds emitted by the pulse oximeters exceeded 70 dBA, which is known to induce hearing loss. The loudness of the alarm sound of each pulse oximeter did not change in proportion to the sound volume level. The pitch of each pulse beep did not correspond to musical pitch levels. The results indicate that sounds from pulse oximeters pose a potential risk of not only sleep disturbance and annoyance but also hearing loss, and that these sounds are unnatural for human auditory perception.
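Comparing a measured beep frequency against the nearest equal-tempered musical pitch, as in the FFT comparison described above, can be sketched via MIDI note numbers (a generic conversion, not the authors' exact procedure):

```python
import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def nearest_pitch(freq_hz):
    """Return the nearest equal-tempered pitch name (A4 = 440 Hz) and
    the deviation from it in cents; a large deviation means the sound
    does not correspond to a musical pitch."""
    midi = 69 + 12 * math.log2(freq_hz / 440.0)
    nearest = round(midi)
    cents = 100.0 * (midi - nearest)
    name = NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1)
    return name, cents

print(nearest_pitch(440.0))  # ('A4', 0.0)
```

A beep landing tens of cents away from every equal-tempered pitch would register as "unmusical" in exactly the sense the study describes.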
Anyanwu, Emeka G
2015-06-01
Notable challenges, such as mental distress, boredom, negative moods, and attitudes, have been associated with learning in the cadaver dissection laboratory (CDL). The ability of background music (BM) to enhance the cognitive abilities of students is well documented. The present study was designed to investigate the impact of BM in the CDL and on stress associated with the dissection experience. After 8 wk of normal dissection without BM, various genres of BM were introduced into the cadaver dissection sessions of 260 medical and dental students for 3 wk. Feedback on the impact of BM on students in the CDL and students' attitudes were assessed using a questionnaire. Psychological stress assessment was done using the Psychological Stress Measure 9. Two batches of 30 students each were made to dissect the same areas of the body for 2 h, one batch with BM playing and the other without. The same examination was given to both groups at the end. Over 90% of the participants expressed a desire to incorporate BM into the CDL; 87% of the sampled population that expressed love for music also reported BM to be a very useful tool that could be used to enhance learning conditions in the CDL. A strong positive relationship was established between love for music and its perception as a tool for learning in the CDL (P < 0.001). Students who studied under the influence of BM had significantly higher scores (P < 0.001) in the overall examination result. BM reduced the level of stress associated with the dissection experience by ∼33%. Copyright © 2015 The American Physiological Society.
Indifference to dissonance in native Amazonians reveals cultural variation in music perception.
McDermott, Josh H; Schultz, Alan F; Undurraga, Eduardo A; Godoy, Ricardo A
2016-07-28
Music is present in every culture, but the degree to which it is shaped by biology remains debated. One widely discussed phenomenon is that some combinations of notes are perceived by Westerners as pleasant, or consonant, whereas others are perceived as unpleasant, or dissonant. The contrast between consonance and dissonance is central to Western music, and its origins have fascinated scholars since the ancient Greeks. Aesthetic responses to consonance are commonly assumed by scientists to have biological roots, and thus to be universally present in humans. Ethnomusicologists and composers, in contrast, have argued that consonance is a creation of Western musical culture. The issue has remained unresolved, partly because little is known about the extent of cross-cultural variation in consonance preferences. Here we report experiments with the Tsimane', a native Amazonian society with minimal exposure to Western culture, and comparison populations in Bolivia and the United States that varied in exposure to Western music. Participants rated the pleasantness of sounds. Despite exhibiting Western-like discrimination abilities and Western-like aesthetic responses to familiar sounds and acoustic roughness, the Tsimane' rated consonant and dissonant chords and vocal harmonies as equally pleasant. By contrast, Bolivian city- and town-dwellers exhibited significant preferences for consonance, albeit to a lesser degree than US residents. The results indicate that consonance preferences can be absent in cultures sufficiently isolated from Western music, and are thus unlikely to reflect innate biases or exposure to harmonic natural sounds. The observed variation in preferences is presumably determined by exposure to musical harmony, suggesting that culture has a dominant role in shaping aesthetic responses to music.
Information Dynamics and Aspects of Musical Perception
NASA Astrophysics Data System (ADS)
Dubnov, Shlomo
Musical experience has often been suggested to be related to the forming of expectations and their fulfillment or denial. In terms of information theory, expectancies and predictions serve to reduce uncertainty about the future and can be used to efficiently represent and "compress" data. In this chapter we present an information-theoretic model of musical listening based on the idea that expectations arising from past musical material frame our appraisal of what comes next, and that this process eventually results in the creation of emotions or feelings. Using the notion of "information rate," we measure the amount of information between past and present in the musical signal on different time scales, using statistics of sound spectral features. Several musical pieces are analyzed in terms of short- and long-term information rate dynamics and compared to an analysis of musical form and its structural functions. The findings suggest that a relation exists between information dynamics and musical structure, one that ultimately shapes the human listening experience and feelings such as "wow" and "aha".
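Under a Gaussian (linear-predictive) assumption, the mutual information between past and present reduces to half the log ratio of the signal's marginal variance to its prediction-error variance. The sketch below estimates such an information rate per sample for a one-dimensional feature sequence; it is our simplified stand-in for the chapter's spectral-feature formulation, and the model order is an arbitrary choice.

```python
import numpy as np

def gaussian_info_rate(x, order=4):
    """Estimate I(past; present) per sample under a Gaussian assumption:
    0.5 * log(var(x) / var(prediction error)), in nats.

    The present sample is linearly predicted from the `order` previous
    samples; a predictable signal has a small error variance and hence a
    high information rate."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 0.5 * np.log(np.var(x) / np.var(resid))

rng = np.random.default_rng(0)
ir_noise = gaussian_info_rate(rng.standard_normal(5000))          # ~0: past tells us nothing
ir_tone = gaussian_info_rate(np.sin(np.linspace(0, 200, 5000)))   # large: fully predictable
```

In this toy form, white noise carries no information from past to present, while a pure tone carries a great deal; musical signals fall in between, and tracking the estimate over time gives an information-dynamics profile.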
What Music Isn't and How to Teach It
ERIC Educational Resources Information Center
Berleant, Arnold
2009-01-01
Unlike the other arts, music has no direct connection with the rest of the human world. True, there are bird songs and natural "melodies" in the gurgling of brooks, but these are hardly the materials of music in the way that landscape can be the subject of painting. And no natural sounds can stand alone as quasi-artworks the way that the deeply…
Voice Use Among Music Theory Teachers: A Voice Dosimetry and Self-Assessment Study.
Schiller, Isabel S; Morsomme, Dominique; Remacle, Angélique
2017-07-25
This study aimed (1) to investigate music theory teachers' professional and extra-professional vocal loading and background noise exposure, (2) to determine the correlation between vocal loading and background noise, and (3) to determine the correlation between vocal loading and self-evaluation data. Using voice dosimetry, 13 music theory teachers were monitored for one workweek. The parameters analyzed were voice sound pressure level (SPL), fundamental frequency (F0), phonation time, vocal loading index (VLI), and noise SPL. Spearman correlation was used to correlate vocal loading parameters (voice SPL, F0, and phonation time) and noise SPL. Each day, the subjects self-assessed their voice using visual analog scales. VLI and self-evaluation data were correlated using Spearman correlation. Vocal loading parameters and noise SPL were significantly higher in the professional than in the extra-professional environment. Voice SPL, phonation time, and female subjects' F0 correlated positively with noise SPL. VLI correlated with self-assessed voice quality, vocal fatigue, and amount of singing and speaking voice produced. Teaching music theory is a profession with high vocal demands. More background noise is associated with increased vocal loading and may indirectly increase the risk for voice disorders. Correlations between VLI and self-assessments suggest that these teachers are well aware of their vocal demands and feel their effect on voice quality and vocal fatigue. Visual analog scales seem to represent a useful tool for subjective vocal loading assessment and associated symptoms in these professional voice users. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
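The Spearman analysis used above is simply a Pearson correlation computed on ranks. A minimal self-contained version follows; the synthetic SPL values are invented for illustration (the real study used dosimeter data), and the tie-free assumption is ours.

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation of the ranks (assumes no ties)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Hypothetical data: louder background noise, louder voice (Lombard effect)
rng = np.random.default_rng(1)
noise_spl = np.linspace(55.0, 75.0, 13)                    # dBA, one value per teacher
voice_spl = 60.0 + 0.5 * noise_spl + rng.normal(0, 1, 13)  # dB, noisy monotone trend
rho = spearman_rho(noise_spl, voice_spl)                   # strong positive rho
```

Because only ranks enter the computation, the statistic captures any monotone association between noise exposure and vocal loading, not just a linear one.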
Meaningless artificial sound and its application in urban soundscape research
NASA Astrophysics Data System (ADS)
de Coensel, Bert; Botteldooren, Dick
2004-05-01
Urban areas are increasingly being overwhelmed with uninteresting (traffic) noise. Designing a more matching soundscape for urban parks, quiet backyards, shopping areas, etc., clearly deserves more attention. Urban planners, being architects rather than musical composers, like to have a set of "objective" indicators of the urban soundscape at their disposal. In deriving such indicators, one can assume that the soundscape is appreciated as a conglomerate of sound events, recognized as originating from individual sources by people evaluating it. A more recent line of research assumes that the soundscape as a whole evokes particular emotions. In this research project we follow the latter, more holistic view. Given this choice, the challenge is to create a test setup where subjects are not tempted to react to a sound in a cognitive way, analyzing it into its individual components. Meaningless sound is therefore preferred. After selection of appealing sounds for a given context by subjects, objective indicators can then be extracted. To generate long, complex, but meaningless sound fragments not containing repetition, based on a limited number of parameters, swarm technology is used. This technique has previously been used for creating artificial music and has proved to be very useful.
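As an illustration of the swarm approach mentioned above (a generic boids-style sketch of our own, not the authors' implementation): a small swarm of agents is updated with cohesion and separation rules, and one coordinate of each agent is mapped to a sound-control parameter such as frequency, yielding long, structured but non-repeating parameter streams from only a handful of rule parameters.

```python
import numpy as np

def swarm_step(pos, vel, cohesion=0.02, separation=0.02, inertia=0.9):
    # Boids-style update: pull each agent toward the swarm centroid, push it
    # away from its nearest neighbour (push strength ~ 1/distance), with
    # damped, speed-limited velocities to keep the dynamics bounded.
    n = len(pos)
    to_centre = pos.mean(axis=0) - pos
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + np.eye(n) * 1e9
    j = dist.argmin(axis=1)
    d_min = dist[np.arange(n), j]
    push = diff[np.arange(n), j] / (d_min[:, None] ** 2)
    vel = np.clip(inertia * vel + cohesion * to_centre + separation * push,
                  -0.5, 0.5)
    return pos + vel, vel

rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(16, 2))
vel = np.zeros_like(pos)
freqs = []
for _ in range(200):
    pos, vel = swarm_step(pos, vel)
    freqs.append(220.0 * 2.0 ** pos[:, 0])  # x-coordinate -> pitch in Hz
freqs = np.array(freqs)                      # 200 frames x 16 "voices"
```

Each of the 16 "voices" drifts smoothly in pitch without ever repeating a pattern, which is the kind of complex-but-meaningless material the test setup calls for.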
ERIC Educational Resources Information Center
Fante, Cheryl H.
This study was conducted in an attempt to identify any predictor or combination of predictors of a beginning typewriting student's success. Variables of intelligence, rhythmic ability, musical background, and tapping ability were combined to study their relationship to typewriting speed and accuracy. A sample of 109 high school students was…
Music Education and Deliberative Democracy
ERIC Educational Resources Information Center
Bladh, Stephan; Heimonen, Marja
2007-01-01
In this paper, the authors discuss the influence of democracy and law on music education in Sweden and Finland, and the potential for music education as training in democracy. The latter consideration can be instructive regardless of the nation, or its laws and paradigms of music education. The theoretical background is based on Jurgen Habermas'…