Proton adsorption onto alumina: extension of multisite complexation (MUSIC) theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagashima, K.; Blum, F.D.
1999-09-01
The adsorption isotherm of protons onto a commercial γ-alumina sample was determined in aqueous nitric acid with sodium nitrate as a background electrolyte. Three discrete regions could be discerned in the log-log plots of the proton isotherm determined over the solution pH range 5 to 2. The multisite complexation (MUSIC) model was modified to analyze the simultaneous adsorption of protons onto various kinds of surface species.
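As a rough illustration of the multisite idea (not the authors' fitting procedure), the sketch below sums Langmuir-type proton uptake over several hypothetical surface site types with different affinity constants; the site densities and log K values are placeholders chosen only to show the shape of such an isotherm.

```python
import numpy as np

def proton_adsorption(pH, sites):
    """Total adsorbed proton density (sites/nm^2) summed over site types.

    sites: list of (density_sites_per_nm2, logK) tuples; each type is treated
    as an independent Langmuir-type protonation equilibrium.
    """
    H = 10.0 ** (-np.asarray(pH, dtype=float))   # proton activity
    total = np.zeros_like(H)
    for density, logK in sites:
        K = 10.0 ** logK
        total += density * K * H / (1.0 + K * H)  # fractional protonation x density
    return total

# Hypothetical site types on an alumina surface (values are placeholders).
example_sites = [(3.0, 5.2), (2.0, 7.5), (1.5, 9.8)]
pH_grid = np.linspace(2.0, 5.0, 31)
print(proton_adsorption(pH_grid, example_sites))
```

Plotting the total against proton concentration on a log-log scale would reveal regions dominated by different site types, loosely mirroring the three regions described above.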
Multisite adsorption of cadmium on goethite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venema, P.; Hiemstra, T.; Riemsdijk, W.H. van
1996-11-10
Recently a new general ion adsorption model has been developed for ion binding to mineral surfaces (Hiemstra and van Riemsdijk, 1996). The model uses the Pauling concept of charge distribution (CD) and is an extension of the multi-site complexation (MUSIC) approach. In the CD-MUSIC model the charge of an adsorbing ion that forms an inner-sphere complex is distributed over its ligands, which are present in two different electrostatic planes. In this paper the authors have applied the CD-MUSIC model to the adsorption of metal cations, using an extended data set for cadmium adsorbing on goethite. The adsorption of cadmium and the cadmium-proton exchange ratio were measured as a function of metal ion concentration, pH, and ionic strength. The data could be described well, taking into account the surface heterogeneity resulting from the presence of two different crystal planes (the dominant 110 face and the minor 021 face). The surface species used in the model are consistent with recent EXAFS data. In accordance with the EXAFS results, high-affinity complexes at the 021 face were used in the model.
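To make the charge-distribution idea concrete, here is a minimal, hypothetical sketch of how the charge of an inner-sphere complex can be split between the surface (0) plane and the second electrostatic (1) plane with a distribution factor f; the numbers are illustrative and are not the fitted CD-MUSIC parameters from the paper.

```python
def distribute_charge(z_ion, f):
    """Split the charge z_ion of an inner-sphere complex between two planes.

    f is the fraction of the ion charge attributed to the surface (0) plane
    through the ligands shared with the surface; the remainder is assigned to
    the 1-plane. Values here are illustrative, not fitted CD parameters.
    """
    dz0 = f * z_ion          # charge added to the 0-plane
    dz1 = (1.0 - f) * z_ion  # charge added to the 1-plane
    return dz0, dz1

# Example: a bidentate Cd2+ complex with 40% of its charge on the 0-plane.
dz0, dz1 = distribute_charge(z_ion=+2.0, f=0.4)
print(f"0-plane: {dz0:+.2f} v.u., 1-plane: {dz1:+.2f} v.u.")
```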
ERIC Educational Resources Information Center
Karlsen, Sidsel
2014-01-01
In this article, a multi-sited ethnographic study was taken as a point of departure for exploring how Nordic music teachers, who work in multicultural environments, understand the development of their students' musical agency. The study was based on theories developed within general sociology and the sociology of music, as well as in previous…
ERIC Educational Resources Information Center
Karlsen, Sidsel
2012-01-01
The aim of this article is to explore how immigrant students experience and enact musical agency inside and outside the music lessons in three Nordic lower secondary schools. The research was designed as a multi-sited ethnographic study and the data were collected in Helsinki, Stockholm and Oslo through classroom observations and interviews with…
Developing Music Teacher Identities: An International Multi-Site Study
ERIC Educational Resources Information Center
Ballantyne, Julie; Kerchner, Jody L.; Arostegui, Jose Luis
2012-01-01
This study investigates pre-service music teachers' (PSMT) perceptions of their professional identities. University-level education students in the United States of America (USA), Spain and Australia were all asked interview questions based on general themes relevant to teacher identity development, and their responses were subjected to content…
Weisskirch, Robert S; Zamboanga, Byron L; Ravert, Russell D; Whitbourne, Susan Krauss; Park, Irene J K; Lee, Richard M; Schwartz, Seth J
2013-04-01
The Multi-Site University Study of Identity and Culture (MUSIC) is the product of a research collaboration among faculty members from 30 colleges and universities from across the United States. Using Katz and Martin's (1997, p. 7) definition, the MUSIC research collaboration is "the working together of researchers to achieve the common goals of producing new scientific knowledge." The collaboration involved more than just coauthorship; it served "as a strategy to insert more energy, optimism, creativity and hope into the work of [researchers]" (Conoley & Conoley, 2010, p. 77). The philosophy underlying the MUSIC collaborative was intended to foster natural collaborations among researchers, to provide opportunities for scholarship and mentorship for early career and established researchers, and to support exploration of identity, cultural, and ethnic/racial research ideas by tapping the expertise and interests of the broad MUSIC network of collaborators. In this issue, five research articles present innovative findings from the MUSIC datasets. Two themes run across the articles. The first is emerging research on broadening the constructs and measures of acculturation and ethnic identity and their relation to health risk behaviors and psychosocial and mental health outcomes. The second is the relationship of perceived discrimination to behavioral and mental health outcomes among immigrant populations.
NASA Astrophysics Data System (ADS)
Fitts, Jeffrey P.; Machesky, Michael L.; Wesolowski, David J.; Shang, Xiaoming; Kubicki, James D.; Flynn, George W.; Heinz, Tony F.; Eisenthal, Kenneth B.
2005-08-01
The pH of zero net surface charge (pHpzc) of the α-TiO2 (110) surface was characterized using second-harmonic generation (SHG) spectroscopy. The SHG response was monitored during a series of pH titrations conducted at three NaNO3 concentrations. The measured pHpzc is compared with a pHpzc value calculated using the revised MUltiSIte Complexation (MUSIC) model of surface oxygen protonation. MUSIC model input parameters were independently derived from ab initio calculations of relaxed surface bond lengths for a hydrated surface. The agreement between model (pHpzc 4.76) and experiment (pHpzc 4.8 ± 0.3) supports the incorporation of independently derived structural parameters into predictive models of oxide surface reactivity.
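A hedged sketch of the kind of calculation the revised MUSIC model performs: actual bond valences are derived from metal-oxygen bond lengths via s = exp[(R0 - R)/B], and the protonation constant of a surface oxygen follows from its valence undersaturation, log K ≈ -A(V + Σs), with A ≈ 19.8 and V = -2 for oxygen. The bond lengths and hydrogen-bond counts below are placeholders, not the ab initio values used in the study, and the treatment of hydrogen-bond contributions is simplified.

```python
import math

A = 19.8     # empirical slope of the revised MUSIC model (per valence unit)
B = 0.37     # Angstrom, standard bond-valence constant
V_OXYGEN = -2.0

def bond_valence(R, R0):
    """Actual bond valence from a metal-oxygen bond length (Angstrom)."""
    return math.exp((R0 - R) / B)

def logK_protonation(metal_bonds, n_donating=0, n_accepting=0, s_H=0.8):
    """Revised-MUSIC-style protonation constant of a surface oxygen.

    metal_bonds: list of (R, R0) pairs for the Me-O bonds to the oxygen.
    Donating H bonds contribute s_H, accepted H bonds contribute (1 - s_H).
    """
    s_sum = sum(bond_valence(R, R0) for R, R0 in metal_bonds)
    s_sum += n_donating * s_H + n_accepting * (1.0 - s_H)
    return -A * (V_OXYGEN + s_sum)

# Hypothetical bridging oxygen bound to two Ti atoms (placeholder lengths).
print(logK_protonation([(1.95, 1.815), (1.98, 1.815)], n_accepting=1))
```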
Phillips-Salimi, Celeste R; Donovan Stickler, Molly A; Stegenga, Kristin; Lee, Melissa; Haase, Joan E
2011-08-01
Although treatment fidelity strategies for enhancing the integrity of behavioral interventions have been well described, little has been written about monitoring data collection integrity. This article describes the principles and strategies developed to monitor data collection integrity of the "Stories and Music for Adolescent/Young Adult Resilience During Transplant" study (R01NR008583, U10CA098543, and U10CA095861)-a multi-site Children's Oncology Group randomized clinical trial of a music therapy intervention for adolescents and young adults undergoing stem cell transplant. The principles and strategies outlined in this article provide one model for development and evaluation of a data collection integrity monitoring plan for behavioral interventions that may be adapted by investigators and may be useful to funding agencies and grant application reviewers in evaluating proposals. Copyright © 2011 Wiley Periodicals, Inc.
Tang, Céline; Giaume, Domitille; Guerlou-Demourgues, Liliane; Lefèvre, Grégory; Barboux, Philippe
2018-05-30
To design novel layered materials, a bottom-up strategy is very promising. It consists of (1) synthesizing various layered oxides, (2) exfoliating them, then (3) restacking them in a controlled way. The last step is based on electrostatic interactions between different layered oxides and is difficult to control. The aim of this study is to facilitate this step by predicting the isoelectric point (IEP) of exfoliated materials. The multisite complexation (MUSIC) model was used for this purpose and was shown to predict the IEP from the mean oxidation state of the metal in the (hydr)oxide as the main parameter. The effect of exfoliation on the IEP was also calculated. Starting from platelets with a high ratio of basal surface area to total surface area, we show that the exfoliation process has no impact on the calculated IEP value, as verified experimentally. Moreover, restacked materials containing different monometallic (hydr)oxide layers also have an IEP consistent with values calculated with the model. This study shows that the MUSIC model is a useful tool for predicting the IEP of various complex metal oxides and hydroxides.
Robb, Sheri L; Burns, Debra S; Docherty, Sharron L; Haase, Joan E
2011-11-01
The Stories and Music for Adolescent/Young Adult Resilience during Transplant (SMART) study (R01NR008583; U10CA098543; U10CA095861) is an ongoing multi-site Children's Oncology Group randomized clinical trial testing the efficacy of a therapeutic music video intervention for adolescents/young adults (11-24 years of age) with cancer undergoing stem cell transplant. Treatment fidelity strategies from our trial are consistent with the National Institutes of Health (NIH) Behavior Change Consortium Treatment Fidelity Workgroup (BCC) recommendations and provide a successful working model for treatment fidelity implementation in a large, multi-site behavioral intervention study. In this paper, we summarize 20 specific treatment fidelity strategies used in the SMART trial and how these strategies correspond with NIH BCC recommendations in five specific areas: (1) study design, (2) training providers, (3) delivery of treatment, (4) receipt of treatment, and (5) enactment of treatment skills. Increased use and reporting of treatment fidelity procedures is essential in advancing the reliability and validity of behavioral intervention research. The SMART trial provides a strong model for the application of fidelity strategies to improve scientific findings and addresses the absence of published literature, illustrating the application of BCC recommendations in behavioral intervention studies. Copyright © 2010 John Wiley & Sons, Ltd.
Interaction of cadmium with phosphate on goethite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venema, P.; Hiemstra, T.; Riemsdijk, W.H. van
1997-08-01
Interactions between different ions are important in understanding chemical processes in natural systems. In this study, the simultaneous adsorption of phosphate and cadmium on goethite is studied in detail. The charge distribution (CD)-multisite complexation (MUSIC) model has been successful in describing extended data sets of cadmium adsorption and phosphate adsorption on goethite. In this study, the parameters of this model for these two data sets were combined to describe a new data set of simultaneous adsorption of cadmium and phosphate on goethite. Attention is focused on the surface speciation of cadmium. With the extra information that can be obtained from the interaction experiments, the cadmium adsorption model is refined. For a perfect description of the data, the singly coordinated surface groups at the 110 face of goethite were assumed to form both monodentate and bidentate surface species with cadmium. The CD-MUSIC model is able to describe data sets of both simultaneous and single adsorption of cadmium and phosphate with the same parameters. The model calculations confirmed the idea that only singly coordinated surface groups are reactive for specific ion binding.
Surface structural ion adsorption modeling of competitive binding of oxyanions by metal (hydr)oxides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiemstra, T.; Riemsdijk, W.H. van
1999-02-01
An important challenge in surface complexation models (SCM) is to connect the molecular microscopic reality to macroscopic adsorption phenomena. This study elucidates the primary factor controlling the adsorption process by analyzing the adsorption and competition of PO4, AsO4, and SeO3. The authors show that the structure of the surface complex acting in the dominant electrostatic field can be ascertained as the primary controlling adsorption factor. The surface species of arsenate are identical with those of phosphate and the adsorption behavior is very similar. On the basis of the selenite adsorption, the authors show that the commonly used 2-pK models are incapable of incorporating into the adsorption modeling the correct bidentate binding mechanism found by spectroscopy. The use of the bidentate mechanism leads to a proton-oxyanion ratio and corresponding pH dependence that are too large. The inappropriate intrinsic charge attribution to the primary surface groups and the condensation of the inner-sphere surface complex to a point charge are responsible for this behavior of the commonly used 2-pK models. Both key factors are defined differently in the charge distribution multi-site complexation (CD-MUSIC) model and are based in this model on a surface structural approach. The CD-MUSIC model can successfully describe the macroscopic adsorption phenomena using the surface speciation and binding mechanisms as found by spectroscopy. The model is also able to predict the anion competition well. The charge distribution in the interface is in agreement with the observed structure of surface complexes.
ERIC Educational Resources Information Center
Jordan, Michelle E.; Santori, Diane
2015-01-01
This multisite study investigates dialogic literacy events that revolved around narrative and informational texts in two 3rd-grade classrooms. The authors offer a metaphor of musical improvisation to contemplate dialogic literacy events as part of the repertoire of teaching and learning experiences. In literacy learning, where there is much…
Analysis of mercury adsorption at the gibbsite-water interface using the CD-MUSIC model.
Park, Chang Min
2018-05-22
Mercury (Hg), one of the most toxic substances in nature, has long been released through anthropogenic activity. A correct description of the adsorptive behavior of mercury is important to gain better insight into its fate and transport at natural mineral surfaces, which is a prerequisite for the development of surface complexation models of the adsorption processes. In the present study, simulation experiments on macroscopic Hg(II) sorption by gibbsite (α-Al(OH)3), a representative aluminum (hydr)oxide mineral, were performed using the charge distribution and multi-site complexation (CD-MUSIC) approach with a 1-pK triple plane model (TPM). For this purpose, several data sets which had already been reported in the literature were employed to analyze the effect of pH, ionic strength, and co-existing ions (NO3- and Cl-) on Hg(II) adsorption onto gibbsite. A sequential optimization approach was used to determine the acidity and asymmetric binding constants for electrolyte ions and the affinity constants of the surface species through model simulation using FITEQLC (a modified code of FITEQL 4.0). The model successfully incorporated the presence of inorganic ligands at the dominant edge (100) face of gibbsite with consistent surface species, as evidenced by molecular-scale analysis. The model was verified with an independent set of Hg(II) adsorption data incorporating carbonate binding species in an open gibbsite-water system.
Electrophoretic Study of the SnO2/Aqueous Solution Interface up to 260 degrees C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez-Santiago, V; Fedkin, Mark V.; Wesolowski, David J
2009-01-01
An electrophoresis cell developed in our laboratory was utilized to determine the zeta potential at the SnO2 (cassiterite)/aqueous solution (10^-3 mol kg^-1 NaCl) interface over the temperature range from 25 to 260 °C. Experimental techniques and methods for the calculation of zeta potential at elevated temperature are described. From the obtained zeta potential data as a function of pH, the isoelectric points (IEPs) of SnO2 were obtained for the first time. From these IEP values, the standard thermodynamic functions were calculated for the protonation-deprotonation equilibrium at the SnO2 surface, using the 1-pK surface complexation model. It was found that the IEP values for SnO2 decrease with increasing temperature; this behavior was compared with values predicted by the multisite complexation (MUSIC) model and other semitheoretical treatments and was found to be in excellent agreement.
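Because the 1-pK model ties the protonation constant directly to the point of zero charge (log K ≈ pHpzc), standard thermodynamic functions can be extracted from the temperature dependence of the IEP. The sketch below fits a simple van 't Hoff-type relation (constant ΔH and ΔS) to invented IEP(T) values; these are not the measured SnO2 data nor necessarily the exact treatment used in the paper.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical IEP values vs temperature (placeholders, not the measured data).
T_C = np.array([25.0, 100.0, 200.0, 260.0])
iep = np.array([4.4, 3.9, 3.3, 3.0])

T = T_C + 273.15
# 1-pK model: log K_H ~ pH_pzc, so dG = -RT ln(10) * pH_pzc
dG = -R * T * np.log(10.0) * iep            # J/mol

# Assume dH and dS are temperature independent: dG = dH - T*dS.
# A linear fit of dG against T gives -dS (slope) and dH (intercept).
slope, intercept = np.polyfit(T, dG, 1)
dS, dH = -slope, intercept
print(f"dH ~ {dH/1000:.1f} kJ/mol, dS ~ {dS:.1f} J/(mol K)")
```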
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S.; Jing, C.; Meng, X.
2008-01-01
The mechanism of arsenic re-mobilization in spent adsorbents under reducing conditions was studied using X-ray absorption spectroscopy and surface complexation model calculations. X-ray absorption near edge structure (XANES) spectroscopy demonstrated that As(V) was partially reduced to As(III) in spent granular ferric hydroxide (GFH), titanium dioxide (TiO2), activated alumina (AA) and modified activated alumina (MAA) adsorbents after 2 years of anaerobic incubation. As(V) was completely reduced to As(III) in spent granular ferric oxide (GFO) after the 2-year incubation. The extended X-ray absorption fine structure (EXAFS) spectroscopy analysis showed that As(III) formed bidentate binuclear surface complexes on GFO, as evidenced by an average As(III)-O bond distance of 1.78 Å and As(III)-Fe distance of 3.34 Å. The release of As from the spent GFO and TiO2 was simulated using the charge distribution multi-site complexation (CD-MUSIC) model. The observed redox ranges for As release and sulfate mobility were described by the model calculations.
Music therapy in pediatric palliative care: family-centered care to enhance quality of life.
Lindenfelser, Kathryn J; Hense, Cherry; McFerran, Katrina
2012-05-01
Research into the value of music therapy in pediatric palliative care (PPC) has identified quality of life as one area of improvement for families caring for a child in the terminal stages of a life-threatening illness. This small-scale investigation collected data in a multisite, international study including Minnesota, USA, and Melbourne, Australia. An exploratory mixed-method design used qualitative data collected through interviews with parents to interpret results from the PedsQL Family Impact Module on overall parental quality of life. Parents described music therapy as resulting in physical improvements in their child by providing comfort and stimulation. They also valued the positive experiences shared by the family in music therapy sessions that were strength oriented and family centered. These descriptions highlighted the physical and communication scales within the PedsQL Family Impact Module, where minimal improvements were achieved, in contrast to some strong results suggesting diminished quality of life in the cognitive and daily activity domains. Despite the significant challenges faced by parents during this difficult time, parents described many positive experiences in music therapy, and the overall score for half of the parents in the study did not diminish. The value of music therapy as a service that addresses the family-centered agenda of PPC is endorsed by this study.
Biphasic responses in multi-site phosphorylation systems.
Suwanmajo, Thapanar; Krishnan, J
2013-12-06
Multi-site phosphorylation systems are repeatedly encountered in cellular biology and multi-site modification is a basic building block of post-translational modification. In this paper, we demonstrate how distributive multi-site modification mechanisms by a single kinase/phosphatase pair can lead to biphasic/partial biphasic dose-response characteristics for the maximally phosphorylated substrate at steady state. We use simulations and analysis to uncover a hidden competing effect which is responsible for this and analyse how it may be accentuated. We build on this to analyse different variants of multi-site phosphorylation mechanisms showing that some mechanisms are intrinsically not capable of displaying this behaviour. This provides both a consolidated understanding of how and under what conditions biphasic responses are obtained in multi-site phosphorylation and a basis for discriminating between different mechanisms based on this. We also demonstrate how this behaviour may be combined with other behaviour such as threshold and bistable responses, demonstrating the capacity of multi-site phosphorylation systems to act as complex molecular signal processors.
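For readers who want to experiment with the mechanism, here is a hedged, self-contained mass-action sketch of distributive two-site phosphorylation by a single kinase/phosphatase pair. The rate constants and concentrations are placeholders, and whether the steady-state level of the fully phosphorylated form rises and then falls (biphasic) as total kinase increases depends on the parameter regime, as analysed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants: association, dissociation, catalysis.
a, d, k = 10.0, 1.0, 1.0

def rhs(t, y):
    S0, S1, S2, K, P, KS0, KS1, PS1, PS2 = y
    v1 = a*K*S0 - d*KS0            # K + S0 <-> KS0
    v2 = k*KS0                     # KS0 -> K + S1
    v3 = a*K*S1 - d*KS1            # K + S1 <-> KS1
    v4 = k*KS1                     # KS1 -> K + S2
    v5 = a*P*S2 - d*PS2            # P + S2 <-> PS2
    v6 = k*PS2                     # PS2 -> P + S1
    v7 = a*P*S1 - d*PS1            # P + S1 <-> PS1
    v8 = k*PS1                     # PS1 -> P + S0
    return [-v1 + v8,              # S0
            v2 + v6 - v3 - v7,     # S1
            v4 - v5,               # S2
            -v1 - v3 + v2 + v4,    # K (free)
            -v5 - v7 + v6 + v8,    # P (free)
            v1 - v2,               # KS0
            v3 - v4,               # KS1
            v7 - v8,               # PS1
            v5 - v6]               # PS2

S_tot, P_tot = 1.0, 0.5
for K_tot in [0.01, 0.1, 0.5, 1.0, 5.0, 20.0]:
    y0 = [S_tot, 0, 0, K_tot, P_tot, 0, 0, 0, 0]
    sol = solve_ivp(rhs, (0.0, 5000.0), y0, method="LSODA",
                    rtol=1e-8, atol=1e-10)
    S2_ss = sol.y[2, -1] + sol.y[8, -1]   # free + phosphatase-bound S2
    print(f"K_tot={K_tot:6.2f}  [S2]_ss={S2_ss:.4f}")
```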
NASA Astrophysics Data System (ADS)
Breinl, Korbinian; Di Baldassarre, Giuliano; Girons Lopez, Marc
2017-04-01
We assess uncertainties of multi-site rainfall generation across spatial scales and different climatic conditions. Many research subjects in earth sciences such as floods, droughts or water balance simulations require the generation of long rainfall time series. In large study areas the simulation at multiple sites becomes indispensable to account for the spatial rainfall variability, but becomes more complex compared to a single site due to the intermittent nature of rainfall. Weather generators can be used for extrapolating rainfall time series, and various models have been presented in the literature. Even though the large majority of multi-site rainfall generators is based on similar methods, such as resampling techniques or Markovian processes, they often become too complex. We think that this complexity has been a limit for the application of such tools. Furthermore, the majority of multi-site rainfall generators found in the literature are either not publicly available or intended for being applied at small geographical scales, often only in temperate climates. Here we present a revised, and now publicly available, version of a multi-site rainfall generation code first applied in 2014 in Austria and France, which we call TripleM (Multisite Markov Model). We test this fast and robust code with daily rainfall observations from the United States, in a subtropical, tropical and temperate climate, using rain gauge networks with a maximum site distance above 1,000 km, thereby generating one million years of synthetic time series. The modelling of these one million years takes one night on a recent desktop computer. In this research, we first start the simulations with a small station network of three sites and progressively increase the number of sites and the spatial extent, and analyze the changing uncertainties for multiple statistical metrics such as dry and wet spells, rainfall autocorrelation, lagged cross correlations and the inter-annual rainfall variability. Our study contributes to the scientific community of earth sciences and the ongoing debate on extreme precipitation in a changing climate by making a stable, and very easily applicable, multi-site rainfall generation code available to the research community and providing a better understanding of the performance of multi-site rainfall generation depending on spatial scales and climatic conditions.
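The sketch below is not the TripleM code; it only illustrates the single-site Markov-chain building block (a two-state occurrence chain plus gamma-distributed wet-day amounts) that such generators commonly rest on. A genuine multi-site generator would additionally correlate the random draws across stations; all parameter values here are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_daily_rainfall(n_days, p_wet_given_dry=0.25, p_wet_given_wet=0.6,
                            gamma_shape=0.8, gamma_scale=8.0):
    """Single-site daily rainfall (mm): first-order two-state Markov
    occurrence chain with gamma-distributed wet-day amounts.
    Parameters are illustrative placeholders."""
    rain = np.zeros(n_days)
    wet = False
    for i in range(n_days):
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        if wet:
            rain[i] = rng.gamma(gamma_shape, gamma_scale)
    return rain

series = simulate_daily_rainfall(365 * 100)   # 100 years of synthetic data
print("wet-day fraction:", (series > 0).mean(),
      "mean annual total (mm):", series.sum() / 100)
```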
Point of zero potential of single-crystal electrode/inert electrolyte interface.
Zarzycki, Piotr; Preočanin, Tajana
2012-03-15
Most environmentally important processes occur at specific hydrated mineral faces. Their rates and mechanisms are in part controlled by the interfacial electrostatics, which can be quantitatively described by the point of zero potential (PZP). Unfortunately, the PZP value of a specific crystal face is very difficult to determine experimentally. Here we show that the PZP can be extracted from a single-crystal electrode potentiometric titration, assuming stable electrochemical cell resistivity and no specific sorption of electrolyte ions. Our method is based on determining a common intersection point of the electrochemical cell electromotive force at various ionic strengths, and it is illustrated for a few selected surfaces of rutile, hematite, silver chloride, and bromide monocrystals. In the case of metal oxides, we observed higher PZP values than those theoretically predicted using the MultiSite Complexation Model (MUSIC), that is, 8.4 for (001) hematite (MUSIC-predicted ~6), 8.7 for (110) rutile (MUSIC-predicted ~6), and about 7 for (001) rutile (MUSIC-predicted 6.6). In the case of silver halides, the order of estimated PZP values (6.4 for AgCl < 6.5 for AgBr) agrees well with the sequence estimated from the silver halide solubility products; however, the halide anions (Cl-, Br-) are attracted toward the surface much more strongly than the Ag+ cations. The observed PZP sequence and the strong anion affinity toward the silver halide surface can be correlated with ion hydration energies. The presented approach is complementary to the hysteresis method reported previously [P. Zarzycki, S. Chatman, T. Preočanin, K.M. Rosso, Langmuir 27 (2011) 7986-7990]. A unique experimental characterization of specific crystal faces provided by these two methods is essential for a deeper understanding of environmentally important processes, including the migration of heavy and radioactive ions in soils and groundwaters. Copyright © 2012 Elsevier Inc. All rights reserved.
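As a hedged numerical illustration of the common-intersection-point idea (not the authors' data treatment), the sketch below interpolates EMF-versus-pH titration curves measured at several ionic strengths and averages their pairwise crossings. The synthetic demo curves and the function name are invented for illustration.

```python
import numpy as np
from itertools import combinations
from scipy.interpolate import interp1d
from scipy.optimize import brentq

def common_intersection_pH(curves, pH_lo, pH_hi):
    """Estimate the common intersection point (~PZP) of EMF-vs-pH curves.

    curves: list of (pH_array, emf_array) pairs measured at different ionic
    strengths; [pH_lo, pH_hi] must lie inside every curve's pH range.
    Returns the mean pH of all pairwise crossings found (illustrative only;
    real data need the drift/resistivity checks discussed in the paper).
    """
    splines = [interp1d(p, e, kind="cubic") for p, e in curves]
    crossings = []
    for f, g in combinations(splines, 2):
        diff = lambda x, f=f, g=g: float(f(x) - g(x))
        xs = np.linspace(pH_lo, pH_hi, 200)
        ys = np.array([diff(x) for x in xs])
        for i in np.where(np.diff(np.sign(ys)) != 0)[0]:
            crossings.append(brentq(diff, xs[i], xs[i + 1]))
    return float(np.mean(crossings)) if crossings else None

# Tiny synthetic demo: three straight "titration curves" crossing near pH 8.4.
pH = np.linspace(6.0, 10.0, 50)
demo = [(pH, slope * (pH - 8.4)) for slope in (-30.0, -55.0, -80.0)]
print(common_intersection_pH(demo, 6.5, 9.5))
```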
Is there hope for multi-site complexation modeling?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bickmore, Barry R.; Rosso, Kevin M.; Mitchell, S. C.
2006-06-06
It has been shown here that the standard formulation of the MUSIC model does not deliver the molecular-scale insight into oxide surface reactions that it promises. The model does not properly divide long-range electrostatic and short-range contributions to acid-base reaction energies, and it does not treat solvation in a physically realistic manner. However, even if the current MUSIC model does not succeed in its ambitions, its ambitions are still reasonable. It was a pioneering attempt in that Hiemstra and coworkers recognized that intrinsic equilibrium constants, where the effects of long-range electrostatics have been removed, must be theoretically constrained prior to model fitting if there is to be any hope of obtaining molecular-scale insights from SCMs. We have also shown, on the other hand, that it may be premature to dismiss all valence-based models of acidity. Not only can some such models accurately predict intrinsic acidity constants, but they can also now be linked to the results of molecular dynamics simulations of solvated systems. Significant challenges remain for those interested in creating SCMs that are accurate at the molecular scale. Only after all model parameters can be predicted from theory, and the models validated against titration data, will we be able to begin to have some confidence that we really are adequately describing the chemical systems in question.
Dreyfuss, Paul; Henning, Troy; Malladi, Niriksha; Goldstein, Barry; Bogduk, Nikolai
2009-01-01
To determine the physiologic effectiveness of multi-site, multi-depth sacral lateral branch injections. Double-blind, randomized, placebo-controlled study. Outpatient pain management center. Twenty asymptomatic volunteers. The dorsal innervation to the sacroiliac joint (SIJ) is from the L5 dorsal ramus and the S1-3 lateral branches. Multi-site, multi-depth lateral branch blocks were developed to compensate for the complex regional anatomy that limited the effectiveness of single-site, single-depth lateral branch injections. Bilateral multi-site, multi-depth lateral branch green dye injections and subsequent dissection on two cadavers revealed 91% accuracy with this technique. Session 1: 20 asymptomatic subjects had a 25-gauge spinal needle probe their interosseous (IO) and dorsal sacroiliac (DSI) ligaments. The inferior dorsal SIJ was entered and capsular distension with contrast medium was performed. Discomfort had to occur with each provocation maneuver and a contained arthrogram was necessary to continue in the study. Session 2: 1 week later, computer-randomized, double-blind multi-site, multi-depth lateral branch block injections were performed. Ten subjects received active (bupivacaine 0.75%) and 10 subjects received sham (normal saline) multi-site, multi-depth lateral branch injections. Thirty minutes later, provocation testing was repeated with methodology identical to that used in session 1. Outcome measures were the presence or absence of pain on ligamentous probing and SIJ capsular distension. Seventy percent of the active group had insensate IO and DSI ligaments and an insensate inferior dorsal SIJ, vs 0-10% of the sham group. Twenty percent of the active vs 10% of the sham group did not feel repeat capsular distension. Six of seven subjects (86%) retained the ability to feel repeat capsular distension despite an insensate dorsal SIJ complex. Multi-site, multi-depth lateral branch blocks are physiologically effective at a rate of 70%. Multi-site, multi-depth lateral branch blocks do not effectively block the intra-articular portion of the SIJ. There is physiological evidence that the intra-articular portion of the SIJ is innervated from both ventral and dorsal sources. Comparative multi-site, multi-depth lateral branch blocks should be considered a potentially valuable tool to diagnose extra-articular SIJ pain and to determine whether lateral branch radiofrequency neurotomy may assist patients with SIJ pain.
Cross-cultural differences in meter perception.
Kalender, Beste; Trehub, Sandra E; Schellenberg, E Glenn
2013-03-01
We examined the influence of incidental exposure to varied metrical patterns from different musical cultures on the perception of complex metrical structures from an unfamiliar musical culture. Adults who were familiar with Western music only (i.e., simple meters) and those who also had limited familiarity with non-Western music were tested on their perception of metrical organization in unfamiliar (Turkish) music with simple and complex meters. Adults who were familiar with Western music detected meter-violating changes in Turkish music with simple meter but not in Turkish music with complex meter. Adults with some exposure to non-Western music that was unmetered or metrically complex detected meter-violating changes in Turkish music with both simple and complex meters, but they performed better on patterns with a simple meter. The implication is that familiarity with varied metrical structures, including those with a non-isochronous tactus, enhances sensitivity to the metrical organization of unfamiliar music.
Progress in centralised ethics review processes: Implications for multi-site health evaluations.
Prosser, Brenton; Davey, Rachel; Gibson, Diane
2015-04-01
Increasingly, public sector programmes respond to complex social problems that intersect specific fields and individual disciplines. Such responses result in multi-site initiatives that can span nations, jurisdictions, sectors and organisations. The rigorous evaluation of public sector programmes is now a baseline expectation. For evaluations of large and complex multi-site programme initiatives, the processes of ethics review can present a significant challenge. However, in recent years there have been new developments in centralised ethics review processes in many nations. This paper provides a case study of an evaluation of a national, inter-jurisdictional, cross-sector aged care health initiative and its encounters with Australian centralised ethics review processes. Specifically, the paper considers progress against the key themes of a previous five-year, five-nation study (Fitzgerald and Phillips, 2006), which found that centralised ethics review processes would save time, money and effort, as well as contribute to more equitable workloads for researchers and evaluators. The paper concludes with insights for those charged with refining centralised ethics review processes, as well as recommendations for future evaluators of complex multi-site programme initiatives. Copyright © 2015 Elsevier Ltd. All rights reserved.
Do women prefer more complex music around ovulation?
Charlton, Benjamin D; Filippi, Piera; Fitch, W Tecumseh
2012-01-01
The evolutionary origins of music are much debated. One theory holds that the ability to produce complex musical sounds might reflect qualities that are relevant in mate choice contexts and hence, that music is functionally analogous to the sexually-selected acoustic displays of some animals. If so, women may be expected to show heightened preferences for more complex music when they are most fertile. Here, we used computer-generated musical pieces and ovulation predictor kits to test this hypothesis. Our results indicate that women prefer more complex music in general; however, we found no evidence that their preference for more complex music increased around ovulation. Consequently, our findings are not consistent with the hypothesis that a heightened preference/bias in women for more complex music around ovulation could have played a role in the evolution of music. We go on to suggest future studies that could further investigate whether sexual selection played a role in the evolution of this universal aspect of human culture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, C.; Meng, X.; Calvache, E.
2009-01-01
A nanocrystalline TiO2-based adsorbent was evaluated for the simultaneous removal of As(V), As(III), monomethylarsonic acid (MMA), and dimethylarsinic acid (DMA) in contaminated groundwater. Batch experimental results show that As adsorption followed pseudo-second-order rate kinetics. The competitive adsorption was described with the charge distribution multi-site surface complexation model (CD-MUSIC). The groundwater, containing an average of 329 μg L-1 As(III), 246 μg L-1 As(V), 151 μg L-1 MMA, and 202 μg L-1 DMA, was continuously passed through a TiO2 filter at an empty bed contact time of 6 min for 4 months. Approximately 11,000, 14,000, and 9,900 bed volumes of water had been treated before the As(III), As(V), and MMA concentrations in the effluent increased to 10 μg L-1. However, very little DMA was removed. The EXAFS results demonstrate the existence of a bidentate binuclear As(V) surface complex on the spent adsorbent, indicating the oxidation of adsorbed As(III). A nanocrystalline TiO2-based adsorbent could be used for the simultaneous removal of As(V), As(III), MMA, and DMA in contaminated groundwater.
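The pseudo-second-order rate law mentioned above is dq/dt = k(qe - q)^2, whose linearized form t/q = 1/(k qe^2) + t/qe allows qe and k to be estimated from batch data by a straight-line fit. The data points below are invented placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical batch kinetic data: time (min) and adsorbed amount q (mg/g).
t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)
q = np.array([1.8, 2.9, 4.1, 5.0, 5.4, 5.8, 6.0])

# Pseudo-second-order: t/q = 1/(k*qe**2) + t/qe  -> linear in t.
slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope                 # equilibrium capacity (mg/g)
k = 1.0 / (intercept * qe**2)    # rate constant (g mg^-1 min^-1)
print(f"qe ~ {qe:.2f} mg/g, k ~ {k:.4f} g/(mg*min)")
```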
Madison, Guy; Schiölde, Gunilla
2017-01-01
Psychological and aesthetic theories predict that music is appreciated at optimal, peak levels of familiarity and complexity, and that appreciation of music exhibits an inverted U-shaped relationship with familiarity as well as complexity. Because increased familiarity conceivably leads to improved processing and less perceived complexity, we test whether there is an interaction between familiarity and complexity. Specifically, increased familiarity should render the music subjectively less complex, and therefore move the apex of the U curve toward greater complexity. A naturalistic listening experiment was conducted, featuring 40 music examples (ME) divided by experts into 4 levels of complexity prior to the main experiment. The MEs were presented 28 times each across a period of approximately 4 weeks, and individual ratings were assessed throughout the experiment. Ratings of liking increased monotonically with repeated listening at all levels of complexity; both the simplest and the most complex MEs were liked more as a function of listening time, without any indication of a U-shaped relation. Although the MEs were previously unknown to the participants, the strongest predictor of liking was familiarity in terms of having listened to similar music before, i.e., familiarity with musical style. We conclude that familiarity is the single most important variable for explaining differences in liking among music, regardless of the complexity of the music. PMID:28408864
NASA Astrophysics Data System (ADS)
Pease, April; Mahmoodi, Korosh; West, Bruce J.
2018-03-01
We present a technique to search for the presence of crucial events in music, based on the analysis of the music volume. Earlier work on this issue was based on the assumption that crucial events correspond to the change of music notes, with the interesting result that the complexity index of the crucial events is μ ≈ 2, which is the same as the inverse power-law index of the dynamics of the brain. The search technique analyzes music volume and confirms the results of the earlier work, thereby contributing to the explanation as to why the brain is sensitive to music, through the phenomenon of complexity matching. Complexity matching has recently been interpreted as the transfer of multifractality from one complex network to another. For this reason we also examine the multifractality of music, with the observation that the multifractal spectrum of a computer performance is significantly narrower than the multifractal spectrum of a human performance of the same musical score. We conjecture that although crucial events are demonstrably important for information transmission, they alone are not sufficient to define musicality, which is more adequately measured by the multifractal spectrum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bickmore, Barry R.; Rosso, Kevin M.; Tadanier, Christopher J.
2006-08-15
In a previous contribution, we outlined a method for predicting (hydr)oxy-acid and oxide surface acidity constants based on three main factors: bond valence, Me-O bond ionicity, and molecular shape. Here electrostatics calculations and ab initio molecular dynamics simulations are used to show qualitatively that Me-O bond ionicity controls the extent to which the electrostatic work of proton removal departs from ideality, bond valence controls the extent of solvation of individual functional groups, and bond valence and molecular shape control the local dielectric response. These results are consistent with our model of acidity, but completely at odds with other methods of predicting acidity constants for use in multisite complexation models. In particular, our ab initio molecular dynamics simulations of solvated monomers clearly indicate that hydrogen bonding between (hydr)oxo-groups and water molecules adjusts to obey the valence sum rule, rather than maintaining a fixed valence based on the coordination of the oxygen atom as predicted by the standard MUSIC model.
Clark, Imogen N; Baker, Felicity A; Peiris, Casey L; Shoebridge, Georgie; Taylor, Nicholas F
2017-03-01
To evaluate effects of participant-selected music on older adults' achievement of activity levels recommended in the physical activity guidelines following cardiac rehabilitation. A parallel group randomized controlled trial with measurements at Weeks 0, 6 and 26. A multisite outpatient rehabilitation programme of a publicly funded metropolitan health service. Adults aged 60 years and older who had completed a cardiac rehabilitation programme. Experimental participants selected music to support walking with guidance from a music therapist. Control participants received usual care only. The primary outcome was the proportion of participants achieving activity levels recommended in physical activity guidelines. Secondary outcomes compared amounts of physical activity, exercise capacity, cardiac risk factors, and exercise self-efficacy. A total of 56 participants, mean age 68.2 years (SD = 6.5), were randomized to the experimental (n = 28) and control (n = 28) groups. There were no differences between groups in proportions of participants achieving activity recommended in physical activity guidelines at Week 6 or 26. Secondary outcomes demonstrated between-group differences in male waist circumference at both measurements (Week 6 difference -2.0 cm, 95% CI -4.0 to 0; Week 26 difference -2.8 cm, 95% CI -5.4 to -0.1), and observed effect sizes favoured the experimental group for amounts of physical activity (d = 0.30), exercise capacity (d = 0.48), and blood pressure (d = -0.32). Participant-selected music did not increase the proportion of participants achieving recommended amounts of physical activity, but may have contributed to exercise-related benefits.
On the genre-fication of music: a percolation approach
NASA Astrophysics Data System (ADS)
Lambiotte, R.; Ausloos, M.
2006-03-01
We analyze web-downloaded data on people sharing their music library. By attributing to each music group usual music genres (Rock, Pop ...), and analysing correlations between music groups of different genres with percolation-idea based methods, we probe the reality of these subdivisions and construct a music genre cartography, with a tree representation. We also discuss an alternative objective way to classify music, that is based on the complex structure of the groups audience. Finally, a link is drawn with the theory of hidden variables in complex networks.
Joint attention responses of children with autism spectrum disorder to simple versus complex music.
Kalas, Amy
2012-01-01
Joint attention deficits are viewed as one of the earliest manifestations and most characteristic features of the social deficits in Autism Spectrum Disorder (ASD). The purpose of this study was to examine the effect of simple versus complex music on joint attention of children with ASD. Thirty children with a diagnosis of ASD participated in this study. Fifteen of the participants were diagnosed with severe ASD and 15 were diagnosed with mild/moderate ASD. Each participant took part in six, 10-minute individual music conditions (3 simple & 3 complex) over a 3-week period. Each condition was designed to elicit responses to joint attention. Results indicated a statistically significant interaction between music modality and functioning level. Therefore, the effect of simple versus complex music was dependent on functioning level. Specifically, the Simple Music Condition was more effective in eliciting Responses to Joint Attention (RJA) for children diagnosed with severe ASD, whereas the Complex Music Condition was more effective in eliciting RJA for children diagnosed with mild/moderate ASD. The results of the present study indicate that for children in the severe range of functioning, music that is simple, with clear and predictable patterns, may be most effective in eliciting responses to bids for joint attention. On the contrary, for children in the mild/moderate range of functioning, music that is more complex and variable may be most effective in eliciting responses to bids for joint attention. These results demonstrate that careful manipulation of specific musical elements can help provide the optimal conditions for facilitating joint attention with children with ASD.
The effect of music reinforcement for non-nutritive sucking on nipple feeding of premature infants.
Standley, Jayne M; Cassidy, Jane; Grant, Roy; Cevasco, Andrea; Szuch, Catherine; Nguyen, Judy; Walworth, Darcy; Procelli, Danielle; Jarred, Jennifer; Adams, Kristen
2010-01-01
In this randomized, controlled multi-site study, the pacifier-activated-lullaby system (PAL) was used with 68 premature infants. Dependent variables were (a) total number of days prior to nipple feeding, (b) days of nipple feeding, (c) discharge weight, and (d) overall weight gain. Independent variables included contingent music reinforcement for non-nutritive sucking for PAL intervention at 32 vs. 34 vs. 36 weeks adjusted gestational age (AGA), with each age group subdivided into three trial conditions: control consisting of no PAL used vs. one 15-minute PAL trial vs. three 15-minute PAL trials. At 34 weeks, PAL trials significantly shortened gavage feeding length, and three trials were significantly better than one trial. At 32 weeks, PAL trials lengthened gavage feeding. Female infants learned to nipple feed significantly faster than male infants. It was noted that PAL babies went home sooner after beginning to nipple feed, a trend that was not statistically significant.
Riganello, Francesco; Cortese, Maria D; Arcuri, Francesco; Quintieri, Maria; Dolce, Giuliano
2015-01-01
Activations to pleasant and unpleasant musical stimuli were observed within an extensive neuronal network and different brain structures, as well as in the processing of the syntactic and semantic aspects of the music. Previous studies evidenced a correlation between autonomic activity and emotion evoked by music listening in patients with Disorders of Consciousness (DoC). In this study, we retrospectively analyzed the autonomic response to musical stimuli by means of two Heart Rate Variability (HRV) parameters, normalized units of Low Frequency (nuLF) and Sample Entropy (SampEn), and their possible correlation with the different complexity of four musical samples (i.e., Mussorgsky, Tchaikovsky, Grieg, and Boccherini) in healthy subjects and Vegetative State/Unresponsive Wakefulness Syndrome (VS/UWS) patients. The complexity of the musical samples was based on the Formal Complexity and General Dynamics parameters defined by Imberty's semiology studies. The results showed a significant difference between the two groups for SampEn while listening to Mussorgsky's music and for nuLF while listening to Boccherini's and Mussorgsky's music. Moreover, the VS/UWS group showed a reduction of nuLF as well as SampEn when comparing music of increasing Formal Complexity and General Dynamics. These results highlight how the internal structure of the music can change the autonomic response in patients with DoC. Further investigations are required to better comprehend how musical stimulation can modify the autonomic response in DoC patients, in order to administer the stimuli in a more effective way.
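As a hedged illustration of one of the HRV measures used above, the sketch below implements a plain Sample Entropy calculation (SampEn(m, r) with r set to a fraction of the series standard deviation). It is a generic textbook-style implementation applied to synthetic RR intervals, not the analysis pipeline of the study.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series (e.g. RR intervals).

    r is taken as r_factor * std(x), the common convention. Straightforward
    O(N^2) implementation for illustration, not an optimized HRV toolbox.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1     # exclude the self-match
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Demo on synthetic RR intervals (ms); real use would take beat-to-beat data.
rng = np.random.default_rng(0)
rr = 800 + 50 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 20, 300)
print(sample_entropy(rr))
```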
Using Graphical Notations to Assess Children's Experiencing of Simple and Complex Musical Fragments
ERIC Educational Resources Information Center
Verschaffel, Lieven; Reybrouck, Mark; Janssens, Marjan; Van Dooren, Wim
2010-01-01
The aim of this study was to analyze children's graphical notations as external representations of their experiencing when listening to simple sonic stimuli and complex musical fragments. More specifically, we assessed the impact of four factors on children's notations: age, musical background, complexity of the fragment, and most salient…
A Stereo Music Preprocessing Scheme for Cochlear Implant Users.
Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc
2015-10-01
Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
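A loose sketch of the general ingredients described above (mid/side decomposition of a stereo recording combined with harmonic-percussive separation), assuming the librosa and soundfile packages are available; this is not the authors' preprocessing scheme, and the function name and gain values are arbitrary.

```python
import numpy as np
import librosa
import soundfile as sf

def emphasize_vocals_drums(path_in, path_out, gain_mid=1.5, gain_perc=1.5):
    """Crude stereo remix: boost the mid channel (where vocals usually sit)
    and its percussive component (drums). Assumes a stereo input file."""
    y, sr = librosa.load(path_in, sr=None, mono=False)   # shape (2, n)
    left, right = y[0], y[1]
    mid, side = 0.5 * (left + right), 0.5 * (left - right)

    # Harmonic-percussive separation of the mid channel.
    harm, perc = librosa.effects.hpss(mid)
    mid_new = gain_mid * harm + gain_perc * perc

    out = np.stack([mid_new + side, mid_new - side], axis=-1)
    out /= max(1.0, np.max(np.abs(out)))                  # avoid clipping
    sf.write(path_out, out, sr)

# emphasize_vocals_drums("song.wav", "song_remixed.wav")
```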
Robb, Sheri L; Clair, Alicia A; Watanabe, Masayo; Monahan, Patrick O; Azzouz, Faouzi; Stouffer, Janice W; Ebberts, Allison; Darsie, Emily; Whitmer, Courtney; Walker, Joey; Nelson, Kirsten; Hanson-Abromeit, Deanna; Lane, Deforia; Hannan, Ann
2008-07-01
Coping theorists argue that environmental factors affect how children perceive and respond to stressful events such as cancer. However, few studies have investigated how particular interventions can change coping behaviors. The active music engagement (AME) intervention was designed to counter stressful qualities of the in-patient hospital environment by introducing three forms of environmental support. The purpose of this multi-site randomized controlled trial was to determine the efficacy of the AME intervention on three coping-related behaviors (i.e. positive facial affect, active engagement, and initiation). Eighty-three participants, ages 4-7, were randomly assigned to one of three conditions: AME (n = 27), music listening (ML; n = 28), or audio storybooks (ASB; n = 28). Conditions were videotaped to facilitate behavioral data collection using time-sampling procedures. After adjusting for baseline differences, repeated measure analyses indicated that AME participants had a significantly higher frequency of coping-related behaviors compared with ML or ASB. Positive facial affect and active engagement were significantly higher during AME compared with ML and ASB (p<0.0001). Initiation was significantly higher during AME than ASB (p<0.05). This study supports the use of the AME intervention to encourage coping-related behaviors in hospitalized children aged 4-7 receiving cancer treatment. (c) 2007 John Wiley & Sons, Ltd.
Carpentier, Sarah M.; Moreno, Sylvain; McIntosh, Anthony R.
2016-01-01
Musical training is frequently associated with benefits to linguistic abilities, and recent focus has been placed on possible benefits of bilingualism to lifelong executive functions; however, the neural mechanisms for such effects are unclear. The aim of this study was to gain better understanding of the whole-brain functional effects of music and second-language training that could support such previously observed cognitive transfer effects. We conducted a 28-day longitudinal study of monolingual English-speaking 4- to 6-year-old children randomly selected to receive daily music or French language training, excluding weekends. Children completed passive EEG music note and French vowel auditory oddball detection tasks before and after training. Brain signal complexity was measured on source waveforms at multiple temporal scales as an index of neural information processing and network communication load. Comparing pretraining with posttraining, musical training was associated with increased EEG complexity at coarse temporal scales during the music and French vowel tasks in widely distributed cortical regions. Conversely, very minimal decreases in complexity at fine scales and trends toward coarse-scale increases were displayed after French training during the tasks. Spectral analysis failed to distinguish between training types and found overall theta (3.5–7.5 Hz) power increases after all training forms, with spatially fewer decreases in power at higher frequencies (>10 Hz). These findings demonstrate that musical training increased diversity of brain network states to support domain-specific music skill acquisition and music-to-language transfer effects. PMID:27243611
Kello, Christopher T; Bella, Simone Dalla; Médé, Butovens; Balasubramaniam, Ramesh
2017-10-01
Humans talk, sing and play music. Some species of birds and whales sing long and complex songs. All these behaviours and sounds exhibit hierarchical structure: syllables and notes are positioned within words and musical phrases, words and motives within sentences and musical phrases, and so on. We developed a new method to measure and compare hierarchical temporal structures in speech, song and music. The method identifies temporal events as peaks in the sound amplitude envelope, and quantifies event clustering across a range of timescales using Allan factor (AF) variance. AF variances were analysed and compared for over 200 different recordings from more than 16 different categories of signals, including recordings of speech in different contexts and languages, and musical compositions and performances from different genres. Non-human vocalizations from two bird species and two types of marine mammals were also analysed for comparison. The resulting patterns of AF variance across timescales were distinct for each of four natural categories of complex sound: speech, popular music, classical music and complex animal vocalizations. Comparisons within and across categories indicated that nested clustering at longer timescales was more prominent when prosodic variation was greater, and when sounds came from interactions among individuals, including interactions between speakers, musicians, and even killer whales. Nested clustering also was more prominent for music compared with speech, and reflected beat structure for popular music and self-similarity across timescales for classical music. In summary, hierarchical temporal structures reflect the behavioural and social processes underlying complex vocalizations and musical performances. © 2017 The Author(s).
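A hedged sketch of the Allan factor computation described above: events (here plain time stamps; in the paper, peaks of the amplitude envelope) are counted in adjacent windows of width T, and A(T) = E[(N_{k+1} - N_k)^2] / (2 E[N_k]). The Poisson demo is only a sanity check, since a Poisson process gives A(T) ≈ 1 at all timescales.

```python
import numpy as np

def allan_factor(event_times, T):
    """Allan factor A(T) of a point process at counting window T (seconds).

    N_k is the number of events in the k-th window of width T;
    A(T) = mean((N_{k+1} - N_k)^2) / (2 * mean(N_k)).
    """
    t_max = event_times.max()
    edges = np.arange(0.0, t_max + T, T)
    counts, _ = np.histogram(event_times, bins=edges)
    diffs = np.diff(counts)
    return np.mean(diffs ** 2) / (2.0 * np.mean(counts))

# Demo: a Poisson process should give A(T) ~ 1 at all timescales.
rng = np.random.default_rng(1)
events = np.cumsum(rng.exponential(0.05, size=20000))   # ~20 events/s
for T in (0.05, 0.2, 1.0, 5.0):
    print(T, round(allan_factor(events, T), 2))
```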
NASA Astrophysics Data System (ADS)
Ridley, Moira K.; Hiemstra, Tjisse; van Riemsdijk, Willem H.; Machesky, Michael L.
2009-04-01
Acid-base reactivity and ion-interaction between mineral surfaces and aqueous solutions is most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile (110) surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic properties. Langmuir 20, 4954-4969]. Our CD modeling results are consistent with these adsorbed configurations provided adsorbed cation charge is allowed to be distributed between the surface (0-plane) and Stern plane (1-plane). Additionally, a complete description of our titration data required inclusion of outer-sphere binding, principally for Cl-, which was common to all solutions, but also for Rb+ and K+. These outer-sphere species were treated as point charges positioned at the Stern layer, and hence determined the Stern layer capacitance value. The modeling results demonstrate that a multi-component suite of experimental data can be successfully rationalized within a CD and MUSIC model using a Stern-based description of the EDL. Furthermore, the fitted CD values of the various inner-sphere complexes of the mono- and divalent ions can be linked to the microscopic structure of the surface complexes and other data found by spectroscopy as well as molecular dynamics (MD). For the Na+ ion, the fitted CD value points to the presence of bidentate inner-sphere complexation as suggested by a recent MD study. Moreover, its MD dominance quantitatively agrees with the CD model prediction. For Rb+, the presence of a tetradentate complex, as found by spectroscopy, agreed well with the fitted CD and its predicted presence was quantitatively in very good agreement with the amount found by spectroscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridley, Moira K.; Hiemstra, T.; Van Riemsdijk, Willem H.
Acid-base reactivity and ion-interaction between mineral surfaces and aqueous solutions is most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile 110 surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic properties. Langmuir 20, 4954-4969]. Our CD modeling results are consistent with these adsorbed configurations provided adsorbed cation charge is allowed to be distributed between the surface (0-plane) and Stern plane (1-plane). Additionally, a complete description of our titration data required inclusion of outer-sphere binding, principally for Cl-, which was common to all solutions, but also for Rb+ and K+. These outer-sphere species were treated as point charges positioned at the Stern layer, and hence determined the Stern layer capacitance value. The modeling results demonstrate that a multi-component suite of experimental data can be successfully rationalized within a CD and MUSIC model using a Stern-based description of the EDL. Furthermore, the fitted CD values of the various inner-sphere complexes of the mono- and divalent ions can be linked to the microscopic structure of the surface complexes and other data found by spectroscopy as well as molecular dynamics (MD). For the Na+ ion, the fitted CD value points to the presence of bidentate inner-sphere complexation, as suggested by a recent MD study. Moreover, its MD dominance quantitatively agrees with the CD model prediction. For Rb+, the presence of a tetradentate complex, as found by spectroscopy, agreed well with the fitted CD, and its predicted presence was quantitatively in very good agreement with the amount found by spectroscopy.
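For orientation only, here is a minimal sketch of the Basic Stern and charge distribution (CD) bookkeeping that this class of models rests on. The sign convention, the symmetric 1:1-electrolyte diffuse-layer expression, and the symbols are illustrative simplifications, not the authors' full parameterization.

```latex
% Charge neutrality across the 0-plane, 1-plane (Stern) and diffuse layer:
\sigma_0 + \sigma_1 + \sigma_d = 0
% Basic Stern layer: a single capacitance C links the 0- and 1-plane potentials:
\sigma_0 = C\,(\psi_0 - \psi_1)
% Charge distribution: an inner-sphere ion of charge z places a fraction f of its
% charge in the 0-plane and the remainder in the 1-plane:
\Delta z_0 = f\,z, \qquad \Delta z_1 = (1 - f)\,z
% Mass action for forming a surface complex, with a Boltzmann factor per plane:
\frac{[\equiv\!\mathrm{S}\,\mathrm{M}]}{[\equiv\!\mathrm{S}]\,[\mathrm{M}^{z+}]}
  = K \exp\!\left(-\frac{F\,(\Delta z_0\,\psi_0 + \Delta z_1\,\psi_1)}{RT}\right)
% Gouy-Chapman closure for the diffuse-layer charge (1:1 electrolyte, c in mol m^{-3}):
\sigma_d = -\sqrt{8\,\varepsilon_0\varepsilon_r R T\,c}\;
           \sinh\!\left(\frac{F\,\psi_d}{2RT}\right)
```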
Perception of Western Musical Modes: A Chinese Study.
Fang, Lele; Shang, Junchen; Chen, Nan
2017-01-01
The major mode conveys positive emotion, whereas the minor mode conveys negative emotion. However, previous studies have primarily focused on the emotions induced by Western music in Western participants. The influence of the musical mode (major or minor) on Chinese individuals' perception of Western music is unclear. In the present experiments, we investigated the effects of musical mode and harmonic complexity on psychological perception among Chinese participants. In Experiment 1, the participants (N = 30) evaluated 24 musical excerpts in five dimensions (pleasure, arousal, dominance, emotional tension, and liking). In Experiment 2, the participants (N = 40) evaluated 48 musical excerpts. Perceptions of the musical excerpts differed significantly according to mode, even though the stimuli were Western musical excerpts. The major-mode music induced greater pleasure and arousal and produced higher liking ratings than the minor-mode music, whereas the minor-mode music induced greater tension than the major-mode music. Mode did not influence the dominance rating. Perception of Western music was not influenced by harmonic complexity. Moreover, preference for musical mode was influenced by previous exposure to Western music. These results confirm the cross-cultural emotion induction effects of musical modes in Western music.
Kohlberg, Gavriel D; Mancuso, Dean M; Chari, Divya A; Lalwani, Anil K
2015-01-01
Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Compared to the original song, modified versions containing only 1-3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience.
Intelligibility in microbial complex systems: Wittgenstein and the score of life.
Baquero, Fernando; Moya, Andrés
2012-01-01
Knowledge in microbiology is reaching an extreme level of diversification and complexity, which paradoxically results in a strong reduction in the intelligibility of microbial life. In our days, the "score of life" metaphor is more accurate to express the complexity of living systems than the classic "book of life." Music and life can be represented at lower hierarchical levels by music scores and genomic sequences, and such representations have a generational influence in the reproduction of music and life. If music can be considered as a representation of life, such representation remains as unthinkable as life itself. The analysis of scores and genomic sequences might provide mechanistic, phylogenetic, and evolutionary insights into music and life, but not about their real dynamics and nature, which is still maintained unthinkable, as was proposed by Wittgenstein. As complex systems, life or music is composed by thinkable and only showable parts, and a strategy of half-thinking, half-seeing is needed to expand knowledge. Complex models for complex systems, based on experiences on trans-hierarchical integrations, should be developed in order to provide a mixture of legibility and imageability of biological processes, which should lead to higher levels of intelligibility of microbial life.
Intelligibility in microbial complex systems: Wittgenstein and the score of life
Baquero, Fernando; Moya, Andrés
2012-01-01
Knowledge in microbiology is reaching an extreme level of diversification and complexity, which paradoxically results in a strong reduction in the intelligibility of microbial life. In our days, the “score of life” metaphor is more accurate to express the complexity of living systems than the classic “book of life.” Music and life can be represented at lower hierarchical levels by music scores and genomic sequences, and such representations have a generational influence in the reproduction of music and life. If music can be considered as a representation of life, such representation remains as unthinkable as life itself. The analysis of scores and genomic sequences might provide mechanistic, phylogenetic, and evolutionary insights into music and life, but not about their real dynamics and nature, which is still maintained unthinkable, as was proposed by Wittgenstein. As complex systems, life or music is composed by thinkable and only showable parts, and a strategy of half-thinking, half-seeing is needed to expand knowledge. Complex models for complex systems, based on experiences on trans-hierarchical integrations, should be developed in order to provide a mixture of legibility and imageability of biological processes, which should lead to higher levels of intelligibility of microbial life. PMID:22919679
Planning and conducting a multi-institutional project on fatigue.
Nail, L M; Barsevick, A M; Meek, P M; Beck, S L; Jones, L S; Walker, B L; Whitmer, K R; Schwartz, A L; Stephen, S; King, M E
1998-09-01
To describe the process used in proposal development and study implementation for a complex multisite project on cancer treatment-related fatigue (CRF), identify strategies used to manage the project, and provide recommendations for teams planning multisite research. Information derived from project team meeting records, correspondence, proposals, and personal recollection. The project was built on preexisting relationships among the three site investigators who then built a team including faculty, research coordinators, staff nurses, and students. Study sites had a range of organizational models, and the proposal was designed to capitalize on the organizational and resource strengths of each setting. Three team members drawn from outside oncology nursing provided expertise in measurement and experience with fatigue in other populations. Planning meetings were critical to the success of the project. Conference calls, fax technology, and electronic mail were used for communication. Flexibility was important in managing crises and shifting responsibility for specific components of the work. The team documented and evaluated the process used for multisite research, completed a major instrumentation study, and developed a cognitive-behavioral intervention for CRF. Accomplishments during the one-year planning grant exceeded initial expectations. The process of conducting multisite research is complex, especially when the starting point is a planning grant with specific research protocols to be developed and implemented over one year. Explicit planning for decision-making processes to be used throughout the project, acknowledging the differences among the study settings and planning the protocols to capitalize upon those differences, and recruiting a strong research team that included a member with planning grant and team-building expertise were essential elements for success. Specific recommendations for others planning multisite research are related to team-building, team membership, communication, behavioral norms, role flexibility, resources, feedback, problem management, and shared recognition.
Chlan, Linda; Guttormson, Jill; Tracy, Mary Fran; Bremer, Karin Lindstrom
2009-01-01
Although enrolling a sufficient number of participants is a challenge for any multisite clinical trial, recruiting patients who are critically ill and receiving mechanical ventilatory support presents additional challenges because of the severity of the patients’ illness and the impediments to their communication. Recruitment challenges related to the research sites, nursing staff, and research participants faced in the first 2 years of a 4-year multisite clinical trial of a patient-directed music intervention for managing anxiety in the intensive care unit were determined. Strategies to overcome these challenges, and thereby increase enrollment, were devised. Individual strategies, such as timing of screening on a unit, were tailored to each participating site to enhance recruitment for this trial. Other strategies, such as obtaining a waiver for a participant’s signature, were instituted across all participating sites. Through implementation of these various strategies, the mean monthly enrollment of participants increased by 50%. Investigators are advised to plan well in advance of starting recruitment for a clinical trial based in an intensive care unit, anticipate peaks and valleys in recruitment, and be proactive in addressing issues creatively as the issues arise. PMID:19723861
Music-induced emotions can be predicted from a combination of brain activity and acoustic features.
Daly, Ian; Williams, Duncan; Hallowell, James; Hwang, Faustina; Kirke, Alexis; Malik, Asad; Weaver, James; Miranda, Eduardo; Nasuto, Slawomir J
2015-12-01
It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted by their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p < 0.01). Copyright © 2015 Elsevier Inc. All rights reserved.
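Since the abstract describes a regression that combines EEG-derived and acoustic features, here is a minimal, hypothetical sketch of that kind of pipeline. The feature names, dimensions, and the Ridge/cross-validation choices are assumptions for illustration, not the authors' exact method, and the data are random placeholders.

```python
# Hedged sketch: predict a continuous emotion rating from EEG plus acoustic features.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
n_trials = 120

eeg_features = rng.normal(size=(n_trials, 32))       # e.g. band power per channel (assumed)
acoustic_features = rng.normal(size=(n_trials, 10))  # e.g. tempo, spectral centroid (assumed)
ratings = rng.normal(size=n_trials)                  # reported valence per trial (placeholder)

def fit_and_score(X, y):
    # Cross-validated predictions, then the correlation between actual and predicted ratings.
    predictions = cross_val_predict(Ridge(alpha=1.0), X, y, cv=10)
    return pearsonr(y, predictions)

for name, X in [("EEG only", eeg_features),
                ("acoustics only", acoustic_features),
                ("combined", np.hstack([eeg_features, acoustic_features]))]:
    r, p = fit_and_score(X, ratings)
    print(f"{name}: r = {r:.3f}, p = {p:.3g}")  # near zero here, since the data are random
```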
Complex network approach to classifying classical piano compositions
NASA Astrophysics Data System (ADS)
Xin, Chen; Zhang, Huishu; Huang, Jiping
2016-10-01
Complex networks have been regarded as a useful tool for handling systems with vague interactions, and numerous applications have arisen. In this paper we construct complex networks for 770 classical piano compositions by Mozart, Beethoven and Chopin based on musical note pitches and lengths. We find prominent distinctions among the network edges of the different composers. Some stylized facts can be explained by such parameters of network structure and topology. Further, we propose two classification methods for music styles and genres based on the discovered distinctions. These methods are easy to implement and give sound results. This work suggests that complex networks could be a useful way to analyze the characteristics of musical notes, since they provide insight into the relationships among notes in musical compositions and evidence for classifying different composers, styles and genres of music.
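A minimal sketch of the general construction (nodes as pitch/duration pairs, weighted edges counting transitions between consecutive notes, then basic network statistics) is given below. The toy note sequence and the particular statistics printed are illustrative, not the paper's corpus or full feature set.

```python
# Hedged sketch: build a note-transition network and read off simple statistics.
import networkx as nx

# A toy melody encoded as (MIDI pitch, duration in beats) pairs.
notes = [(60, 1.0), (62, 0.5), (64, 0.5), (60, 1.0),
         (67, 1.0), (64, 0.5), (62, 0.5), (60, 2.0)]

G = nx.DiGraph()
for a, b in zip(notes[:-1], notes[1:]):
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1          # count repeated transitions as edge weight
    else:
        G.add_edge(a, b, weight=1)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("mean degree:", sum(d for _, d in G.degree()) / G.number_of_nodes())
print("clustering coefficient:", nx.average_clustering(G.to_undirected()))
# On a real corpus one would also examine the degree distribution and mean geodesic
# distance, which are among the quantities compared across composers in the paper.
```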
Menstrual cycle phase alters women's sexual preferences for composers of more complex music.
Charlton, Benjamin D
2014-06-07
Over 140 years ago Charles Darwin first argued that birdsong and human music, having no clear survival benefit, were obvious candidates for sexual selection. Whereas the first contention is now universally accepted, his theory that music is a product of sexual selection through mate choice has largely been neglected. Here, I provide the first, to my knowledge, empirical support for the sexual selection hypothesis of music evolution by showing that women have sexual preferences during peak conception times for men who are able to create more complex music. Two-alternative forced-choice experiments revealed that women only preferred composers of more complex music as short-term sexual partners when conception risk was highest. No preferences were displayed when women chose which composer they would prefer as a long-term partner in a committed relationship, and control experiments failed to reveal an effect of conception risk on women's preferences for visual artists. These results suggest that women may acquire genetic benefits for offspring by selecting musicians able to create more complex music as sexual partners, and provide compelling support for Darwin's assertion 'that musical notes and rhythm were first acquired by the male or female progenitors of mankind for the sake of charming the opposite sex'.
Kohlberg, Gavriel D.; Mancuso, Dean M.; Chari, Divya A.; Lalwani, Anil K.
2015-01-01
Objective. Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Methods. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Results. Compared to the original song, modified versions containing only 1–3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Conclusions. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience. PMID:26543322
NASA Astrophysics Data System (ADS)
Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng
2015-10-01
The growing use of composite materials in aircraft structures has attracted much attention to impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and the easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves are usually wide-band, which makes it difficult to obtain the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircraft further accentuates this anisotropy, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. To improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. A total of 41 impacts in three typical categories are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impacts on complex composite structures with clearly improved accuracy.
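As background for the MUSIC step itself, below is a generic narrowband MUSIC direction-finding sketch for a uniform linear array. The sensor spacing, phase velocity, and analysis frequency are assumed values, and the sketch illustrates only the noise-subspace pseudospectrum scan, not the authors' SFCBR-MUSIC algorithm or their single-frequency extraction.

```python
# Hedged sketch: narrowband MUSIC pseudospectrum for a uniform linear array.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, d, c, f = 8, 0.01, 2000.0, 50e3   # spacing [m], assumed phase velocity [m/s], frequency [Hz]
true_angle = np.deg2rad(35.0)
n_snapshots = 200

k = 2 * np.pi * f / c                         # wavenumber at the analysis frequency
positions = np.arange(n_sensors) * d

def steering(theta):
    return np.exp(-1j * k * positions * np.sin(theta))

# Simulated narrowband snapshots: one source plus additive noise.
s = rng.normal(size=n_snapshots) + 1j * rng.normal(size=n_snapshots)
X = np.outer(steering(true_angle), s)
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

R = X @ X.conj().T / n_snapshots              # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-1]                          # noise subspace (assume a single source)

angles = np.deg2rad(np.linspace(-90, 90, 721))
spectrum = [1.0 / np.abs(steering(t).conj() @ En @ En.conj().T @ steering(t)) for t in angles]
print("estimated direction [deg]:", np.rad2deg(angles[int(np.argmax(spectrum))]))
```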
When music is salty: The crossmodal associations between sound and taste.
Guetta, Rachel; Loui, Psyche
2017-01-01
Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population.
Composing Music with Complex Networks
NASA Astrophysics Data System (ADS)
Liu, Xiaofan; Tse, Chi K.; Small, Michael
In this paper we study the network structure in music and attempt to compose music artificially. Networks are constructed with nodes and edges corresponding to musical notes and their co-occurrences. We analyze sample compositions from Bach, Mozart, Chopin, as well as other types of music including Chinese pop music. We observe remarkably similar properties in all networks constructed from the selected compositions. Power-law exponents of degree distributions, mean degrees, clustering coefficients, mean geodesic distances, etc. are reported. With the network constructed, music can be created by using a biased random walk algorithm, which begins with a randomly chosen note and selects the subsequent notes according to a simple set of rules that compares the weights of the edges, weights of the nodes, and/or the degrees of nodes. The newly created music from complex networks will be played in the presentation.
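A toy version of that composition step, assuming the note network has already been built, might look like the following. The transition table and the bias rule (sampling proportional to edge weights) are illustrative choices, since the paper describes several rule variants based on edge weights, node weights, and node degrees.

```python
# Hedged sketch: compose a melody by a biased random walk on a note co-occurrence network.
import random

# Toy weighted transition structure: note -> {successor: co-occurrence count}
transitions = {
    "C4": {"D4": 3, "E4": 2, "G4": 1},
    "D4": {"E4": 2, "C4": 1},
    "E4": {"C4": 2, "G4": 2},
    "G4": {"C4": 3, "E4": 1},
}

def compose(start, length, rng=random.Random(7)):
    melody = [start]
    for _ in range(length - 1):
        succ = transitions.get(melody[-1])
        if not succ:
            break                                      # dead end: no outgoing edges
        notes, weights = zip(*succ.items())
        melody.append(rng.choices(notes, weights=weights, k=1)[0])
    return melody

print(compose("C4", 16))
```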
Implementation Considerations for Multisite Clinical Trials with Cognitive Neuroscience Tasks
Keefe, Richard S. E.; Harvey, Philip D.
2008-01-01
Multisite clinical trials aimed at cognitive enhancement across various neuropsychiatric conditions have employed standard neuropsychological tests as outcome measures. While these tests have enjoyed wide clinical use and have proven reliable and predictive of functional disability, a number of implementation challenges have arisen when these tests are used in clinical trials. These issues are likely to be magnified in future studies when cognitive neuroscience (CN) procedures are explored in these trials, because in their current forms CN procedures are less standardized and more difficult to teach and monitor. For multisite trials, we anticipate that the most challenging issues will include assuring tester competence, monitoring tester performance, specific challenges with complex assessment methods, and having resources available for adequate monitoring of data quality. Suggestions for overcoming these implementation challenges are offered. PMID:18495645
Influence of generalized complexity of a musical event on subjective time estimation.
Bueno, José Lino Oliveira; Firmino, Erico Artioli; Engelman, Arno
2002-04-01
This study examined the variations in the apparent duration of music events produced by differences in their generalized compositional complexity. Stimuli were the first 90 sec. of Gustav Mahler's 3rd Movement of Symphony No. 2 (low complexity) and the first 90 sec. of Luciano Bério's 3rd Movement of Symphony for Eight Voices and Orchestra (high complexity). Bério's symphony is another "reading" of Mahler's. On the compositional base of Mahler's symphony, Bério explored complexity in several musical elements--temporal (i.e., rhythm), nontemporal (i.e., pitch, orchestral and vocal timbre, texture, density), and verbal (i.e., text, words, phonemes). These two somewhat differently filled durations were reproduced by 10 women and 6 men with a stopwatch under the prospective paradigm. Analysis showed that the more generalized complexity of the musical event was followed by greater subjective estimation of the duration of this 90-sec. symphonic excerpt.
Enculturation Effects in Music Cognition: The Role of Age and Music Complexity
ERIC Educational Resources Information Center
Morrison, Steven J.; Demorest, Steven M.; Stambaugh, Laura A.
2008-01-01
The authors replicate and extend findings from previous studies of music enculturation by comparing music memory performance of children to that of adults when listening to culturally familiar and unfamiliar music. Forty-three children and 50 adults, all born and raised in the United States, completed a music memory test comprising unfamiliar…
Music Education: Cultural Values, Social Change and Innovation
ERIC Educational Resources Information Center
Walker, Robert
2007-01-01
This is an important work that addresses the complex issues surrounding musical meaning and experience, and the Western traditional justification for including music in education. The chapters in this volume examine the important subjects of tradition, innovation, social change, the music curriculum, music in the twentieth century, social strata,…
Encountering Complexity: Native Musics in the Curriculum.
ERIC Educational Resources Information Center
Boyea, Andrea
1999-01-01
Describes Native American musics, focusing on issues such as music and the experience of time, metaphor and metaphorical aspects, and spirituality and sounds from nature. Discusses Native American metaphysics and its reflection in the musics. States that an effective curriculum would provide a new receptivity to Native American musics. (CMK)
Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework
NASA Astrophysics Data System (ADS)
Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.
2018-01-01
Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step, which better represents the temporal variability. The question addressed in this paper is: how can the spatiotemporal characteristics of multisite daily precipitation be simulated from probabilistic regression models? Recent publications point out the complexity of the multisite properties of daily precipitation and highlight the need for a flexible non-Gaussian tool. This work proposes a reasonable compromise between simplicity and flexibility that avoids model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model that merges a vector generalized linear model (VGLM, as the probabilistic regression tool) with the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both the at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. Gaussian copula).
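The abstract does not spell out the bootstrapping scheme, so the sketch below shows just one common nonparametric device for the same goal: re-imposing an observed multi-site dependence structure on independently simulated marginals by rank reordering against bootstrapped historical days. It is an assumption-laden illustration with synthetic data, not the authors' method.

```python
# Hedged sketch: Schaake-shuffle-style rank reordering of per-site simulations.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_sites = 1000, 5

# Spatially correlated stand-in "observations": a shared wet/dry driver across sites.
driver = rng.gamma(shape=0.4, scale=6.0, size=(n_days, 1))
observed = 0.7 * driver + 0.3 * rng.gamma(shape=0.4, scale=6.0, size=(n_days, n_sites))
# Independent per-site simulations standing in for draws from fitted marginal models.
simulated = rng.gamma(shape=0.4, scale=6.0, size=(n_days, n_sites))

# 1) Bootstrap whole observed days to obtain an empirical multi-site dependence template.
template = observed[rng.integers(0, n_days, size=n_days)]

# 2) At each site, reorder the simulated values so their ranks match the template's ranks.
reordered = np.empty_like(simulated)
for j in range(n_sites):
    ranks = np.argsort(np.argsort(template[:, j]))
    reordered[:, j] = np.sort(simulated[:, j])[ranks]

print("observed inter-site correlation:\n", np.corrcoef(observed.T).round(2))
print("reordered simulation correlation:\n", np.corrcoef(reordered.T).round(2))
```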
Music, Policy, and Place-Centered Education: Finding Space for Adaptability
ERIC Educational Resources Information Center
Schmidt, Patrick K.
2012-01-01
As a volatile educative space, musical education must be interwoven with other concerns and other more encompassing constructs if it is to build robust, meaningful, and complex learning outcomes. This paper attempts to do this by placing music education and a complex understanding of policy side by side, and outlining what people can learn from…
Misattribution of musical arousal increases sexual attraction towards opposite-sex faces in females.
Marin, Manuela M; Schober, Raphaela; Gingras, Bruno; Leder, Helmut
2017-01-01
Several theories about the origins of music have emphasized its biological and social functions, including in courtship. Music may act as a courtship display due to its capacity to vary in complexity and emotional content. Support for music's reproductive function comes from the recent finding that only women in the fertile phase of the reproductive cycle prefer composers of complex melodies to composers of simple ones as short-term sexual partners, which is also in line with the ovulatory shift hypothesis. However, the precise mechanisms by which music may influence sexual attraction are unknown, specifically how music may interact with visual attractiveness cues and affect perception and behaviour in both genders. Using a crossmodal priming paradigm, we examined whether listening to music influences ratings of facial attractiveness and dating desirability of opposite-sex faces. We also tested whether misattribution of arousal or pleasantness underlies these effects, and explored whether sex differences and menstrual cycle phase may be moderators. Our sample comprised 64 women in the fertile or infertile phase (no hormonal contraception use) and 32 men, carefully matched for mood, relationship status, and musical preferences. Musical primes (25 s) varied in arousal and pleasantness, and targets were photos of faces with neutral expressions (2 s). Group-wise analyses indicated that women, but not men, gave significantly higher ratings of facial attractiveness and dating desirability after having listened to music than in the silent control condition. High-arousing, complex music yielded the largest effects, suggesting that music may affect human courtship behaviour through induced arousal, which calls for further studies on the mechanisms by which music affects sexual attraction in real-life social contexts.
Vuust, Peter; Witek, Maria A. G.
2014-01-01
Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning is manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms. PMID:25324813
When music is salty: The crossmodal associations between sound and taste
Guetta, Rachel; Loui, Psyche
2017-01-01
Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population. PMID:28355227
Why Chinese People Play Western Classical Music: Transcultural Roots of Music Philosophy
ERIC Educational Resources Information Center
Huang, Hao
2012-01-01
This paper addresses the complex relationship between Confucian values and music education in East Asia, particularly its history in China. How does one account for the present "cultural fever" of Western classical music that has infected more than 100 million Chinese practitioners? It is proposed that Western classical music finds…
Marin, Manuela M.; Leder, Helmut
2013-01-01
Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains. Moreover, the affective content of stimuli has been largely neglected so far in the study of complexity but is crucial in many everyday contexts and in aesthetic experiences. We thus propose a cross-domain approach that acknowledges the multidimensional nature of complexity and that uses a wide range of objective complexity measures combined with subjective ratings. In four experiments, we employed pictures of affective environmental scenes, representational paintings, and Romantic solo and chamber music excerpts. Stimuli were pre-selected to vary in emotional content (pleasantness and arousal) and complexity (low versus high number of elements). For each set of stimuli, in a between-subjects design, ratings of familiarity, complexity, pleasantness and arousal were obtained for a presentation time of 25 s from 152 participants. In line with Berlyne’s collative-motivation model, statistical analyses controlling for familiarity revealed a positive relationship between subjective complexity and arousal, and the highest correlations were observed for musical stimuli. Evidence for a mediating role of arousal in the complexity-pleasantness relationship was demonstrated in all experiments, but was only significant for females with regard to music. The direction and strength of the linear relationship between complexity and pleasantness depended on the stimulus type and gender. For environmental scenes, the root mean square contrast measures and measures of compressed file size correlated best with subjective complexity, whereas only edge detection based on phase congruency yielded equivalent results for representational paintings. Measures of compressed file size and event density also showed positive correlations with complexity and arousal in music, which is relevant for the discussion on which aspects of complexity are domain-specific and which are domain-general. PMID:23977295
Marin, Manuela M; Leder, Helmut
2013-01-01
Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains. Moreover, the affective content of stimuli has been largely neglected so far in the study of complexity but is crucial in many everyday contexts and in aesthetic experiences. We thus propose a cross-domain approach that acknowledges the multidimensional nature of complexity and that uses a wide range of objective complexity measures combined with subjective ratings. In four experiments, we employed pictures of affective environmental scenes, representational paintings, and Romantic solo and chamber music excerpts. Stimuli were pre-selected to vary in emotional content (pleasantness and arousal) and complexity (low versus high number of elements). For each set of stimuli, in a between-subjects design, ratings of familiarity, complexity, pleasantness and arousal were obtained for a presentation time of 25 s from 152 participants. In line with Berlyne's collative-motivation model, statistical analyses controlling for familiarity revealed a positive relationship between subjective complexity and arousal, and the highest correlations were observed for musical stimuli. Evidence for a mediating role of arousal in the complexity-pleasantness relationship was demonstrated in all experiments, but was only significant for females with regard to music. The direction and strength of the linear relationship between complexity and pleasantness depended on the stimulus type and gender. For environmental scenes, the root mean square contrast measures and measures of compressed file size correlated best with subjective complexity, whereas only edge detection based on phase congruency yielded equivalent results for representational paintings. Measures of compressed file size and event density also showed positive correlations with complexity and arousal in music, which is relevant for the discussion on which aspects of complexity are domain-specific and which are domain-general.
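Of the objective measures mentioned above, compressed file size is the easiest to reproduce. A minimal sketch follows, using synthetic stand-in arrays rather than the study's images or audio; it is a crude complexity proxy, not the authors' exact measurement pipeline.

```python
# Hedged sketch: compressed size as a rough objective complexity proxy.
import zlib
import numpy as np

def compression_complexity(array):
    """Bytes after zlib compression, normalized by the raw byte count (0..~1)."""
    raw = np.ascontiguousarray(array).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(0)
flat = np.zeros((256, 256), dtype=np.uint8)                      # low complexity
noisy = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)    # high complexity
print("flat image:", round(compression_complexity(flat), 3))
print("noise image:", round(compression_complexity(noisy), 3))
```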
Misattribution of musical arousal increases sexual attraction towards opposite-sex faces in females
Schober, Raphaela; Gingras, Bruno; Leder, Helmut
2017-01-01
Several theories about the origins of music have emphasized its biological and social functions, including in courtship. Music may act as a courtship display due to its capacity to vary in complexity and emotional content. Support for music’s reproductive function comes from the recent finding that only women in the fertile phase of the reproductive cycle prefer composers of complex melodies to composers of simple ones as short-term sexual partners, which is also in line with the ovulatory shift hypothesis. However, the precise mechanisms by which music may influence sexual attraction are unknown, specifically how music may interact with visual attractiveness cues and affect perception and behaviour in both genders. Using a crossmodal priming paradigm, we examined whether listening to music influences ratings of facial attractiveness and dating desirability of opposite-sex faces. We also tested whether misattribution of arousal or pleasantness underlies these effects, and explored whether sex differences and menstrual cycle phase may be moderators. Our sample comprised 64 women in the fertile or infertile phase (no hormonal contraception use) and 32 men, carefully matched for mood, relationship status, and musical preferences. Musical primes (25 s) varied in arousal and pleasantness, and targets were photos of faces with neutral expressions (2 s). Group-wise analyses indicated that women, but not men, gave significantly higher ratings of facial attractiveness and dating desirability after having listened to music than in the silent control condition. High-arousing, complex music yielded the largest effects, suggesting that music may affect human courtship behaviour through induced arousal, which calls for further studies on the mechanisms by which music affects sexual attraction in real-life social contexts. PMID:28892486
Scaramouche Goes to Preschool: The Complex Matrix of Young Children's Everyday Music
ERIC Educational Resources Information Center
Ilari, Beatriz
2018-01-01
This article examines everyday musical practices and their connections to young children's learning and development, in and through music. It begins with a discussion of music learning in early childhood as a form of participation and levels of intention in learning. Next, conceptions of child that have dominated early childhood music education…
ERIC Educational Resources Information Center
Bell, Adam Patrick
2017-01-01
What does it mean to experience disability in music? Based on interviews with Patrick Anderson--arguably the greatest wheelchair basketball player of all time--this article presents insights into the complexities of the experience of disability in sports and music. Contrasted with music education's tendency to adhere to a medicalized model of…
Cortical systems associated with covert music rehearsal.
Langheim, Frederick J P; Callicott, Joseph H; Mattay, Venkata S; Duyn, Jeff H; Weinberger, Daniel R
2002-08-01
Musical representation and overt music production are necessarily complex cognitive phenomena. While overt musical performance may be observed and studied, the act of performance itself necessarily skews results toward the importance of primary sensorimotor and auditory cortices. However, imagined musical performance (IMP) represents a complex behavioral task involving components suited to exploring the physiological underpinnings of musical cognition in music performance without the sensorimotor and auditory confounds of overt performance. We mapped the blood oxygenation level-dependent fMRI activation response associated with IMP in experienced musicians independent of the piece imagined. IMP consistently activated supplementary motor and premotor areas, right superior parietal lobule, right inferior frontal gyrus, bilateral mid-frontal gyri, and bilateral lateral cerebellum in contrast with rest, in a manner distinct from fingertapping versus rest and passive listening to the same piece versus rest. These data implicate an associative network independent of primary sensorimotor and auditory activity, likely representing the cortical elements most intimately linked to music production.
New Learning of Music after Bilateral Medial Temporal Lobe Damage: Evidence from an Amnesic Patient
Valtonen, Jussi; Gregory, Emma; Landau, Barbara; McCloskey, Michael
2014-01-01
Damage to the hippocampus impairs the ability to acquire new declarative memories, but not the ability to learn simple motor tasks. An unresolved question is whether hippocampal damage affects learning for music performance, which requires motor processes, but in a cognitively complex context. We studied learning of novel musical pieces by sight-reading in a newly identified amnesic, LSJ, who was a skilled amateur violist prior to contracting herpes simplex encephalitis. LSJ has suffered virtually complete destruction of the hippocampus bilaterally, as well as extensive damage to other medial temporal lobe structures and the left anterior temporal lobe. Because of LSJ’s rare combination of musical training and near-complete hippocampal destruction, her case provides a unique opportunity to investigate the role of the hippocampus for complex motor learning processes specifically related to music performance. Three novel pieces of viola music were composed and closely matched for factors contributing to a piece’s musical complexity. LSJ practiced playing two of the pieces, one in each of the two sessions during the same day. Relative to a third unpracticed control piece, LSJ showed significant pre- to post-training improvement for the two practiced pieces. Learning effects were observed both with detailed analyses of correctly played notes, and with subjective whole-piece performance evaluations by string instrument players. The learning effects were evident immediately after practice and 14 days later. The observed learning stands in sharp contrast to LSJ’s complete lack of awareness that the same pieces were being presented repeatedly, and to the profound impairments she exhibits in other learning tasks. Although learning in simple motor tasks has been previously observed in amnesic patients, our results demonstrate that non-hippocampal structures can support complex learning of novel musical sequences for music performance. PMID:25232312
Instrumentational complexity of music genres and why simplicity sells.
Percino, Gamaliel; Klimek, Peter; Thurner, Stefan
2014-01-01
Listening habits are strongly influenced by two opposing aspects, the desire for variety and the demand for uniformity in music. In this work we quantify these two notions in terms of instrumentation and production technologies that are typically involved in crafting popular music. We assign an 'instrumentational complexity value' to each music style. Styles of low instrumentational complexity tend to have generic instrumentations that can also be found in many other styles. Styles of high complexity, on the other hand, are characterized by a large variety of instruments that can only be found in a small number of other styles. To model these results we propose a simple stochastic model that explicitly takes the capabilities of artists into account. We find empirical evidence that individual styles show dramatic changes in their instrumentational complexity over the last fifty years. 'New wave' or 'disco' quickly climbed towards higher complexity in the 70s and fell back to low complexity levels shortly afterwards, whereas styles like 'folk rock' remained at constant high instrumentational complexity levels. We show that changes in the instrumentational complexity of a style are related to its number of sales and to the number of artists contributing to that style. As a style attracts a growing number of artists, its instrumentational variety usually increases. At the same time the instrumentational uniformity of a style decreases, i.e. a unique stylistic and increasingly complex expression pattern emerges. In contrast, album sales of a given style typically increase with decreasing instrumentational complexity. This can be interpreted as music becoming increasingly formulaic in terms of instrumentation once commercial or mainstream success sets in.
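The exact definition of the 'instrumentational complexity value' is not given in the abstract, so the sketch below is only a rough, hypothetical proxy for the stated intuition that a style scores high when its instruments occur in few other styles. It uses an inverse style-frequency weighting and made-up style-instrument data.

```python
# Hedged, illustrative proxy only; not the paper's published complexity measure.
import math

style_instruments = {                     # hypothetical style -> instrument sets
    "folk rock": {"acoustic guitar", "harmonica", "fiddle", "mandolin"},
    "disco": {"drum machine", "bass guitar", "strings"},
    "pop": {"drum machine", "bass guitar", "synth", "vocals"},
    "new wave": {"synth", "drum machine", "bass guitar"},
}

n_styles = len(style_instruments)
style_count = {}                          # in how many styles each instrument appears
for instruments in style_instruments.values():
    for inst in instruments:
        style_count[inst] = style_count.get(inst, 0) + 1

def complexity(style):
    # Rare instruments (found in few styles) contribute more than generic ones.
    instruments = style_instruments[style]
    return sum(math.log(n_styles / style_count[i]) for i in instruments) / len(instruments)

for style in style_instruments:
    print(style, round(complexity(style), 2))
```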
Neural underpinnings of music: the polyrhythmic brain.
Vuust, Peter; Gebauer, Line K; Witek, Maria A G
2014-01-01
Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has the remarkable ability to move our minds and bodies. Why do certain rhythms make us want to tap our feet, bop our heads or even get up and dance? And how does the brain process rhythmically complex rhythms during our experiences of music? In this chapter, we describe some common forms of rhythmic complexity in music and propose that the theory of predictive coding can explain how rhythm and rhythmic complexity are processed in the brain. We also consider how this theory may reveal why we feel so compelled by rhythmic tension in music. First, musical-theoretical and neuroscientific frameworks of rhythm are presented, in which rhythm perception is conceptualized as an interaction between what is heard ('rhythm') and the brain's anticipatory structuring of music ('the meter'). Second, three different examples of tension between rhythm and meter in music are described: syncopation, polyrhythm and groove. Third, we present the theory of predictive coding of music, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning is manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Fourth, empirical studies of neural and behavioral effects of syncopation, polyrhythm and groove will be reported, and we propose how these studies can be seen as special cases of the predictive coding theory. Finally, we argue that musical rhythm exploits the brain's general principles of anticipation and propose that pleasure from musical rhythm may be a result of such anticipatory mechanisms.
Klotz, Sebastian
2008-09-01
The study of acoustics, harmonics and of music has been providing scientific models since Greek Antiquity. Since the early modern period, two separate cultures have emerged out of the study of music: a technical acoustics and an aesthetically and philosophically inspired musical criticism. In the writings of Johann Friedrich Herbart (1811) a scientific approach to musical aesthetics and to music perception took shape that reinstated the listening process as a highly complex and logical phenomenon. By opening music to scientific psychological investigation, Herbart pioneered the physiologically and acoustically grounded seminal work by Hermann von Helmholtz, On the sensations of tone (1863), which the author considered a prerequisite for musical aesthetics and music theory. Helmholtz in turn inspired the philosopher and psychologist Carl Stumpf to further investigate musical perception (beginning in 1883). To Stumpf, it provided a paradigm for experimental psychology, as mental functions and phenomena could be studied in detail. These functions and phenomena are the actual objects of scientific study in Stumpf's inductive and descriptive psychology. Combining insights from statistics, ethnology, anthropology, psychoacoustics and the cultural history of mankind, Stumpf and his team developed a new blend of science which absorbs styles of reasoning, analytical procedures and academic convictions from natural history, the natural sciences and the humanities, but at the same time identifies shortcomings of these approaches that fail to grasp the complexities of psychic functions. Despite the team's reliance on the quasi-objective phonograph and its commitment to objectivity, precision and measurement, mental phenomena relating to tonal perception and to music proved too complex a challenge to be easily articulated and shared by the scientific community after 1900. The essay illustrates these tensions against the background of a history of objectivity.
Max Roach's Adventures in Higher Music Education.
ERIC Educational Resources Information Center
Hentoff, Nat
1980-01-01
Max Roach and the author discuss Roach's efforts to gain recognition of the complexity and importance of American musical forms, particularly jazz, by American university music departments. In addition, Roach describes his approach to marketing his music, an approach which avoids the economic exploitation often suffered by American jazz musicians.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridley, Moira K.; Hiemstra, T.; Machesky, Michael L.
2012-01-01
The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m) and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and a Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes was constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamics (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (110) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impact Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models (SCMs) should aid in elucidating a fundamental understanding of ion-adsorption reactions.
NASA Astrophysics Data System (ADS)
Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.; Wesolowski, David J.; van Riemsdijk, Willem H.
2012-10-01
The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m) and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and a Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes was constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamics (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (1 1 0) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impact Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models (SCMs) should aid in elucidating a fundamental understanding of ion-adsorption reactions.
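As a point of reference for the bond-valence interpretation mentioned above, a back-of-the-envelope version (assuming eight-fold oxygen coordination of Y3+ and ignoring the interfacial-water dipole correction the authors apply) distributes the ionic charge as follows:

```latex
% Pauling bond valence per Y-O bond, for an assumed coordination number of 8:
s = \frac{z}{\mathrm{CN}} = \frac{3}{8} = 0.375\ \text{v.u.}
% A tetradentate complex donates four such bonds to surface oxygens (0-plane):
\Delta z_0 \approx n_{\mathrm{surf}}\, s = 4 \times 0.375 = 1.5
% The remaining charge is carried by the ligands located in the Stern plane (1-plane):
\Delta z_1 \approx z - \Delta z_0 = 3 - 1.5 = 1.5
```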
Impact of Noise Reduction Algorithm in Cochlear Implant Processing on Music Enjoyment.
Kohlberg, Gavriel D; Mancuso, Dean M; Griffin, Brianna M; Spitzer, Jaclyn B; Lalwani, Anil K
2016-06-01
A noise reduction algorithm (NRA) in the speech processing strategy has a positive impact on speech perception among cochlear implant (CI) listeners. We sought to evaluate the effect of NRA on music enjoyment. Prospective analysis of music enjoyment. Academic medical center. Normal-hearing (NH) adults (N = 16) and CI listeners (N = 9). Subjective rating of music excerpts. NH and CI listeners evaluated a country music piece on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version and 20 modified, less complex versions created by including subsets of musical instruments from the original song. NH participants listened to the segments through CI simulation and CI listeners listened to the segments with their usual speech processing strategy, with and without NRA. Decreasing the number of instruments was significantly associated with an increase in pleasantness and naturalness in both NH and CI subjects (p < 0.05). However, there was no difference in music enjoyment with or without NRA for either NH listeners with CI simulation or CI listeners across all three modalities of pleasantness, musicality, and naturalness (p > 0.05); this was true for the original and the modified music segments with one to three instruments (p > 0.05). NRA does not affect music enjoyment in CI listeners or NH individuals with CI simulation. This suggests that strategies to enhance speech processing will not necessarily have a positive impact on music enjoyment. However, reducing the complexity of music shows promise in enhancing music enjoyment and should be further explored.
Lelo-de-Larrea-Mancera, E Sebastian; Rodríguez-Agudelo, Yaneth; Solís-Vivanco, Rodolfo
2017-06-01
Music represents a complex form of human cognition. To what extent our auditory system is attuned to music is yet to be clearly understood. Our principal aim was to determine whether the neurophysiological operations underlying pre-attentive auditory change detection (N1 enhancement (N1e)/mismatch negativity (MMN)) and the subsequent involuntary attentional reallocation (P3a) towards infrequent sound omissions are influenced by differences in musical content. Specifically, we intended to explore any interaction effects that the rhythmic and pitch dimensions of musical organization may have on these processes. Results showed that both the N1e and MMN amplitudes were differentially influenced by the rhythm and pitch dimensions. MMN latencies were shorter for musical structures containing both features. This suggests some neurocognitive independence between the pitch and rhythm domains, but also calls for further investigation of possible interactions between the two at the level of early, automatic auditory detection. Furthermore, the results demonstrate that the N1e reflects basic sensory memory processes. Lastly, we show that the involuntary switch of attention associated with the P3a reflects a general-purpose mechanism not modulated by musical features. Altogether, the N1e/MMN/P3a complex elicited by infrequent sound omissions revealed evidence of musical influence over early stages of auditory perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
Instrumentational Complexity of Music Genres and Why Simplicity Sells
Percino, Gamaliel; Klimek, Peter; Thurner, Stefan
2014-01-01
Listening habits are strongly influenced by two opposing aspects, the desire for variety and the demand for uniformity in music. In this work we quantify these two notions in terms of instrumentation and production technologies that are typically involved in crafting popular music. We assign an ‘instrumentational complexity value’ to each music style. Styles of low instrumentational complexity tend to have generic instrumentations that can also be found in many other styles. Styles of high complexity, on the other hand, are characterized by a large variety of instruments that can only be found in a small number of other styles. To model these results we propose a simple stochastic model that explicitly takes the capabilities of artists into account. We find empirical evidence that individual styles show dramatic changes in their instrumentational complexity over the last fifty years. ‘New wave’ or ‘disco’ quickly climbed towards higher complexity in the 70s and fell back to low complexity levels shortly afterwards, whereas styles like ‘folk rock’ remained at constant high instrumentational complexity levels. We show that changes in the instrumentational complexity of a style are related to its number of sales and to the number of artists contributing to that style. As a style attracts a growing number of artists, its instrumentational variety usually increases. At the same time the instrumentational uniformity of a style decreases, i.e. a unique stylistic and increasingly complex expression pattern emerges. In contrast, album sales of a given style typically increase with decreasing instrumentational complexity. This can be interpreted as music becoming increasingly formulaic in terms of instrumentation once commercial or mainstream success sets in. PMID:25551631
Alteration of complex negative emotions induced by music in euthymic patients with bipolar disorder.
Choppin, Sabine; Trost, Wiebke; Dondaine, Thibaut; Millet, Bruno; Drapier, Dominique; Vérin, Marc; Robert, Gabriel; Grandjean, Didier
2016-02-01
Research has shown bipolar disorder to be characterized by dysregulation of emotion processing, including biases in facial expression recognition that are most prevalent during depressive and manic states. Very few studies have examined induced emotions when patients are in a euthymic phase, and there has been no research on complex emotions. We therefore set out to test emotional hyperreactivity in response to musical excerpts inducing complex emotions in bipolar disorder during euthymia. We recruited 21 patients with bipolar disorder (BD) in a euthymic phase and 21 matched healthy controls. Participants first rated their emotional reactivity on two validated self-report scales (ERS and MAThyS). They then rated their music-induced emotions on nine continuous scales. The targeted emotions were wonder, power, melancholy and tension. We used a generalized linear mixed model to analyze the behavioral data. We found that participants in the euthymic bipolar group experienced more intense complex negative emotions than controls when the musical excerpts induced wonder. Moreover, patients exhibited greater emotional reactivity in daily life (ERS). Finally, a greater experience of tension while listening to positive music seemed to be mediated by greater emotional reactivity and a deficit in executive functions. The heterogeneity of the BD group in terms of clinical characteristics may have influenced the results. Euthymic patients with bipolar disorder exhibit more complex negative emotions than controls in response to positive music. Copyright © 2015 Elsevier B.V. All rights reserved.
Incidental Learning of Melodic Structure of North Indian Music
ERIC Educational Resources Information Center
Rohrmeier, Martin; Widdess, Richard
2017-01-01
Musical knowledge is largely implicit. It is acquired without awareness of its complex rules, through interaction with a large number of samples during musical enculturation. Whereas several studies explored implicit learning of mostly abstract and less ecologically valid features of Western music, very little work has been done with respect to…
Do Students See Themselves in the Music Curriculum? A Project to Encourage Inclusion
ERIC Educational Resources Information Center
Peters, Gretchen
2016-01-01
Greater diversity and inclusivity in music curricula are goals that permeate discussions in education. Achieving these goals, however, is complex and presents inherent difficulties as music not only values tradition but also promotes the past. Many music traditions were exclusionary and unintentionally cultivated a culture in which many people…
Learning Tunes: Pop Music in the Classroom
ERIC Educational Resources Information Center
Moore, David Cooper
2011-01-01
Popular music can be a rich and engaging teaching tool. Children often intuitively gain complex layers of experiential meaning more readily from music than from other, plot-driven forms of media, where a narrow focus on literal interpretation and plot recitation can bring thoughtful conversation to a halt. However, integrating popular music in the…
Kotchoubey, Boris; Pavlov, Yuri G; Kleber, Boris
2015-01-01
According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor speaking in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to prevalent impairments of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows us to use auditory stimulation at various levels of complexity. The current paper reviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients' self-consciousness, which may even exist when all higher-level cognitions are lost, whereas music-induced emotions and rhythmic stimulation can affect the dopaminergic reward system and activity in the motor system, respectively, thus serving as a starting point for rehabilitation.
Infants' long-term memory for complex music
NASA Astrophysics Data System (ADS)
Ilari, Beatriz; Polka, Linda; Costa-Giomi, Eugenia
2002-05-01
In this study we examined infants' long-term memory for two complex pieces of music. A group of thirty 7.5-month-old infants was exposed daily to one short piano piece (i.e., either the Prelude or the Forlane by Maurice Ravel) for ten consecutive days. Following the 10-day exposure period, there was a two-week retention period in which no exposure to the piece occurred. After the retention period, infants were tested on the Headturn Preference Procedure. At test, 8 different excerpts of the familiar piece were mixed with 8 different foil excerpts of the unfamiliar one. Infants showed a significant preference for the familiar piece of music. A control group of fifteen nonexposed infants was also tested and showed no preference for either piece of music. These results suggest that infants in the exposure group retained the familiar music in their long-term memory. This was demonstrated by their ability to discriminate between the different excerpts of both the familiar and the unfamiliar pieces of music, and by their preference for the familiar piece. Confirming previous findings (Jusczyk and Hohne, 1993; Saffran et al., 2000), these results suggest that infants can retain complex pieces of music in their long-term memory for two weeks.
Principles of structure building in music, language and animal song
Rohrmeier, Martin; Zuidema, Willem; Wiggins, Geraint A.; Scharff, Constance
2015-01-01
Human language, music and a variety of animal vocalizations constitute ways of sonic communication that exhibit remarkable structural complexity. While the complexities of language and possible parallels in animal communication have been discussed intensively, reflections on the complexity of music and animal song, and their comparisons, are underrepresented. In some ways, music and animal songs are more comparable to each other than to language, as propositional semantics cannot be used as an indicator of communicative success or well-formedness, and notions of grammaticality are less easily defined. This review brings together accounts of the principles of structure building in music and animal song. It relates them to corresponding models in formal language theory, the extended Chomsky hierarchy (CH), and their probabilistic counterparts. We further discuss common misunderstandings and shortcomings concerning the CH and suggest ways to move beyond it. We discuss language, music and animal song in the context of their function and motivation and further integrate problems and issues that are less commonly addressed in the context of language, including continuous event spaces, features of sound and timbre, representation of temporality and interactions of multiple parallel feature streams. We discuss these aspects in the light of recent theoretical, cognitive, neuroscientific and modelling research in the domains of music, language and animal song. PMID:25646520
ERIC Educational Resources Information Center
McPhee, Alastair; Stollery, Peter; McMillan, Ros
2005-01-01
For some time there has been debate about differing perspectives on musical gift and musical intelligence. One view is that musical gift is innate: that it is present in certain individuals from birth and that the task of the teacher is to develop the potential which is there. A second view is that musical gift is a complex concept which includes…
Creating Time: Social Collaboration in Music Improvisation.
Walton, Ashley E; Washburn, Auriel; Langland-Hassan, Peter; Chemero, Anthony; Kloos, Heidi; Richardson, Michael J
2018-01-01
Musical collaboration emerges from the complex interaction of environmental and informational constraints, including those of the instruments and the performance context. Music improvisation in particular is more like everyday interaction in that dynamics emerge spontaneously without a rehearsed score or script. We examined how the structure of the musical context affords and shapes interactions between improvising musicians. Six pairs of professional piano players improvised with two different backing tracks while we recorded both the music produced and the movements of their heads, left arms, and right arms. The backing tracks varied in rhythmic and harmonic information, from a chord progression to a continuous drone. Differences in movement coordination and playing behavior were evaluated using the mathematical tools of complex dynamical systems, with the aim of uncovering the multiscale dynamics that characterize musical collaboration. Collectively, the findings indicated that each backing track afforded the emergence of different patterns of coordination with respect to how the musicians played together, how they moved together, as well as their experience collaborating with each other. Additionally, listeners' experiences of the music when rating audio recordings of the improvised performances were related to the way the musicians coordinated both their playing behavior and their bodily movements. Accordingly, the study revealed how complex dynamical systems methods (namely recurrence analysis) can capture the turn-taking dynamics that characterized both the social exchange of the music improvisation and the sounds of collaboration more generally. The study also demonstrated how musical improvisation provides a way of understanding how social interaction emerges from the structure of the behavioral task context. Copyright © 2017 Cognitive Science Society, Inc.
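Recurrence analysis, named above as the key method, can be sketched in a minimal form (toy data and a simple threshold are assumptions; the study's full multiscale pipeline is richer): two movement time series are compared point by point, a cross-recurrence matrix marks the times at which the two signals visit similar states, and the recurrence rate summarizes how often the players' movements co-occur.

```python
import numpy as np

def cross_recurrence(x, y, radius):
    """Binary cross-recurrence matrix: R[i, j] = 1 when the distance
    between x[i] and y[j] falls below the chosen radius."""
    d = np.abs(x[:, None] - y[None, :])
    return (d < radius).astype(int)

# Toy example: two noisy, loosely coupled oscillations standing in for two
# performers' head-movement series (purely illustrative data).
t = np.linspace(0, 20, 400)
x = np.sin(t) + 0.1 * np.random.randn(t.size)
y = np.sin(t + 0.3) + 0.1 * np.random.randn(t.size)

R = cross_recurrence(x, y, radius=0.2)
print("cross-recurrence rate:", R.mean())
```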
Musical Preferences as a Function of Stimulus Complexity of Piano Jazz
ERIC Educational Resources Information Center
Gordon, Josh; Gridley, Mark C.
2013-01-01
Seven excerpts of modern jazz piano improvisations were selected to represent a range of perceived complexities. Audio recordings of the excerpts were played for 27 listeners who were asked to indicate their level of enjoyment on 7-point scales. Indications of enjoyment followed an inverted-U when plotted against perceived complexity of the music.…
Fractal structure enables temporal prediction in music.
Rankin, Summer K; Fink, Philip W; Large, Edward W
2014-10-01
1/f serial correlations and statistical self-similarity (fractal structure) have been measured in various dimensions of musical compositions. Musical performances also display 1/f properties in expressive tempo fluctuations, and listeners predict tempo changes when synchronizing. Here the authors show that the 1/f structure is sufficient for listeners to predict the onset times of upcoming musical events. These results reveal what information listeners use to anticipate events in complex, non-isochronous acoustic rhythms, and modeling this ability will require innovative models of temporal synchronization. This finding could improve therapies for Parkinson's and related disorders and inform a deeper understanding of how endogenous neural rhythms anticipate events in complex, temporally structured communication signals.
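As a minimal sketch of what "1/f serial correlations" means here (synthetic noise only; the study used musical event onsets, not generated series), one can build a fluctuation series whose power spectrum falls off roughly as 1/f and check the spectral slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14

# Shape white noise in the frequency domain so that power ~ 1/f.
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
spectrum[1:] /= np.sqrt(freqs[1:])   # amplitude ~ f^(-1/2)  =>  power ~ 1/f
spectrum[0] = 0.0
series = np.fft.irfft(spectrum, n)

# Estimate the spectral exponent with a log-log fit (should be close to -1).
power = np.abs(np.fft.rfft(series)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
print("estimated spectral slope:", round(slope, 2))
```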
Teaching Materials and Strategies for the AP Music Theory Exam
ERIC Educational Resources Information Center
Lively, Michael T.
2017-01-01
Each year, many students take the Advanced Placement (AP) Music Theory Exam, and the majority of these students enroll in specialized AP music theory classes as part of the preparation process. For the teachers of these AP music theory classes, a number of challenges are presented by the difficulty and complexity of the exam subject material as…
ERIC Educational Resources Information Center
Forrester, Sommer H.
2018-01-01
The purpose of this study was to examine the complexities of instrumental music teacher knowledge as they relate to the intersection between instrumental music teaching and conducting, and to explore how participants describe and perceive these intersections. The key research question guiding this study was, How do high school instrumental music…
Cultural Diversity and the Formation of Identity: Our Role as Music Teachers
ERIC Educational Resources Information Center
Fitzpatrick, Kate R.
2012-01-01
This article encourages music teachers to consider the complexity of their students' cultural identities and the role these identities play in the formation of students' self-concept. The musical heritage students bring to the classroom may provide a rich foundation of experience for teaching and learning music. Readers are challenged to consider…
Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.
Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari
2017-01-01
Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks. More musically sophisticated Finnish speakers do, however, show enhanced pitch discrimination and greater duration modulation in a complex task. These results are consistent with a ceiling effect for sound features that correspond to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real-world musical situation. These results have implications for research into the specificity of plasticity in the auditory system, as well as into the effects of the interaction of specific language features with musical experiences.
Markov Chain Analysis of Musical Dice Games
NASA Astrophysics Data System (ADS)
Volchenkov, D.; Dawin, J. R.
2012-07-01
A system for using dice to compose music randomly is known as a musical dice game. Discrete-time MIDI models of 804 pieces of classical music written by 29 composers were encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and to characterize a composer.
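A minimal sketch of this kind of analysis is shown below (a toy note sequence and add-one smoothing are assumptions, not the authors' MIDI corpus): estimate a transition matrix from a note sequence, compute its stationary distribution and entropy rate, and obtain mean first passage times into a target note by solving the standard linear system.

```python
import numpy as np

notes = list("CDECDEGFEDCDC")          # toy melody standing in for a MIDI piece
alphabet = sorted(set(notes))
idx = {s: i for i, s in enumerate(alphabet)}

# Transition counts with add-one smoothing, then row-normalise.
k = len(alphabet)
P = np.ones((k, k))
for a, b in zip(notes[:-1], notes[1:]):
    P[idx[a], idx[b]] += 1
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

# Entropy rate of the chain (bits per note).
H = -np.sum(pi[:, None] * P * np.log2(P))
print("entropy rate:", round(H, 3))

# Mean first passage times into note j: solve (I - Q) m = 1 over the other states.
j = idx["G"]
mask = np.arange(k) != j
Q = P[np.ix_(mask, mask)]
m = np.linalg.solve(np.eye(k - 1) - Q, np.ones(k - 1))
print("mean first passage times to 'G':", dict(zip(np.array(alphabet)[mask], m.round(2))))
```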
Alexander, Ashlin J; Bartel, Lee; Friesen, Lendra; Shipp, David; Chen, Joseph
2011-02-01
Cochlear implants (CIs) allow many profoundly deaf individuals to regain speech understanding. However, the ability to understand speech does not necessarily guarantee music enjoyment. Enabling a CI user to recover the ability to perceive and enjoy the complexity of music remains a challenge determined by many factors. (1) To construct a novel, attention-based, diagnostic software tool (Music EAR) for the assessment of music enjoyment and perception and (2) to compare the results among three listener groups. Thirty-six subjects completed the Music EAR assessment tool: 12 normal-hearing musicians (NHMs), 12 normal-hearing nonmusicians (NHnMs), and 12 CI listeners. Subjects were required to (1) rate enjoyment of musical excerpts at three complexity levels; (2) differentiate five instrumental timbres; (3) recognize pitch pattern variation; and (4) identify target musical patterns embedded holistically in a melody. Enjoyment scores for CI users were comparable to those for NHMs and superior to those for NHnMs and revealed that implantees enjoyed classical music most. CI users performed significantly poorer in all categories of music perception compared to normal-hearing listeners. Overall CI user scores were lowest in those tasks requiring increased attention. Two high-performing subjects matched or outperformed NHnMs in pitch and timbre perception tasks. The Music EAR assessment tool provides a unique approach to the measurement of music perception and enjoyment in CI users. Together with auditory training evidence, the results provide considerable hope for further recovery of music appreciation through methodical rehabilitation.
The Enhanced Musical Rhythmic Perception in Second Language Learners
Roncaglia-Denissen, M. Paula; Roor, Drikus A.; Chen, Ao; Sadakata, Makiko
2016-01-01
Previous research suggests that mastering languages with distinct rather than similar rhythmic properties enhances musical rhythmic perception. This study investigates whether learning a second language (L2) contributes to enhanced musical rhythmic perception in general, regardless of the rhythmic properties of the first and second languages. Additionally, we investigated whether this perceptual enhancement could be alternatively explained by exposure to musical rhythmic complexity, such as the use of compound meter in Turkish music. Finally, it investigates whether an enhancement of musical rhythmic perception can be observed among L2 learners whose first language relies heavily on pitch information, as is the case with tonal languages. We therefore tested Turkish, Dutch and Mandarin L2 learners of English and Turkish monolinguals on their musical rhythmic perception. Participants' phonological and working memory capacities, melodic aptitude, years of formal musical training and daily exposure to music were assessed to account for cultural and individual differences that could affect their rhythmic ability. Our results suggest that mastering an L2, rather than exposure to musical rhythmic complexity, could explain individuals' enhanced musical rhythmic perception. An even stronger enhancement of musical rhythmic perception was observed for L2 learners whose first and second languages differ in their rhythmic properties, as the enhanced performance of Turkish compared with Dutch L2 learners of English seems to suggest. Such a stronger enhancement of rhythmic perception is found even among L2 learners whose first language relies heavily on pitch information, as the performance of Mandarin L2 learners of English indicates. Our findings provide further support for a cognitive transfer between the language and music domains. PMID:27375469
Music as the Representative of the World Picture, the Phenomenon of Culture
ERIC Educational Resources Information Center
Kossanova, Aigul Sh.; Yermanov, Zhanat R.; Bekenova, Aizhan S.; Julmukhamedova, Aizhan A.; Takezhanova, Roza Ph.; Zhussupova, Saule S.
2016-01-01
The purpose of this article is to study music as a representative of the world picture of nomadic culture. With its systemic organization and rich expressive means, music reflects the diversity of the world in its complex, subtle and profound manifestations, serving both as an artistic value and as a key world-modeling element. Music can satisfy the aesthetic…
Syncopation, Body-Movement and Pleasure in Groove Music
Witek, Maria A. G.; Clarke, Eric F.; Wallentin, Mikkel; Kringelbach, Morten L.; Vuust, Peter
2014-01-01
Moving to music is an essential human pleasure particularly related to musical groove. Structurally, music associated with groove is often characterised by rhythmic complexity in the form of syncopation, frequently observed in musical styles such as funk, hip-hop and electronic dance music. Structural complexity has been related to positive affect in music more broadly, but the function of syncopation in eliciting pleasure and body-movement in groove is unknown. Here we report results from a web-based survey which investigated the relationship between syncopation and ratings of wanting to move and experienced pleasure. Participants heard funk drum-breaks with varying degrees of syncopation and audio entropy, and rated the extent to which the drum-breaks made them want to move and how much pleasure they experienced. While entropy was found to be a poor predictor of wanting to move and pleasure, the results showed that medium degrees of syncopation elicited the most desire to move and the most pleasure, particularly for participants who enjoy dancing to music. Hence, there is an inverted U-shaped relationship between syncopation, body-movement and pleasure, and syncopation seems to be an important structural factor in embodied and affective responses to groove. PMID:24740381
Looking at Music Through a Business Prism: The Anatomy of a Black Company
ERIC Educational Resources Information Center
Byrd, Donald
1977-01-01
Discusses how "the operation of a small independent music production company takes place within a complex environment. Intense forces, both positive and negative, interact in a unique way in the development of a black music company." (Author/JM)
There's More to Groove than Bass in Electronic Dance Music: Why Some People Won't Dance to Techno.
Wesolowski, Brian C; Hofmann, Alex
2016-01-01
The purpose of this study was to explore the relationship between audio descriptors for groove-based electronic dance music (EDM) and raters' perceived cognitive, affective, and psychomotor responses. From 198 musical excerpts (length: 15 sec.) representing 11 subgenres of EDM, 19 low-level audio feature descriptors were extracted. A principal component analysis of the feature vectors indicated that the musical excerpts could effectively be classified using five complex measures, describing the rhythmical properties of: (a) the high-frequency band, (b) the mid-frequency band, and (c) the low-frequency band, as well as overall fluctuations in (d) dynamics, and (e) timbres. Using these five complex audio measures, four meaningful clusters of the EDM excerpts emerged with distinct musical attributes comprising music with: (a) isochronous bass and static timbres, (b) isochronous bass with fluctuating dynamics and rhythmical variations in the mid-frequency range, (c) non-isochronous bass and fluctuating timbres, and (d) non-isochronous bass with rhythmical variations in the high frequencies. Raters (N = 99) were each asked to respond to four musical excerpts using a four point Likert-Type scale consisting of items representing cognitive (n = 9), affective (n = 9), and psychomotor (n = 3) domains. Musical excerpts falling under the cluster of "non-isochronous bass with rhythmical variations in the high frequencies" demonstrated the overall highest composite scores as evaluated by the raters. Musical samples falling under the cluster of "isochronous bass with static timbres" demonstrated the overall lowest composite scores as evaluated by the raters. Moreover, music preference was shown to significantly affect the systematic patterning of raters' responses for those with a musical preference for "contemporary" music, "sophisticated" music, and "intense" music.
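A minimal sketch of the analysis pipeline described above is given below; the excerpt and descriptor counts come from the abstract, but the random placeholder feature matrix and the specific preprocessing choices are assumptions, not the authors' data or settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((198, 19))   # 198 excerpts x 19 low-level descriptors (placeholder data)

# Standardise, reduce to five components, then group excerpts into four clusters.
Z = StandardScaler().fit_transform(X)
components = PCA(n_components=5).fit_transform(Z)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(components)

print("reduced feature matrix shape:", components.shape)
print("cluster sizes:", np.bincount(labels))
```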
Chirico, Alice; Serino, Silvia; Cipresso, Pietro; Gaggioli, Andrea; Riva, Giuseppe
2015-01-01
It is not unusual to experience a sense of total absorption, concentration, action-awareness, distortion of time and intrinsic enjoyment during an activity that involves music. Indeed, it is noted that there is a special relationship between these two aspects (i.e., music and flow experience). In order to deeply explore flow in the musical domain, it is crucial to consider the complexity of the flow experience—both as a “state” and as a “trait.” Secondly, since music is a multifaceted domain, it is necessary to concentrate on specific music settings, such as (i) musical composition; (ii) listening; and (iii) musical performance. To address these issues, the current review aims to outline flow experience as a “trait” and as a “state” in the three above-mentioned musical domains. Clear and useful guidelines to distinguish between flow as a “state” and as a “trait” are provided by literature concerning flow assessment. For this purpose, three aspects of the selected studies are discussed and analyzed: (i) the characteristics of the flow assessments used; (ii) the experimental design; (iii) the results; and (iv) the interrelations between the three domains. Results showed that the dispositional approach is predominant in the above-mentioned settings, mainly regarding music performance. Several aspects concerning musical contexts still need to be deeply analyzed. Future challenges could include the role of a group level of analysis, overcoming a frequency approach toward dispositional flow, and integrating both state and dispositional flow perspectives in order to deepen comprehension of how flow takes place in musical contexts. Finally, to explain the complex relationship between these two phenomena, we suggest that music and flow could be seen as an emergent embodied system. PMID:26175709
The do re mi's of everyday life: the structure and personality correlates of music preferences.
Rentfrow, Peter J; Gosling, Samuel D
2003-06-01
The present research examined individual differences in music preferences. A series of 6 studies investigated lay beliefs about music, the structure underlying music preferences, and the links between music preferences and personality. The data indicated that people consider music an important aspect of their lives and listening to music an activity they engaged in frequently. Using multiple samples, methods, and geographic regions, analyses of the music preferences of over 3,500 individuals converged to reveal 4 music-preference dimensions: Reflective and Complex, Intense and Rebellious, Upbeat and Conventional, and Energetic and Rhythmic. Preferences for these music dimensions were related to a wide array of personality dimensions (e.g., Openness), self-views (e.g., political orientation), and cognitive abilities (e.g., verbal IQ).
Mixed mechanisms of multi-site phosphorylation
Suwanmajo, Thapanar; Krishnan, J.
2015-01-01
Multi-site phosphorylation is ubiquitous in cell biology and has been widely studied experimentally and theoretically. The underlying chemical modification mechanisms are typically assumed to be distributive or processive. In this paper, we study the behaviour of mixed mechanisms that can arise either because phosphorylation and dephosphorylation involve different mechanisms or because phosphorylation and/or dephosphorylation can occur through a combination of mechanisms. We examine a hierarchy of models to assess chemical information processing through different mixed mechanisms, using simulations, bifurcation analysis and analytical work. We demonstrate how mixed mechanisms can show important and unintuitive differences from pure distributive and processive mechanisms, in some cases resulting in monostable behaviour with simple dose–response curves, while in other cases generating new behaviour such as oscillations. Our results also suggest patterns of information processing that are relevant as the number of modification sites increases. Overall, our work creates a framework to examine information processing arising from the complexities of multi-site modification mechanisms and their impact on signal transduction. PMID:25972433
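As a heavily simplified sketch of the kind of model hierarchy studied (constant kinase/phosphatase activities and first-order rates are assumptions; the paper's models track enzyme-substrate complexes explicitly), a distributive two-site cycle can be written as linear ODEs over the unmodified (S0), singly (S1) and doubly (S2) phosphorylated forms:

```python
import numpy as np
from scipy.integrate import odeint

def two_site_distributive(s, t, kcat, kp):
    """s = [S0, S1, S2]; kcat = effective phosphorylation rate per step,
    kp = effective dephosphorylation rate per step (distributive both ways)."""
    s0, s1, s2 = s
    ds0 = kp * s1 - kcat * s0
    ds1 = kcat * s0 + kp * s2 - (kcat + kp) * s1
    ds2 = kcat * s1 - kp * s2
    return [ds0, ds1, ds2]

t = np.linspace(0, 50, 500)
traj = odeint(two_site_distributive, [1.0, 0.0, 0.0], t, args=(0.2, 0.1))
print("steady-state fractions [S0, S1, S2]:", traj[-1].round(3))
```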
Transmission Heterogeneity and Autoinoculation in a Multisite Infection Model of HPV
Brouwer, Andrew F.; Meza, Rafael; Eisenberg, Marisa C.
2015-01-01
The human papillomavirus (HPV) is sexually transmitted and can infect oral, genital, and anal sites in the human epithelium. Here, we develop a multisite transmission model that includes autoinoculation, to study HPV and other multisite diseases. Under a homogeneous-contacts assumption, we analyze the basic reproduction number R0, as well as type and target reproduction numbers, for a two-site model. In particular, we find that R0 occupies a space between taking the maximum of next generation matrix terms for same site transmission and taking the geometric average of cross-site transmission terms in such a way that heterogeneity in the same-site transmission rates increases R0 while heterogeneity in the cross-site transmission decreases it. Additionally, autoinoculation adds considerable complexity to the form of R0. We extend this analysis to a heterosexual population, which additionally yields dynamics analogous to those of vector–host models. We also examine how these issues of heterogeneity may affect disease control, using type and target reproduction numbers. PMID:26518265
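A minimal sketch of the reproduction-number computation described above (a 2x2 next-generation matrix with illustrative entries; autoinoculation is omitted for simplicity): R0 is the spectral radius of the next-generation matrix, and for equal same-site terms it equals the same-site term plus the geometric mean of the cross-site terms, so making the cross-site terms more heterogeneous at a fixed arithmetic mean lowers R0.

```python
import numpy as np

def r0(same_site, cross_site):
    """Spectral radius of a two-site next-generation matrix
    K = [[a1, b1], [b2, a2]] (same-site terms on the diagonal,
    cross-site terms off the diagonal)."""
    a1, a2 = same_site
    b1, b2 = cross_site
    K = np.array([[a1, b1], [b2, a2]])
    return max(abs(np.linalg.eigvals(K)))

# Hold the arithmetic mean of the cross-site terms at 1 and increase their
# heterogeneity: R0 falls, since it tracks the geometric mean sqrt(b1*b2).
for b1, b2 in [(1.0, 1.0), (1.5, 0.5), (1.9, 0.1)]:
    print((b1, b2), round(r0((0.5, 0.5), (b1, b2)), 3))
```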
RFWD3-Dependent Ubiquitination of RPA Regulates Repair at Stalled Replication Forks.
Elia, Andrew E H; Wang, David C; Willis, Nicholas A; Boardman, Alexander P; Hajdu, Ildiko; Adeyemi, Richard O; Lowry, Elizabeth; Gygi, Steven P; Scully, Ralph; Elledge, Stephen J
2015-10-15
We have used quantitative proteomics to profile ubiquitination in the DNA damage response (DDR). We demonstrate that RPA, which functions as a protein scaffold in the replication stress response, is multiply ubiquitinated upon replication fork stalling. Ubiquitination of RPA occurs on chromatin, involves sites outside its DNA binding channel, does not cause proteasomal degradation, and increases under conditions of fork collapse, suggesting a role in repair at stalled forks. We demonstrate that the E3 ligase RFWD3 mediates RPA ubiquitination. RFWD3 is necessary for replication fork restart, normal repair kinetics during replication stress, and homologous recombination (HR) at stalled replication forks. Mutational analysis suggests that multisite ubiquitination of the entire RPA complex is responsible for repair at stalled forks. Multisite protein group sumoylation is known to promote HR in yeast. Our findings reveal a similar requirement for multisite protein group ubiquitination during HR at stalled forks in mammalian cells. Copyright © 2015 Elsevier Inc. All rights reserved.
Creative Activities in Music--A Genome-Wide Linkage Analysis.
Oikkonen, Jaana; Kuusi, Tuire; Peltonen, Petri; Raijas, Pirre; Ukkola-Vuoti, Liisa; Karma, Kai; Onkamo, Päivi; Järvelä, Irma
2016-01-01
Creative activities in music represent a complex cognitive function of the human brain, whose biological basis is largely unknown. In order to elucidate the biological background of creative activities in music, we performed genome-wide linkage and linkage disequilibrium (LD) scans in musically experienced individuals characterised for self-reported composing, arranging and non-music-related creativity. The participants consisted of 474 individuals from 79 families, and 103 sporadic individuals. We found promising evidence for linkage at 16p12.1-q12.1 for arranging (LOD 2.75, 120 cases), 4q22.1 for composing (LOD 2.15, 103 cases) and Xp11.23 for non-music-related creativity (LOD 2.50, 259 cases). Surprisingly, statistically significant evidence for linkage was found for the opposite phenotype of creative activity in music (neither composing nor arranging; NCNA) at 18q21 (LOD 3.09, 149 cases), which contains cadherin genes like CDH7 and CDH19. The locus at 4q22.1 overlaps the previously identified region for musical aptitude, music perception and performance, giving further support for this region as a candidate region for a broad range of music-related traits. The other regions at 18q21 and 16p12.1-q12.1 are also adjacent to the previously identified loci for musical aptitude. Pathway analysis of the genes suggestively associated with composing suggested an overrepresentation of the cerebellar long-term depression (LTD) pathway, which is a cellular model for synaptic plasticity. The LTD pathway also includes cadherins and AMPA receptors, whose component GSG1L was linked to arranging. These results suggest that molecular pathways linked to memory and learning via LTD affect music-related creative behaviour. Musical creativity is a complex phenotype for which a common background with musicality and intelligence has been proposed. Here, we implicate genetic regions affecting music-related creative behaviour, which also include genes with neuropsychiatric associations. We also propose a common genetic background for music-related creative behaviour and musical abilities at chromosome 4.
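For orientation only, the LOD scores quoted above compare the likelihood of the observed co-segregation at a given recombination fraction against free recombination (theta = 0.5). A textbook two-point, phase-known version with hypothetical counts (not the study's multipoint, family-based computation) looks like this:

```python
import numpy as np

def lod(recombinants, non_recombinants, theta):
    """Two-point LOD score for phase-known meioses:
    log10 of L(theta) / L(theta = 0.5)."""
    n = recombinants + non_recombinants
    log_l_theta = recombinants * np.log10(theta) + non_recombinants * np.log10(1 - theta)
    log_l_null = n * np.log10(0.5)
    return log_l_theta - log_l_null

# Hypothetical counts: 2 recombinants out of 20 informative meioses.
thetas = np.linspace(0.01, 0.49, 49)
scores = lod(2, 18, thetas)
print("max LOD:", round(scores.max(), 2), "at theta =", round(thetas[scores.argmax()], 2))
```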
Music cognition and the cognitive sciences.
Pearce, Marcus; Rohrmeier, Martin
2012-10-01
Why should music be of interest to cognitive scientists, and what role does it play in human cognition? We review three factors that make music an important topic for cognitive scientific research. First, music is a universal human trait fulfilling crucial roles in everyday life. Second, music has an important part to play in ontogenetic development and human evolution. Third, appreciating and producing music simultaneously engage many complex perceptual, cognitive, and emotional processes, rendering music an ideal object for studying the mind. We propose an integrated status for music cognition in the Cognitive Sciences and conclude by reviewing challenges and big questions in the field and the way in which these reflect recent developments. Copyright © 2012 Cognitive Science Society, Inc.
[Suicidality and musical preferences: a possible link?].
Mikolajczak, Gladys; Desseilles, Martin
2012-01-01
Music is an important part of young people's lives. In this article, we attempt to answer two questions on the links between music and suicide. First, we examine whether certain types of music favor the suicidal process (ideation and acting out); second, we examine whether music can constitute a tool to reduce the risk of suicide. Several factors possibly involved in the links between musical preferences and the suicidal process are developed: the Velten effect and the musical mood induction procedure, identification and learning by imitation, media influence, as well as individual characteristics. A multifactor approach is necessary to understand the complex and bidirectional links that unite musical preferences and suicide risk.
Walker, Robrina; Morris, David W; Greer, Tracy L; Trivedi, Madhukar H
2014-01-01
Descriptions of and recommendations for meeting the challenges of training research staff for multisite studies are limited despite the recognized importance of training on trial outcomes. The STRIDE (STimulant Reduction Intervention using Dosed Exercise) study is a multisite randomized clinical trial that was conducted at nine addiction treatment programs across the United States within the National Drug Abuse Treatment Clinical Trials Network (CTN) and evaluated the addition of exercise to addiction treatment as usual (TAU), compared to health education added to TAU, for individuals with stimulant abuse or dependence. Research staff administered a variety of measures that required a range of interviewing, technical, and clinical skills. In order to address the absence of information on how research staff are trained for multisite clinical studies, the current manuscript describes the conceptual process of training and certifying research assistants for STRIDE. Training was conducted using a three-stage process to allow staff sufficient time for distributive learning, practice, and calibration leading up to implementation of this complex study. Training was successfully implemented with staff across nine sites. Staff demonstrated evidence of study and procedural knowledge via quizzes and skill demonstration on six measures requiring certification. Overall, while the majority of staff had little to no experience in the six measures, all research assistants demonstrated ability to correctly and reliably administer the measures throughout the study. Practical recommendations are provided for training research staff and are particularly applicable to the challenges encountered with large, multisite trials.
Aubol, Brandon E.; Adams, Joseph A.
2011-01-01
To investigate how a protein kinase interacts with its protein substrate during extended, multi-site phosphorylation, the kinetic mechanism of a protein kinase involved in mRNA splicing control was investigated using rapid quench flow techniques. The protein kinase SRPK1 phosphorylates approximately 10 serines in the arginine-serine-rich domain (RS domain) of the SR protein SRSF1 in a C-to-N-terminal direction, a modification that directs this essential splicing factor from the cytoplasm to the nucleus. Transient-state kinetic experiments illustrate that the first phosphate is added rapidly onto the RS domain of SRSF1 (t1/2 = 0.1 sec) followed by slower, multi-site phosphorylation at the remaining serines (t1/2 = 15 sec). Mutagenesis experiments suggest that efficient phosphorylation rates are maintained by an extensive hydrogen bonding and electrostatic network between the RS domain of the SR protein and the active site and docking groove of the kinase. Catalytic trapping and viscosometric experiments demonstrate that while the phosphoryl transfer step is fast, ADP release limits multi-site phosphorylation. By studying phosphate incorporation into selectively pre-phosphorylated forms of the enzyme-substrate complex, the kinetic mechanism for site-specific phosphorylation along the reaction coordinate was assessed. The binding affinity of the SR protein, the phosphoryl transfer rate and ADP exchange rate were found to decline significantly as a function of progressive phosphorylation in the RS domain. These findings indicate that the protein substrate actively modulates initiation, extension and termination events associated with prolonged, multi-site phosphorylation. PMID:21728354
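For first-order steps, the half-lives quoted above translate directly into rate constants via k = ln 2 / t1/2, which makes the roughly 150-fold gap between the initiation and extension phases explicit (a back-of-the-envelope conversion, not the authors' fitted kinetic parameters):

```python
import math

def rate_from_half_life(t_half):
    """First-order rate constant (s^-1) from a half-life in seconds."""
    return math.log(2) / t_half

k_init = rate_from_half_life(0.1)    # first phosphate, t1/2 = 0.1 s
k_ext = rate_from_half_life(15.0)    # remaining serines, t1/2 = 15 s
print(round(k_init, 2), round(k_ext, 4), round(k_init / k_ext, 1))   # ~6.93, ~0.0462, ~150.0
```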
Sparse array angle estimation using reduced-dimension ESPRIT-MUSIC in MIMO radar.
Zhang, Chaozhu; Pang, Yucai
2013-01-01
Sparse linear arrays provide better performance than filled linear arrays in terms of angle estimation and resolution, with reduced size and low cost. However, they are subject to manifold ambiguity. In this paper, both the transmit array and the receive array are sparse linear arrays in a bistatic MIMO radar. First, we present an ESPRIT-MUSIC method in which the ESPRIT algorithm is used to obtain ambiguous angle estimates. The disambiguation algorithm uses a MUSIC-based procedure to identify the true direction cosine estimate from a set of ambiguous candidate estimates. The paired transmit and receive angles can be estimated and the manifold ambiguity can be resolved. However, the proposed algorithm has high computational complexity due to the requirement of a two-dimensional search. Further, the Reduced-Dimension ESPRIT-MUSIC (RD-ESPRIT-MUSIC) is proposed to reduce the complexity of the algorithm; the RD-ESPRIT-MUSIC demands only a one-dimensional search. Simulation results demonstrate the effectiveness of the method.
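The MUSIC-based disambiguation step rests on the standard MUSIC pseudospectrum. A minimal single-array, one-dimensional sketch is shown below (uniform snapshots, known source count, no ESPRIT stage and no MIMO transmit/receive structure; array geometry, angles and noise level are illustrative assumptions).

```python
import numpy as np

def music_spectrum(R, n_sources, element_positions, angles_deg):
    """Classic MUSIC pseudospectrum 1 / ||E_n^H a(theta)||^2 for an array
    whose element positions are given in half-wavelength units."""
    _, vecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = vecs[:, : R.shape[0] - n_sources]       # noise-subspace eigenvectors
    theta = np.deg2rad(np.asarray(angles_deg))
    A = np.exp(1j * np.pi * np.outer(element_positions, np.sin(theta)))
    proj = En.conj().T @ A
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# Toy scenario: one sparse 6-element array, two sources at -20 and 35 degrees.
rng = np.random.default_rng(1)
pos = np.array([0, 1, 4, 7, 9, 12])              # element positions (half-wavelength units)
true_deg = np.array([-20.0, 35.0])
A_true = np.exp(1j * np.pi * np.outer(pos, np.sin(np.deg2rad(true_deg))))
S = rng.standard_normal((2, 400)) + 1j * rng.standard_normal((2, 400))
noise = 0.1 * (rng.standard_normal((6, 400)) + 1j * rng.standard_normal((6, 400)))
X = A_true @ S + noise
R = X @ X.conj().T / X.shape[1]                  # sample covariance matrix

grid = np.arange(-90.0, 90.0, 0.25)
P = music_spectrum(R, n_sources=2, element_positions=pos, angles_deg=grid)
peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
best = peaks[np.argsort(P[peaks])[-2:]]
print("estimated angles (deg):", np.sort(grid[best]))
```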
The influence of music on mental effort and driving performance.
Ünal, Ayça Berfu; Steg, Linda; Epstude, Kai
2012-09-01
The current research examined the influence of loud music on driving performance, and whether mental effort mediated this effect. Participants (N = 69) drove in a driving simulator either with or without listening to music. In order to test whether music would have similar effects on driving performance in different situations, we manipulated the simulated traffic environment such that the driving context consisted of both complex and monotonous driving situations. In addition, we systematically kept track of drivers' mental load by having participants verbally report their mental effort at certain moments while driving. We found that listening to music increased mental effort while driving, irrespective of whether the driving situation was complex or monotonous, supporting the general assumption that music can be a distracting auditory stimulus while driving. However, drivers who listened to music performed as well as drivers who did not, indicating that music did not impair their driving performance. Importantly, the increases in mental effort while listening to music indicate that drivers try to regulate their mental effort as a cognitive compensatory strategy to deal with task demands. Interestingly, we observed significant improvements in driving performance in two of the driving situations. Mental effort may thus mediate the effect of music on driving performance in situations requiring sustained attention. Other process variables, such as arousal and boredom, should also be incorporated into study designs in order to reveal more about how music affects driving. Copyright © 2012 Elsevier Ltd. All rights reserved.
Synchronization in human musical rhythms and mutually interacting complex systems
Hennig, Holger
2014-01-01
Though the music produced by an ensemble is influenced by multiple factors, including musical genre, musician skill, and individual interpretation, rhythmic synchronization is at the foundation of musical interaction. Here, we study the statistical nature of the mutual interaction between two humans synchronizing rhythms. We find that the interbeat intervals of both laypeople and professional musicians exhibit scale-free (power law) cross-correlations. Surprisingly, the next beat to be played by one person is dependent on the entire history of the other person’s interbeat intervals on timescales up to several minutes. To understand this finding, we propose a general stochastic model for mutually interacting complex systems, which suggests a physiologically motivated explanation for the occurrence of scale-free cross-correlations. We show that the observed long-term memory phenomenon in rhythmic synchronization can be imitated by fractal coupling of separately recorded or synthesized audio tracks and thus applied in electronic music. Though this study provides an understanding of fundamental characteristics of timing and synchronization at the interbrain level, the mutually interacting complex systems model may also be applied to study the dynamics of other complex systems where scale-free cross-correlations have been observed, including econophysics, physiological time series, and collective behavior of animal flocks. PMID:25114228
Music therapy applied to complex blast injury in interdisciplinary care: a case report.
Vaudreuil, Rebecca; Avila, Luis; Bradt, Joke; Pasquina, Paul
2018-04-24
Music therapy has a long history of treating the physiological, psychological, and neurological injuries of war. Recently, there has been an increase in the use of music therapy and other creative arts therapies in the care of combat-injured service members returning to the United States from Iraq and Afghanistan, especially those with complex blast-related injuries. This case report describes the role of music therapy in the interdisciplinary rehabilitation of a severely injured service member. Music therapy was provided as stand-alone treatment and in co-treatment with speech-language pathology, physical therapy, and occupational therapy. The report is based on clinical notes, self-reports by the patient and his wife, and interviews with rehabilitation team members. In collaboration with other treatment disciplines, music therapy contributed to improvements in range of motion, functional use of bilateral upper extremities, strength endurance, breath support, articulation, task attention, compensatory strategies, social integration, quality of life, and overall motivation in the recovery process. The inclusion of music therapy in rehabilitation was highly valued by the patient, his family, and the treatment team. Music therapy has optimized the rehabilitation of a service member by assisting the recovery process on a continuum from clinic to community. Implications for Rehabilitation: Music therapy in stand-alone sessions and in co-treatment with traditional disciplines can enhance treatment outcomes in the functional domains of motor, speech, cognition, social integration, and quality of life for military populations. Music therapists can help ease discomfort and difficulty associated with rehabilitation activities, thereby enhancing patient motivation and participation in interdisciplinary care. Music therapy assists treatment processes from clinic to community, making it highly valued by the patient, family, and interdisciplinary team members in military healthcare. Music therapy provides a platform to prevent social isolation by promoting community integration through music performance.
Einarsson, Anna; Ziemke, Tom
2017-01-01
The question motivating the work presented here, starting from a view of music as embodied and situated activity, is how we can account for the complexity of interactive music performance situations. These are situations in which human performers interact with responsive technologies, such as sensor-driven technology or sound synthesis affected by analysis of the performed sound signal. This requires investigating in detail the underlying mechanisms, but also providing a more holistic approach that does not lose track of the complex whole constituted by the interactions and relationships of composers, performers, audience, technologies, etc. The concept of affordances has frequently been invoked in musical research, which has seen a "bodily turn" in recent years, similar to the development of the embodied cognition approach in the cognitive sciences. We therefore begin by broadly delineating its usage in the cognitive sciences in general, and in music research in particular. We argue that what is still missing in the discourse on musical affordances is an encompassing theoretical framework incorporating the sociocultural dimensions that are fundamental to the situatedness and embodiment of interactive music performance and composition. We further argue that the cultural affordances framework, proposed by Rietveld and Kiverstein (2014) and recently articulated further by Ramstead et al. (2016) in this journal, although not previously applied to music, constitutes a promising starting point. It captures and elucidates this complex web of relationships in terms of shared landscapes and individual fields of affordances. We illustrate this with examples primarily from the first author's artistic work as composer and performer of interactive music. This sheds new light on musical composition as a process of construction (and embodied mental simulation) of situations, guiding the performers' and audience's attention in shifting fields of affordances. More generally, we believe that the theoretical perspectives and concrete examples discussed in this paper help to elucidate how situations, and with them affordances, are dynamically constructed through the interactions of various mechanisms as people engage in embodied and situated activity.
Evaluating musical instruments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, D. Murray
Scientific measurements of sound generation and radiation by musical instruments are surprisingly hard to correlate with the subtle and complex judgments of instrumental quality made by expert musicians.
Sound Richness of Music Might Be Mediated by Color Perception: A PET Study.
Satoh, Masayuki; Nagata, Ken; Tomimoto, Hidekazu
2015-01-01
We investigated the role of the fusiform cortex in music processing with the use of PET, focusing on the perception of sound richness. Musically naïve subjects listened to familiar melodies with three kinds of accompaniments: (i) an accompaniment composed of only three basic chords (chord condition), (ii) a simple accompaniment typically used in traditional music textbooks in elementary school (simple condition), and (iii) an accompaniment with rich and flowery sounds composed by a professional composer (complex condition). Using a PET subtraction technique, we studied changes in regional cerebral blood flow (rCBF) in simple minus chord, complex minus simple, and complex minus chord conditions. The simple minus chord, complex minus simple, and complex minus chord conditions regularly showed increases in rCBF at the posterior portion of the inferior temporal gyrus, including the lateral occipital complex (LOC) and fusiform gyrus. We may conclude that certain association cortices such as the LOC and the fusiform cortex may represent centers of multisensory integration, with foreground and background segregation occurring at the LOC level and the recognition of richness and floweriness of stimuli occurring in the fusiform cortex, both in terms of vision and audition.
A Design Principle for an Autonomous Post-translational Pattern Formation.
Sugai, Shuhei S; Ode, Koji L; Ueda, Hiroki R
2017-04-25
Previous autonomous pattern-formation models often assumed complex molecular and cellular networks. This theoretical study, however, shows that a system composed of one substrate with multisite phosphorylation and a pair of kinase and phosphatase can generate autonomous spatial information, including complex stripe patterns. All (de-)phosphorylation reactions are described with a generic Michaelis-Menten scheme, and all species freely diffuse without pre-existing gradients. Computational simulation over >23,000,000 randomly generated parameter sets revealed the design motifs of cyclic reaction and enzyme sequestration by slow-diffusing substrates. These motifs constitute short-range positive and long-range negative feedback loops to induce Turing instability. The width and height of spatial patterns can be controlled independently by distinct reaction-diffusion processes. Therefore, multisite reversible post-translational modification can be a ubiquitous source for various patterns without requiring other complex regulations such as autocatalytic regulation of enzymes, and is applicable to molecular mechanisms for inducing subcellular localization of proteins driven by post-translational modifications. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
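To make the generic scheme concrete, the sketch below integrates a one-dimensional two-state substrate (unphosphorylated/phosphorylated) with Michaelis-Menten (de-)phosphorylation and free diffusion, using an explicit finite-difference scheme. The rate constants, diffusion coefficients, and grid are illustrative assumptions; this minimal two-state version lacks the cyclic-reaction and enzyme-sequestration motifs identified in the paper, so it only shows the numerical scaffolding, not a pattern-forming parameter set.

```python
import numpy as np

def laplacian_periodic(u, dx):
    # discrete Laplacian with periodic boundaries
    return (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx**2

# Illustrative parameters (not drawn from the paper's parameter search)
n_grid, dx, dt = 256, 0.4, 0.01
D0, D1 = 1.0, 0.05                 # slower diffusion for the modified form (assumption)
Vk, Kk = 1.0, 0.2                  # kinase Vmax and Km
Vp, Kp = 0.8, 0.2                  # phosphatase Vmax and Km

rng = np.random.default_rng(1)
S0 = 1.0 + 0.01 * rng.standard_normal(n_grid)    # unphosphorylated substrate
S1 = 0.5 + 0.01 * rng.standard_normal(n_grid)    # phosphorylated substrate

for _ in range(20000):
    phos = Vk * S0 / (Kk + S0)       # Michaelis-Menten phosphorylation rate
    dephos = Vp * S1 / (Kp + S1)     # Michaelis-Menten dephosphorylation rate
    S0 += dt * (D0 * laplacian_periodic(S0, dx) - phos + dephos)
    S1 += dt * (D1 * laplacian_periodic(S1, dx) + phos - dephos)

# Total substrate is conserved; spatial variation of S1 reports any emerging structure.
print("total substrate:", round((S0 + S1).mean(), 3), "spatial std of S1:", round(S1.std(), 4))
```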
Kinson, Rochelle Melina; Lim, Wen Phei; Rahman, Habeebul
2015-01-01
Musical hallucinations are a rare phenomenon that renders appropriate identification and treatment a challenge. This case series describes three women who presented with hearing complex, familiar melodies in the absence of external stimuli on a background of hearing impairment.
ERIC Educational Resources Information Center
Rajan, Rekha S.
2010-01-01
Providing opportunity for musical exploration is essential to any early childhood program. Through music making, children are actively engaged with their senses: they listen to the complex sounds around them, move their bodies to the rhythms, and touch and feel the textures and shapes of the instruments. The inimitable strength of the Montessori…
A comparative phylogenetic study of genetics and folk music.
Pamjav, Horolma; Juhász, Zoltán; Zalán, Andrea; Németh, Endre; Damdin, Bayarlkhagva
2012-04-01
Computer-aided comparison of folk music from different nations is one of the newest research areas. We were intrigued to have identified some important similarities between phylogenetic studies and modern folk music. First of all, both of them use similar concepts and representation tools such as multidimensional scaling for modelling relationships between populations. This gave us the idea to investigate whether these connections are merely accidental or if they mirror population migrations from the past. We raised the question: does the complex structure of musical connections display a clear picture and can this system be interpreted by genetic analysis? This study is the first to systematically investigate the incidental genetic background of the folk music context between different populations. Paternal (42 populations) and maternal lineages (56 populations) were compared based on Fst genetic distances of the Y chromosomal and mtDNA haplogroup frequencies. To test this hypothesis, the corresponding musical cultures were also compared using an automatic overlap analysis of parallel melody styles for 31 Eurasian nations. We found that close musical relations of populations indicate close genetic distances (<0.05) with a probability of 82%. It was observed that there is a significant correlation between population genetics and folk music; maternal lineages have a more important role in folk music traditions than paternal lineages. Furthermore, the combination of these disciplines, establishing a new interdisciplinary research field of "music-genetics", can be an efficient tool to get a more comprehensive picture of the complex behaviour of populations in prehistoric time.
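The core comparison here is between two pairwise population distance matrices, one musical and one genetic (Fst). As a hedged illustration of that kind of comparison, not the authors' actual statistic or data, the sketch below correlates the upper triangles of two hypothetical distance matrices and adds a Mantel-style permutation test.

```python
import numpy as np

rng = np.random.default_rng(2)

def upper_triangle(d):
    i, j = np.triu_indices_from(d, k=1)
    return d[i, j]

def random_symmetric(n):
    a = rng.random((n, n))
    d = (a + a.T) / 2.0
    np.fill_diagonal(d, 0.0)
    return d

# Hypothetical distances among 8 populations: one matrix from folk-music overlap,
# one from Fst haplogroup-frequency distances (values are made up for illustration).
music_dist = random_symmetric(8)
genetic_dist = 0.7 * music_dist + 0.3 * random_symmetric(8)

m, g = upper_triangle(music_dist), upper_triangle(genetic_dist)
r_obs = np.corrcoef(m, g)[0, 1]

# Mantel-style significance: permute population labels of one matrix.
perm_r = []
for _ in range(999):
    p = rng.permutation(8)
    perm_r.append(np.corrcoef(m, upper_triangle(genetic_dist[np.ix_(p, p)]))[0, 1])
p_value = (np.sum(np.array(perm_r) >= r_obs) + 1) / 1000.0
print(f"matrix correlation r = {r_obs:.2f}, one-sided p = {p_value:.3f}")
```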
Technological, biological, and acoustical constraints to music perception in cochlear implant users.
Limb, Charles J; Roy, Alexis T
2014-02-01
Despite advances in technology, the ability to perceive music remains limited for many cochlear implant users. This paper reviews the technological, biological, and acoustical constraints that make music an especially challenging stimulus for cochlear implant users, while highlighting recent research efforts to overcome these shortcomings. The limitations of cochlear implant devices, which have been optimized for speech comprehension, become evident when applied to music, particularly with regards to inadequate spectral, fine-temporal, and dynamic range representation. Beyond the impoverished information transmitted by the device itself, both peripheral and central auditory nervous system deficits are seen in the presence of sensorineural hearing loss, such as auditory nerve degeneration and abnormal auditory cortex activation. These technological and biological constraints to effective music perception are further compounded by the complexity of the acoustical features of music itself that require the perceptual integration of varying rhythmic, melodic, harmonic, and timbral elements of sound. Cochlear implant users not only have difficulty perceiving spectral components individually (leading to fundamental disruptions in perception of pitch, melody, and harmony) but also display deficits with higher perceptual integration tasks required for music perception, such as auditory stream segregation. Despite these current limitations, focused musical training programs, new assessment methods, and improvements in the representation and transmission of the complex acoustical features of music through technological innovation offer the potential for significant advancements in cochlear implant-mediated music perception. Copyright © 2013 Elsevier B.V. All rights reserved.
The biology and evolution of music: a comparative perspective.
Fitch, W Tecumseh
2006-05-01
Studies of the biology of music (as of language) are highly interdisciplinary and demand the integration of diverse strands of evidence. In this paper, I present a comparative perspective on the biology and evolution of music, stressing the value of comparisons both with human language, and with those animal communication systems traditionally termed "song". A comparison of the "design features" of music with those of language reveals substantial overlap, along with some important differences. Most of these differences appear to stem from semantic, rather than structural, factors, suggesting a shared formal core of music and language. I next review various animal communication systems that appear related to human music, either by analogy (bird and whale "song") or potential homology (great ape bimanual drumming). A crucial comparative distinction is between learned, complex signals (like language, music and birdsong) and unlearned signals (like laughter, ape calls, or bird calls). While human vocalizations clearly build upon an acoustic and emotional foundation shared with other primates and mammals, vocal learning has evolved independently in our species since our divergence from chimpanzees. The convergent evolution of vocal learning in other species offers a powerful window into psychological and neural constraints influencing the evolution of complex signaling systems (including both song and speech), while ape drumming presents a fascinating potential homology with human instrumental music. I next discuss the archeological data relevant to music evolution, concluding on the basis of prehistoric bone flutes that instrumental music is at least 40,000 years old, and perhaps much older. I end with a brief review of adaptive functions proposed for music, concluding that no one selective force (e.g., sexual selection) is adequate to explain all aspects of human music. I suggest that questions about the past function of music are unlikely to be answered definitively and are thus a poor choice as a research focus for biomusicology. In contrast, a comparative approach to music promises rich dividends for our future understanding of the biology and evolution of music.
Direction Finding in the Presence of Complex Electro-Magnetic Environment.
1995-06-29
coupling adversely affects the resolution capabilities of the MUSIC algorithm. A technique utilizing the terminal impedance matrix is devised to ... performance of the MUSIC algorithm is also investigated. Interference power, as little as 15 dB below the signal power, from the near-field scatterer greatly reduces the resolution capabilities of the MUSIC algorithm. A new array configuration is devised to suppress the interference. Modification of the MUSIC ...
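For context, the MUSIC (MUltiple SIgnal Classification) algorithm discussed in this report estimates source directions from the noise subspace of the array covariance matrix. The sketch below is a textbook narrowband implementation for an ideal uniform linear array with white noise and no mutual coupling or near-field scatterers, i.e. the baseline whose degradation the report studies; the array geometry, source angles, and SNR are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """Uniform linear array response; element spacing in wavelengths."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * spacing * np.arange(n_sensors) * np.sin(theta))

def music_pseudospectrum(snapshots, n_sources, angles):
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]     # sample covariance
    _, eigvecs = np.linalg.eigh(R)                              # eigenvalues ascending
    En = eigvecs[:, : snapshots.shape[0] - n_sources]           # noise subspace
    spec = []
    for a in angles:
        v = steering_vector(a, snapshots.shape[0])
        spec.append(1.0 / np.real(v.conj() @ En @ En.conj().T @ v))
    return np.array(spec)

# Two narrowband sources at -20 and 25 degrees on an 8-element half-wavelength array.
rng = np.random.default_rng(3)
n_sensors, n_snapshots = 8, 200
A = np.column_stack([steering_vector(a, n_sensors) for a in (-20.0, 25.0)])
S = (rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = A @ S + noise

angles = np.arange(-90.0, 90.0, 0.5)
spectrum = music_pseudospectrum(X, n_sources=2, angles=angles)
peaks, _ = find_peaks(spectrum)
best = peaks[np.argsort(spectrum[peaks])[-2:]]
print("estimated directions (deg):", np.sort(angles[best]))
```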
ERIC Educational Resources Information Center
Moats, Stacie; Poxon, Stephanie
2011-01-01
It seems each generation of young people finds new ways to express life's complex emotions and experiences through music. As a favored outlet for self-expression, music also provides future generations with a fascinating historical record. Sound recordings and sheet music of once popular songs offer unique opportunities for students to analyze…
James, Clara E.; Oechslin, Mathias S.; Michel, Christoph M.; De Pretto, Michael
2017-01-01
This original research focused on the effect of musical training intensity on cerebral and behavioral processing of complex music using high-density event-related potential (ERP) approaches. Recently we have been able to show progressive changes with training in gray and white matter, and higher order brain functioning using (f)MRI [(functional) Magnetic Resonance Imaging], as well as changes in musical and general cognitive functioning. The current study investigated the same population of non-musicians, amateur pianists and expert pianists using spatio-temporal ERP analysis, by means of microstate analysis, and ERP source imaging. The stimuli consisted of complex musical compositions containing three levels of transgression of musical syntax at closure that participants appraised. ERP waveforms, microstates and underlying brain sources revealed gradual differences according to musical expertise in a 300–500 ms window after the onset of the terminal chords of the pieces. Within this time-window, processing seemed to concern context-based memory updating, indicated by a P3b-like component or microstate for which underlying sources were localized in the right middle temporal gyrus, anterior cingulate and right parahippocampal areas. Given that the 3 expertise groups were carefully matched for demographic factors, these results provide evidence of the progressive impact of training on brain and behavior. PMID:29163017
Teaching and Learning Music Composition in Primary School Settings
ERIC Educational Resources Information Center
Saetre, Jon Helge
2011-01-01
The field of creative music education is described as complex, unpredictable and versatile. The aim of this article is to explore this complexity and diversity on the basis of the continental "didaktik" tradition and Robin Alexander's generic model of teaching. The educational practice and orientation of the teacher are investigated in relation to…
How does the brain process music?
Warren, Jason
2008-02-01
The organisation of the musical brain is a major focus of interest in contemporary neuroscience. This reflects the increasing sophistication of tools (especially imaging techniques) to examine brain anatomy and function in health and disease, and the recognition that music provides unique insights into a number of aspects of nonverbal brain function. The emerging picture is complex but coherent, and moves beyond older ideas of music as the province of a single brain area or hemisphere to the concept of music as a 'whole-brain' phenomenon. Music engages a distributed set of cortical modules that process different perceptual, cognitive and emotional components with varying selectivity. 'Why' rather than 'how' the brain processes music is a key challenge for the future.
Increase in Synchronization of Autonomic Rhythms between Individuals When Listening to Music
Bernardi, Nicolò F.; Codrons, Erwan; di Leo, Rita; Vandoni, Matteo; Cavallaro, Filippo; Vita, Giuseppe; Bernardi, Luciano
2017-01-01
In light of theories postulating a role for music in forming emotional and social bonds, here we investigated whether endogenous rhythms synchronize between multiple individuals when listening to music. Cardiovascular and respiratory recordings were taken from multiple individuals (musically trained or music-naïve) simultaneously, at rest and during a live concert comprising music excerpts with varying degrees of complexity of the acoustic envelope. Inter-individual synchronization of cardiorespiratory rhythms showed a subtle but reliable increase during passive listening to music compared to baseline. The low-level auditory features of the music were largely responsible for creating or disrupting such synchronism, explaining ~80% of its variance, over and beyond subjective musical preferences and previous musical training. Listening to simple rhythms and melodies, which largely dominate the choice of music during rituals and mass events, brings individuals together in terms of their physiological rhythms, which could explain why music is widely used to favor social bonds. PMID:29089898
Becoming musically enculturated: effects of music classes for infants on brain and behavior.
Trainor, Laurel J; Marie, Céline; Gerry, David; Whiskin, Elaine; Unrau, Andrea
2012-04-01
Musical enculturation is a complex, multifaceted process that includes the development of perceptual processing specialized for the pitch and rhythmic structures of the musical system in the culture, understanding of esthetic and expressive norms, and learning the pragmatic uses of music in different social situations. Here, we summarize the results of a study in which 6-month-old Western infants were randomly assigned to 6 months of either an active participatory music class or a class in which they experienced music passively while playing. Active music participation resulted in earlier enculturation to Western tonal pitch structure, larger and/or earlier brain responses to musical tones, and a more positive social trajectory. Furthermore, the data suggest that early exposure to cultural norms of musical expression leads to early preferences for those norms. We conclude that musical enculturation begins in infancy and that active participatory music making in a positive social setting accelerates enculturation. © 2012 New York Academy of Sciences.
Music preferences of mechanically ventilated patients participating in a randomized controlled trial
Heiderscheit, Annie; Breckenridge, Stephanie J.; Chlan, Linda L.; Savik, Kay
2014-01-01
Mechanical ventilation (MV) is a life-saving measure and supportive modality utilized to treat patients experiencing respiratory failure. Patients experience pain, discomfort, and anxiety as a result of being mechanically ventilated. Music listening is a non-pharmacological intervention used to manage these psychophysiological symptoms associated with mechanical ventilation. The purpose of this secondary analysis was to examine music preferences of 107 MV patients enrolled in a randomized clinical trial that implemented a patient-directed music listening protocol to help manage the psychophysiological symptom of anxiety. Music data presented includes the music genres and instrumentation patients identified as their preferred music. Genres preferred include: classical, jazz, rock, country, and oldies. Instrumentation preferred include: piano, voice, guitar, music with nature sounds, and orchestral music. Analysis of three patients’ preferred music received throughout the course of the study is illustrated to demonstrate the complexity of assessing MV patients and the need for an ongoing assessment process. PMID:25574992
Cortical Sensitivity to Guitar Note Patterns: EEG Entrainment to Repetition and Key.
Bridwell, David A; Leslie, Emily; McCoy, Dakarai Q; Plis, Sergey M; Calhoun, Vince D
2017-01-01
Music is ubiquitous throughout recent human culture, and many individuals have an innate ability to appreciate and understand music. Our appreciation of music likely emerges from the brain's ability to process a series of repeated complex acoustic patterns. In order to understand these processes further, cortical responses were measured to a series of guitar notes presented with a musical pattern or without a pattern. ERP responses to individual notes were measured using a 24-electrode Bluetooth mobile EEG system (Smarting mBrainTrain) while 13 healthy non-musicians listened to structured (i.e., within musical keys and with repetition) or random sequences of guitar notes for 10 min each. We demonstrate an increased amplitude of the ERP that appears ~200 ms after notes presented within the musical sequence. This amplitude difference between random notes and patterned notes likely reflects individuals' cortical sensitivity to guitar note patterns. These amplitudes were compared to ERP responses to a rare note embedded within a stream of frequent notes to determine whether the sensitivity to complex musical structure overlaps with the sensitivity to simple irregularities reflected in traditional auditory oddball experiments. Response amplitudes to the negative peak at ~175 ms are statistically correlated with the mismatch negativity (MMN) response measured to a rare note presented among a series of frequent notes (i.e., in a traditional oddball sequence), but responses to the subsequent positive peak at ~200 ms do not show a statistical relationship with the P300 response. Thus, the sensitivity to musical structure identified for 4 Hz note patterns appears somewhat distinct from the sensitivity to statistical regularities reflected in the traditional "auditory oddball" sequence. Overall, we suggest that this is a promising approach to examine individuals' sensitivity to complex acoustic patterns, which may overlap with higher level cognitive processes, including language.
Music enjoyment with cochlear implantation.
Prevoteau, Charlotte; Chen, Stephanie Y; Lalwani, Anil K
2018-10-01
Since the advent of cochlear implant (CI) surgery in the 1960s, there have been remarkable technological and surgical advances enabling excellent speech perception in quiet with many CI users able to use the telephone. However, many CI users struggle with music perception, particularly with the pitch-based and melodic elements of music. Yet remarkably, despite poor music perception, many CI users enjoy listening to music based on self-report questionnaires, and prospective studies have suggested a disassociation between music perception and enjoyment. Music enjoyment is arguably a more functional measure of one's listening experience, and thus enhancing one's listening experience is a worthy goal. Recent studies have shown that re-engineering music to reduce its complexity may enhance enjoyment in CI users and also delineate differences in musical preferences from normal hearing listeners. Copyright © 2017 Elsevier B.V. All rights reserved.
Multi-Variate EEG Analysis as a Novel Tool to Examine Brain Responses to Naturalistic Music Stimuli
Sturm, Irene; Dähne, Sven; Blankertz, Benjamin; Curio, Gabriel
2015-01-01
Note onsets in music are acoustic landmarks providing auditory cues that underlie the perception of more complex phenomena such as beat, rhythm, and meter. For naturalistic ongoing sounds a detailed view on the neural representation of onset structure is hard to obtain, since, typically, stimulus-related EEG signatures are derived by averaging a high number of identical stimulus presentations. Here, we propose a novel multivariate regression-based method extracting onset-related brain responses from the ongoing EEG. We analyse EEG recordings of nine subjects who passively listened to stimuli from various sound categories encompassing simple tone sequences, full-length romantic piano pieces and natural (non-music) soundscapes. The regression approach reduces the 61-channel EEG to one time course optimally reflecting note onsets. The neural signatures derived by this procedure indeed resemble canonical onset-related ERPs, such as the N1-P2 complex. This EEG projection was then utilized to determine the Cortico-Acoustic Correlation (CACor), a measure of synchronization between EEG signal and stimulus. We demonstrate that a significant CACor (i) can be detected in an individual listener's EEG of a single presentation of a full-length complex naturalistic music stimulus, and (ii) it co-varies with the stimuli’s average magnitudes of sharpness, spectral centroid, and rhythmic complexity. In particular, the subset of stimuli eliciting a strong CACor also produces strongly coordinated tension ratings obtained from an independent listener group in a separate behavioral experiment. Thus musical features that lead to a marked physiological reflection of tone onsets also contribute to perceived tension in music. PMID:26510120
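A minimal sketch of the backward-model idea described here: learn one spatial filter that maps multichannel EEG onto the note-onset envelope, then correlate the resulting single time course with the stimulus (a CACor-style measure). All data below are synthetic and the regularized least-squares filter is a generic stand-in, not the authors' exact estimator.

```python
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_samples = 61, 6000            # e.g. 60 s of 61-channel data at 100 Hz

# Hypothetical stimulus: a smoothed spike train of note onsets.
onsets = (rng.random(n_samples) < 0.02).astype(float)
envelope = np.convolve(onsets, np.hanning(21), mode="same")

# Synthetic "EEG": a fixed spatial pattern carries the envelope plus channel noise.
pattern = rng.standard_normal(n_channels)
eeg = np.outer(pattern, envelope) + 2.0 * rng.standard_normal((n_channels, n_samples))

# Backward model: regularized least squares for a spatial filter w with w @ eeg ~ envelope.
lam = 1e2
w = np.linalg.solve(eeg @ eeg.T + lam * np.eye(n_channels), eeg @ envelope)
projection = w @ eeg                        # single EEG time course reflecting onsets

# Cortico-acoustic correlation (CACor-style), here computed in-sample for brevity.
cacor = np.corrcoef(projection, envelope)[0, 1]
print(f"correlation between EEG projection and onset envelope: {cacor:.2f}")
```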
Creative Activities in Music – A Genome-Wide Linkage Analysis
Oikkonen, Jaana; Kuusi, Tuire; Peltonen, Petri; Raijas, Pirre; Ukkola-Vuoti, Liisa; Karma, Kai; Onkamo, Päivi; Järvelä, Irma
2016-01-01
Creative activities in music represent a complex cognitive function of the human brain, whose biological basis is largely unknown. In order to elucidate the biological background of creative activities in music we performed genome-wide linkage and linkage disequilibrium (LD) scans in musically experienced individuals characterised for self-reported composing, arranging and non-music related creativity. The participants consisted of 474 individuals from 79 families, and 103 sporadic individuals. We found promising evidence for linkage at 16p12.1-q12.1 for arranging (LOD 2.75, 120 cases), 4q22.1 for composing (LOD 2.15, 103 cases) and Xp11.23 for non-music related creativity (LOD 2.50, 259 cases). Surprisingly, statistically significant evidence for linkage was found for the opposite phenotype of creative activity in music (neither composing nor arranging; NCNA) at 18q21 (LOD 3.09, 149 cases), which contains cadherin genes like CDH7 and CDH19. The locus at 4q22.1 overlaps the previously identified region of musical aptitude, music perception and performance giving further support for this region as a candidate region for broad range of music-related traits. The other regions at 18q21 and 16p12.1-q12.1 are also adjacent to the previously identified loci with musical aptitude. Pathway analysis of the genes suggestively associated with composing suggested an overrepresentation of the cerebellar long-term depression pathway (LTD), which is a cellular model for synaptic plasticity. The LTD also includes cadherins and AMPA receptors, whose component GSG1L was linked to arranging. These results suggest that molecular pathways linked to memory and learning via LTD affect music-related creative behaviour. Musical creativity is a complex phenotype where a common background with musicality and intelligence has been proposed. Here, we implicate genetic regions affecting music-related creative behaviour, which also include genes with neuropsychiatric associations. We also propose a common genetic background for music-related creative behaviour and musical abilities at chromosome 4. PMID:26909693
2013-01-01
Background Previous studies have demonstrated functional and structural temporal lobe abnormalities located close to the auditory cortical regions in schizophrenia. The goal of this study was to determine whether functional abnormalities exist in the cortical processing of musical sound in schizophrenia. Methods Twelve schizophrenic patients and twelve age- and sex-matched healthy controls were recruited, and participants listened to a random sequence of two kinds of sonic entities, intervals (tritones and perfect fifths) and chords (atonal chords, diminished chords, and major triads), of varying degrees of complexity and consonance. The perception of musical sound was investigated by the auditory evoked potentials technique. Results Our results showed that schizophrenic patients exhibited significant reductions in the amplitudes of the N1 and P2 components elicited by musical stimuli, to which consonant sounds contributed more significantly than dissonant sounds. Schizophrenic patients could not perceive the dissimilarity between interval and chord stimuli based on the evoked potentials responses as compared with the healthy controls. Conclusion This study provided electrophysiological evidence of functional abnormalities in the cortical processing of sound complexity and music consonance in schizophrenia. The preliminary findings warrant further investigations for the underlying mechanisms. PMID:23721126
Cognitive science and the cultural nature of music.
Cross, Ian
2012-10-01
The vast majority of experimental studies of music to date have explored music in terms of the processes involved in the perception and cognition of complex sonic patterns that can elicit emotion. This paper argues that this conception of music is at odds both with recent Western musical scholarship and with ethnomusicological models, and that it presents a partial and culture-specific representation of what may be a generic human capacity. It argues that the cognitive sciences must actively engage with the problems of exploring music as manifested and conceived in the broad spectrum of world cultures, not only to elucidate the diversity of music in mind but also to identify potential commonalities that could illuminate the relationships between music and other domains of thought and behavior. Copyright © 2012 Cognitive Science Society, Inc.
ERIC Educational Resources Information Center
Garman, Barry R.; And Others
1991-01-01
Band, orchestra, and choir festival evaluations are a regular part of many secondary school music programs, and most such festivals engage adjudicators who rate each group's performance. Because music ensemble performance is complex and multi-dimensional, it does not lend itself readily to precise measurement; generally, musical performances are…
The Amusic Brain: In Tune, Out of Key, and Unaware
ERIC Educational Resources Information Center
Peretz, Isabelle; Brattico, Elvira; Jarvenpaa, Miika; Tervaniemi, Mari
2009-01-01
Like language, music engagement is universal, complex and present early in life. However, approximately 4% of the general population experiences a lifelong deficit in music perception that cannot be explained by hearing loss, brain damage, intellectual deficiencies or lack of exposure. This musical disorder, commonly known as tone-deafness and now…
Markov source model for printed music decoding
NASA Astrophysics Data System (ADS)
Kopec, Gary E.; Chou, Philip A.; Maltz, David A.
1995-03-01
This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.
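The decoding framework rests on a finite-state (Markov) source model. As a toy-scale, hedged illustration (a hand-made four-symbol source and made-up observation likelihoods, not the Sonata symbol set or the paper's image model), the sketch below runs Viterbi decoding of a short observation sequence against such a source.

```python
import numpy as np

# Toy Markov source over a tiny notation alphabet (all probabilities are hypothetical).
symbols = ["barline", "note", "dot", "rest"]
log_init = np.log([0.70, 0.20, 0.05, 0.05])
log_trans = np.log([
    [0.05, 0.80, 0.05, 0.10],   # barline -> ...
    [0.15, 0.55, 0.20, 0.10],   # note -> ... (dots tend to follow noteheads)
    [0.10, 0.70, 0.05, 0.15],   # dot -> ...
    [0.20, 0.60, 0.05, 0.15],   # rest -> ...
])
# Per-observation likelihoods for each symbol (stand-ins for template match scores).
log_obs = np.log([
    [0.80, 0.10, 0.05, 0.05],
    [0.10, 0.70, 0.10, 0.10],
    [0.05, 0.30, 0.60, 0.05],
    [0.10, 0.60, 0.10, 0.20],
])

def viterbi(log_init, log_trans, log_obs):
    """Most probable symbol sequence under a first-order Markov source."""
    T, S = log_obs.shape
    delta = np.empty((T, S))
    backpointer = np.zeros((T, S), dtype=int)
    delta[0] = log_init + log_obs[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # scores[i, j]: from state i to j
        backpointer[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_obs[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backpointer[t][path[-1]]))
    return path[::-1]

print([symbols[s] for s in viterbi(log_init, log_trans, log_obs)])
```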
The complex network of musical tastes
NASA Astrophysics Data System (ADS)
Buldú, Javier M.; Cano, P.; Koppenberger, M.; Almendral, Juan A.; Boccaletti, S.
2007-06-01
We present an empirical study of the evolution of a social network constructed under the influence of musical tastes. The network is obtained thanks to the selfless effort of a broad community of users who share playlists of their favourite songs with other users. When two songs co-occur in a playlist, a link is created between them, leading to a complex network where songs are the fundamental nodes. In this representation, songs in the same playlist could belong to different musical genres, but they are prone to be linked by a certain musical taste (e.g. if songs A and B co-occur in several playlists, a user who likes A will probably also like B). Indeed, playlist collections such as the one under study are the basic material that feeds some commercial music recommendation engines. Since playlists have an input date, we are able to evaluate the topology of this particular complex network from scratch, observing how its characteristic parameters evolve in time. We compare our results with those obtained from an artificial network defined by means of a null model. This comparison yields some insight into the evolution and structure of such a network, which could be used as ground data for the development of proper models. Finally, we gather information that can be useful for the development of music recommendation engines and give some hints about how top-hits appear.
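A minimal sketch of the network construction described here, using networkx and a few hypothetical playlists (the song identifiers are made up): songs become nodes, co-occurrence in a playlist creates or reinforces a weighted link, and standard topological parameters can then be tracked as playlists accumulate.

```python
import itertools
import networkx as nx

# Hypothetical playlists (identifiers invented for illustration).
playlists = [
    ["song_a", "song_b", "song_c"],
    ["song_b", "song_c", "song_d"],
    ["song_a", "song_e"],
    ["song_c", "song_d", "song_e", "song_f"],
]

G = nx.Graph()
for playlist in playlists:
    for u, v in itertools.combinations(playlist, 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1       # reinforce an existing taste link
        else:
            G.add_edge(u, v, weight=1)   # new link between co-occurring songs

# Characteristic parameters one would monitor as the network grows over time.
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("average clustering:", round(nx.average_clustering(G), 2))
print("average shortest path length:", round(nx.average_shortest_path_length(G), 2))
```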
Artistic forms and complexity.
Boon, J-P; Casti, J; Taylor, R P
2011-04-01
We discuss the inter-relationship between various concepts of complexity by introducing a complexity 'triangle' featuring objective complexity, subjective complexity and social complexity. Their connections are explored using visual and musical compositions of art. As examples, we quantify the complexity embedded within the paintings of Jackson Pollock and the musical works of Johann Sebastian Bach. We discuss the challenges inherent in comparisons of the spatial patterns created by Pollock and the sonic patterns created by Bach, including the differing roles that time plays in these investigations. Our results draw attention to some common intriguing characteristics suggesting 'universality' and conjecturing that the fractal nature of art might have an intrinsic value of more general significance.
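The objective-complexity leg of this triangle is often quantified with a box-counting fractal dimension. The sketch below estimates it for a synthetic binary image (a rasterized random walk standing in for a scanned drip painting); the image and box sizes are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def box_counting_dimension(image, box_sizes):
    """Estimate the fractal (box-counting) dimension of a binary image."""
    counts = []
    for s in box_sizes:
        h = image.shape[0] // s * s
        w = image.shape[1] // s * s
        blocks = image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.sum(blocks.any(axis=(1, 3))))     # occupied boxes of size s
    # dimension = slope of log(count) versus log(1 / box size)
    return np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)[0]

# Synthetic "drip" pattern: a long 2-D random walk rasterized onto a 512x512 grid.
rng = np.random.default_rng(5)
size = 512
img = np.zeros((size, size), dtype=bool)
pos = np.array([size // 2, size // 2])
for _ in range(200_000):
    pos = (pos + rng.integers(-1, 2, size=2)) % size
    img[pos[0], pos[1]] = True

print("box-counting dimension ~", round(box_counting_dimension(img, [2, 4, 8, 16, 32, 64]), 2))
```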
Scanlon, Dennis P; Wolf, Laura J; Alexander, Jeffrey A; Christianson, Jon B; Greene, Jessica; Jean-Jacques, Muriel; McHugh, Megan; Shi, Yunfeng; Leitzell, Brigitt; Vanderbrink, Jocelyn M
2016-08-01
The Aligning Forces for Quality (AF4Q) initiative was the Robert Wood Johnson Foundation's (RWJF's) signature effort to increase the overall quality of healthcare in targeted communities throughout the country. In addition to sponsoring this 16-site complex program, RWJF funded an independent scientific evaluation to support objective research on the initiative's effectiveness and contributions to basic knowledge in 5 core programmatic areas. The research design, data, and challenges faced during the summative evaluation phase of this near decade-long program are discussed. A descriptive overview of the summative research design and its development for a multi-site, community-based, healthcare quality improvement initiative is provided. The summative research design employed by the evaluation team is discussed. The evaluation team's summative research design involved a data-driven assessment of the effectiveness of the AF4Q program at large, assessments of the impact of AF4Q in the specific programmatic areas, and an assessment of how the AF4Q alliances were positioned for the future at the end of the program. The AF4Q initiative was the largest privately funded community-based healthcare improvement initiative in the United States to date and was implemented at a time of rapid change in national healthcare policy. The implementation of large-scale, multi-site initiatives is becoming an increasingly common approach for addressing problems in healthcare. The summative evaluation research design for the AF4Q initiative, and the lessons learned from its approach, may be valuable to others tasked with evaluating similarly complex community-based initiatives.
Towards a neural basis of music perception.
Koelsch, Stefan; Siebel, Walter A
2005-12-01
Music perception involves complex brain functions underlying acoustic analysis, auditory memory, auditory scene analysis, and processing of musical syntax and semantics. Moreover, music perception potentially affects emotion, influences the autonomic nervous system, the hormonal and immune systems, and activates (pre)motor representations. During the past few years, research activities on different aspects of music processing and their neural correlates have rapidly progressed. This article provides an overview of recent developments and a framework for the perceptual side of music processing. This framework lays out a model of the cognitive modules involved in music perception, and incorporates information about the time course of activity of some of these modules, as well as research findings about where in the brain these modules might be located.
Tsatsishvili, Valeri; Burunat, Iballa; Cong, Fengyu; Toiviainen, Petri; Alluri, Vinoo; Ristaniemi, Tapani
2018-06-01
There has been growing interest towards naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as the linear combination of the acoustic descriptors computationally extracted from the stimulus audio. Here, fMRI data from a naturalistic music listening experiment were employed. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined. The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with processing of complex rhythms, including auditory, motor, and frontal areas. Results were compared with the findings in the previously published study, which analyzed the same fMRI data but applied linear PCA for generating stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from the previous study. Exploiting nonlinear relationships among acoustic descriptors can lead to novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing. Copyright © 2018 Elsevier B.V. All rights reserved.
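A minimal scikit-learn sketch of the feature-generation step described here: standardize a matrix of frame-wise acoustic descriptors, then derive linear (PCA) and nonlinear (kernel PCA) stimulus features. The descriptor matrix below is random stand-in data and the kernel settings are assumptions; in the study the descriptors come from the stimulus audio and the resulting time courses are subsequently related to the fMRI data.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.preprocessing import StandardScaler

# Stand-in acoustic descriptors: 2000 analysis frames x 25 descriptors
# (e.g. spectral, timbral, and rhythmic features); values are random for illustration.
rng = np.random.default_rng(6)
descriptors = rng.standard_normal((2000, 25))

X = StandardScaler().fit_transform(descriptors)

linear_features = PCA(n_components=5).fit_transform(X)                  # linear PCA
nonlinear_features = KernelPCA(n_components=5, kernel="rbf",
                               gamma=0.05).fit_transform(X)             # kernel PCA

# Each column is one candidate high-level stimulus feature (a time course over frames).
print(linear_features.shape, nonlinear_features.shape)
```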
Genre Complexes in Popular Music
Silver, Daniel; Lee, Monica; Childress, C. Clayton
2016-01-01
Recent work in the sociology of music suggests a declining importance of genre categories. Yet other work in this research stream and in the sociology of classification argues for the continued prevalence of genres as a meaningful tool through which creators, critics and consumers focus their attention in the topology of available works. Building from work in the study of categories and categorization we examine how boundary strength and internal differentiation structure the genre pairings of some 3 million musicians and groups. Using a range of network-based and statistical techniques, we uncover three musical “complexes,” which are collectively constituted by 16 smaller genre communities. Our analysis shows that the musical universe is not monolithically organized but rather composed of multiple worlds that are differently structured—i.e., uncentered, single-centered, and multi-centered. PMID:27203852
Perception of Simultaneous Auditive Contents
NASA Astrophysics Data System (ADS)
Tschinkel, Christian
Based on a model of pluralistic music, we may approach an aesthetic concept of music, which employs dichotic listening situations. The concept of dichotic listening stems from neuropsychological test conditions in lateralization experiments on brain hemispheres, in which each ear is exposed to a different auditory content. In the framework of such sound experiments, the question which primarily arises concerns a new kind of hearing, which is also conceivable without earphones as a spatial composition, and which may superficially be linked to its degree of complexity. From a psychological perspective, the degree of complexity is correlated with the degree of attention given, with the listener's musical or listening experience and the level of his appreciation. Therefore, we may possibly also expect a measurable increase in physical activity. Furthermore, a dialectic interpretation of such "dualistic" music presents itself.
Music and Language Syntax Interact in Broca’s Area: An fMRI Study
Kunert, Richard; Willems, Roel M.; Casasanto, Daniel; Patel, Aniruddh D.; Hagoort, Peter
2015-01-01
Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area. PMID:26536026
Music and language perception: expectations, structural integration, and cognitive sequencing.
Tillmann, Barbara
2012-10-01
Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central in music perception (i.e., which sounds are most likely to come next and at what moment should they occur?). This paper focuses on similarities in music and language cognition research, showing that music cognition research provides insight into the understanding of not only music processing but also language processing and the processing of other structured stimuli. The hypothesis of shared resources between music and language processing and of domain-general dynamic attention has motivated the development of research to test music as a means to stimulate sensory, cognitive, and motor processes. Copyright © 2012 Cognitive Science Society, Inc.
Emotion regulation through listening to music in everyday situations.
Thoma, Myriam V; Ryf, Stefan; Mohiyeddini, Changiz; Ehlert, Ulrike; Nater, Urs M
2012-01-01
Music is a stimulus capable of triggering an array of basic and complex emotions. We investigated whether and how individuals employ music to induce specific emotional states in everyday situations for the purpose of emotion regulation. Furthermore, we wanted to examine whether specific emotion-regulation styles influence music selection in specific situations. Participants indicated how likely it would be that they would want to listen to various pieces of music (which are known to elicit specific emotions) in various emotional situations. Data analyses by means of non-metric multidimensional scaling revealed a clear preference for pieces of music that were emotionally congruent with an emotional situation. In addition, we found that specific emotion-regulation styles might influence the selection of pieces of music characterised by specific emotions. Our findings demonstrate emotion-congruent music selection and highlight the important role of specific emotion-regulation styles in the selection of music in everyday situations.
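Non-metric multidimensional scaling of such ratings can be sketched with scikit-learn; the dissimilarity matrix below is random stand-in data rather than the study's ratings, so the sketch only shows the mechanics of the analysis.

```python
import numpy as np
from sklearn.manifold import MDS

# Stand-in dissimilarities among 12 items (pieces of music / emotional situations),
# e.g. derived from how differently they were rated; values are illustrative only.
rng = np.random.default_rng(7)
a = rng.random((12, 12))
dissim = (a + a.T) / 2.0
np.fill_diagonal(dissim, 0.0)

nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(dissim)

# Items placed close together were treated similarly; emotion-congruent selection would
# show up as music and situations with matching emotions clustering in this space.
print(coords.shape, "stress:", round(nmds.stress_, 3))
```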
Williams, Nancy; Dooyema, Carrie A; Foltz, Jennifer L; Belay, Brook; Blanck, Heidi M
2015-02-01
Comprehensive multisector, multilevel approaches are needed to address childhood obesity. This article introduces the structure of a multidisciplinary team approach used to support and guide the multisite, multisector interventions implemented as part of the Childhood Obesity Research Demonstration (CORD) project. This article will describe the function, roles, and lessons learned from the CDC-CORD approach to project management. The CORD project works across multisectors and multilevels in three demonstration communities. Working with principal investigators and their research teams who are engaging multiple stakeholder groups, including community organizations, schools and child care centers, health departments, and healthcare providers, can be a complex endeavor. To best support the community-based research project, scientific and programmatic expertise in a wide range of areas was required. The team was configured based on the skill sets needed to interact with the various levels of staff working with the project. By thoughtful development of the team and processes, an efficient system for supporting the multisite, multisector intervention project sites was developed. The team approach will be formally evaluated at the end of the project period.
Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation.
Ahmed, Duha G; Paquette, Sebastian; Zeitouni, Anthony; Lehmann, Alexandre
2018-05-01
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. Electrophysiological correlations of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI-simulator. We recorded 16 normal hearing participants' electroencephalographic activity while listening to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition. This points to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found a delay in latency with the musical bursts compared to the vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implants' limitations, the auditory cortex can distinguish between vocal and musical stimuli. In addition, it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might lead to characterizing emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
Gfeller, Kate; Christ, Aaron; Knutson, John; Witt, Shelley; Mehr, Maureen
2003-01-01
The purposes of this study were (a) to develop a test of complex song appraisal that would be suitable for use with adults who use a cochlear implant (assistive hearing device) and (b) to compare the appraisal ratings (liking) of complex songs by adults who use cochlear implants (n = 66) with a comparison group of adults with normal hearing (n = 36). The article describes the development of a computerized test for appraisal, with emphasis on its theoretical basis and the process for item selection of naturalistic stimuli. The appraisal test was administered to the 2 groups to determine the effects of prior song familiarity and subjective complexity on complex song appraisal. Comparison of the 2 groups indicates that the implant users rate 2 of 3 musical genres (country western, pop) as significantly more complex than do normal hearing adults, and give significantly less positive ratings to classical music than do normal hearing adults. Appraisal responses of implant recipients were examined in relation to hearing history, age, performance on speech perception and cognitive tests, and musical background.
The Fourth Sociology and Music Education: Towards a Sociology of Integration
ERIC Educational Resources Information Center
Wright, Ruth
2014-01-01
By identifying three main sociologies that characterise broad movements in the field since its inception, this paper provides a background to considerations of music education from the perspective of sociology. A fourth sociology is then proposed that may be useful to interrogate the complexities of the field of 21st century music education. This…
Teaching Movable "Du": Guidelines for Developing Enrhythmic Reading Skills
ERIC Educational Resources Information Center
Dalby, Bruce
2015-01-01
Reading music notation with fluency is a complex skill requiring well-founded instruction by the music teacher and diligent practice on the part of the learner. The task is complicated by the fact that there are multiple ways to notate a given rhythm. Beginning music students typically have their first encounter with enrhythmic notation when they…
Navigating the Maze of Music Rights
ERIC Educational Resources Information Center
DuBoff, Leonard D.
2007-01-01
Music copyright is one of the most complex areas of intellectual property law. To begin with, there is a copyright in notated music and a copyright in accompanying lyrics. When the piece is performed, there is a copyright in the performance that is separate and apart from the copyright in the underlying work. If a sound recording is used in…
Prewhitening of Colored Noise Fields for Detection of Threshold Sources
1993-11-07
...determines the noise covariance matrix; prewhitening techniques then allow detection of threshold sources using the multiple signal classification (MUSIC) algorithm. Report subject terms include AR model, colored noise field, mixed spectra model, MUSIC, noise field, prewhitening, SNR, and standardized test; worked examples in the report cover complex AR coefficients and MUSIC in a colored background noise.
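As background for the MUSIC algorithm named in this report, here is a minimal sketch of the classic MUSIC pseudospectrum for a uniform linear array. The array geometry, half-wavelength spacing, and snapshot format are assumptions; the code is illustrative rather than the report's implementation (which additionally prewhitens the colored noise field).

```python
# Classic MUSIC pseudospectrum for a uniform linear array (illustrative sketch).
import numpy as np

def music_spectrum(X, n_sources, n_grid=361):
    """X: (n_sensors, n_snapshots) complex snapshot matrix."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_sources]         # noise-subspace eigenvectors
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    spectrum = np.empty(n_grid)
    for i, theta in enumerate(angles):
        # steering vector, assuming half-wavelength sensor spacing
        a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))
        spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return np.degrees(angles), spectrum

# Example call on random (sourceless) snapshots, just to show the interface.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))
angles_deg, pseudospectrum = music_spectrum(X, n_sources=2)
```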
Pitch Error Analysis of Young Piano Students' Music Reading Performances
ERIC Educational Resources Information Center
Rut Gudmundsdottir, Helga
2010-01-01
This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…
From Research to the General Music Classroom
ERIC Educational Resources Information Center
Madsen, Clifford K.
2011-01-01
One challenge for music educators is to find techniques to help students "listen across time" to the examples they are assigned to study and to stay focused on a piece as they listen. Measurement tools to assess music listening have a long history, ranging from very simple to very complex, and very dated to very recent. This article traces the…
Music: a unique window into the world of autism.
Molnar-Szakacs, Istvan; Heaton, Pamela
2012-04-01
Understanding emotions is fundamental to our ability to navigate the complex world of human social interaction. Individuals with autism spectrum disorders (ASD) experience difficulties with the communication and understanding of emotions within the social domain. Their ability to interpret other people's nonverbal, facial, and bodily expressions of emotion is strongly curtailed. However, there is evidence to suggest that many individuals with ASD show a strong and early preference for music and are able to understand simple and complex musical emotions in childhood and adulthood. The dissociation between emotion recognition abilities in musical and social domains in individuals with ASD provides us with the opportunity to consider the nature of emotion processing difficulties characterizing this disorder. There has recently been a surge of interest in musical abilities in individuals with ASD, and this has motivated new behavioral and neuroimaging studies. Here, we review this new work. We conclude by providing some questions for future directions. © 2012 New York Academy of Sciences.
Einarsson, Anna; Ziemke, Tom
2017-01-01
The question motivating the work presented here, starting from a view of music as embodied and situated activity, is how we can account for the complexity of interactive music performance situations. These are situations in which human performers interact with responsive technologies, such as sensor-driven technology or sound synthesis affected by analysis of the performed sound signal. This requires investigating the underlying mechanisms in detail, but also providing a more holistic approach that does not lose track of the complex whole constituted by the interactions and relationships of composers, performers, audience, technologies, etc. The concept of affordances has frequently been invoked in musical research, which has seen a “bodily turn” in recent years, similar to the development of the embodied cognition approach in the cognitive sciences. We therefore begin by broadly delineating its usage in the cognitive sciences in general, and in music research in particular. We argue that what is still missing in the discourse on musical affordances is an encompassing theoretical framework incorporating the sociocultural dimensions that are fundamental to the situatedness and embodiment of interactive music performance and composition. We further argue that the cultural affordances framework, proposed by Rietveld and Kiverstein (2014) and recently articulated further by Ramstead et al. (2016) in this journal, although not previously applied to music, constitutes a promising starting point. It captures and elucidates this complex web of relationships in terms of shared landscapes and individual fields of affordances. We illustrate this with examples drawn primarily from the first author's artistic work as composer and performer of interactive music. This sheds new light on musical composition as a process of construction, and embodied mental simulation, of situations, guiding the performers' and audience's attention in shifting fields of affordances. More generally, we believe that the theoretical perspectives and concrete examples discussed in this paper help to elucidate how situations, and with them affordances, are dynamically constructed through the interactions of various mechanisms as people engage in embodied and situated activity. PMID:29033880
Reporting Guidelines for Music-based Interventions
Robb, Sheri L.; Burns, Debra S.; Carpenter, Janet S.
2013-01-01
Music-based interventions are used to address a variety of problems experienced by individuals across the developmental lifespan (infants to elderly adults). In order to improve the transparency and specificity of reporting music-based interventions, a set of specific reporting guidelines is recommended. Recommendations pertain to reporting seven different components of music-based interventions including intervention theory, intervention content, intervention delivery schedule, interventionist, treatment fidelity, setting, and unit of delivery. Recommendations are intended to support CONSORT and TREND statements for transparent reporting of interventions while taking into account the variety, complexity, and uniqueness of music-based interventions. PMID:23646227
Impaired perception of harmonic complexity in congenital amusia: a case study.
Reed, Catherine L; Cahn, Steven J; Cory, Christopher; Szaflarski, Jerzy P
2011-07-01
This study investigates whether congenital amusia (an inability to perceive music from birth) also impairs the perception of musical qualities that do not rely on fine-grained pitch discrimination. We established that G.G. (a 64-year-old male with age-typical hearing) met the criteria of congenital amusia and demonstrated music-specific deficits on a battery assessing language processing, intonation, prosody, fine-grained pitch processing, pitch discrimination, identification of discrepant tones and of pitch direction for tones in a series, pitch discrimination within scale segments, predictability of tone sequences, recognition versus knowing memory for melodies, and short-term memory for melodies. Next, we conducted tests of tonal fusion, harmonic complexity, and affect perception: recognizing timbre, assessing consonance and dissonance, and recognizing musical affect from harmony. G.G. displayed relatively unimpaired perception and production of environmental sounds, prosody, and emotion conveyed by speech, compared with impaired fine-grained pitch perception, tonal sequence discrimination, and melody recognition. Importantly, G.G. could not perform tests of tonal fusion that do not rely on pitch discrimination: he could not distinguish concurrent notes, timbre, consonance/dissonance, simultaneous notes, or musical affect. Results indicate at least three distinct problems (one with pitch discrimination, one with harmonic simultaneity, and one with musical affect), each with distinct consequences for music perception.
Affordances and the musically extended mind.
Krueger, Joel
2014-01-06
I defend a model of the musically extended mind. I consider how acts of "musicking" grant access to novel emotional experiences otherwise inaccessible. First, I discuss the idea of "musical affordances" and specify both what musical affordances are and how they invite different forms of entrainment. Next, I argue that musical affordances - via soliciting different forms of entrainment - enhance the functionality of various endogenous, emotion-granting regulative processes, drawing novel experiences out of us with an expanded complexity and phenomenal character. I argue that music therefore ought to be thought of as part of the vehicle needed to realize these emotional experiences. I appeal to different sources of empirical work to develop this idea.
Language, music, syntax and the brain.
Patel, Aniruddh D
2003-07-01
The comparative study of music and language is drawing an increasing amount of research interest. Like language, music is a human universal involving perceptually discrete elements organized into hierarchically structured sequences. Music and language can thus serve as foils for each other in the study of brain mechanisms underlying complex sound processing, and comparative research can provide novel insights into the functional and neural architecture of both domains. This review focuses on syntax, using recent neuroimaging data and cognitive theory to propose a specific point of convergence between syntactic processing in language and music. This leads to testable predictions, including the prediction that syntactic comprehension problems in Broca's aphasia are not selective to language but influence music perception as well.
Harmonic Structure Predicts the Enjoyment of Uplifting Trance Music.
Agres, Kat; Herremans, Dorien; Bigo, Louis; Conklin, Darrell
2016-01-01
An empirical investigation of how local harmonic structures (e.g., chord progressions) contribute to the experience and enjoyment of uplifting trance (UT) music is presented. The connection between rhythmic and percussive elements and resulting trance-like states has been highlighted by musicologists, but no research, to our knowledge, has explored whether repeated harmonic elements influence affective responses in listeners of trance music. Two alternative hypotheses are discussed, the first highlighting the direct relationship between repetition/complexity and enjoyment, and the second based on the theoretical inverted-U relationship described by the Wundt curve. We investigate the connection between harmonic structure and subjective enjoyment through interdisciplinary behavioral and computational methods: First we discuss an experiment in which listeners provided enjoyment ratings for computer-generated UT anthems with varying levels of harmonic repetition and complexity. The anthems were generated using a statistical model trained on a corpus of 100 uplifting trance anthems created for this purpose, and harmonic structure was constrained by imposing particular repetition structures (semiotic patterns defining the order of chords in the sequence) on a professional UT music production template. Second, the relationship between harmonic structure and enjoyment is further explored using two computational approaches, one based on average Information Content, and another that measures average tonal tension between chords. The results of the listening experiment indicate that harmonic repetition does in fact contribute to the enjoyment of uplifting trance music. More compelling evidence was found for the second hypothesis discussed above, however some maximally repetitive structures were also preferred. Both computational models provide evidence for a Wundt-type relationship between complexity and enjoyment. By systematically manipulating the structure of chord progressions, we have discovered specific harmonic contexts in which repetitive or complex structure contribute to the enjoyment of uplifting trance music.
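One of the two computational approaches mentioned above is average Information Content. As a rough, hedged illustration (not the authors' IDyOM-based model), the sketch below estimates the average information content of a chord sequence under a first-order Markov model with add-one smoothing; the toy chord corpus is invented.

```python
# Average information content of a chord sequence under a bigram model (toy sketch).
import math
from collections import Counter, defaultdict

def train_bigram(corpus):
    counts, totals = defaultdict(Counter), Counter()
    for seq in corpus:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
            totals[prev] += 1
    return counts, totals

def average_ic(sequence, counts, totals, vocab_size):
    """Mean of -log2 P(chord | previous chord), with add-one smoothing."""
    ics = []
    for prev, nxt in zip(sequence, sequence[1:]):
        p = (counts[prev][nxt] + 1) / (totals[prev] + vocab_size)
        ics.append(-math.log2(p))
    return sum(ics) / len(ics)

corpus = [["i", "VI", "III", "VII"], ["i", "VII", "VI", "VII"], ["i", "v", "VI", "III"]]
vocab = {chord for seq in corpus for chord in seq}
counts, totals = train_bigram(corpus)
print(average_ic(["i", "VI", "VII", "i"], counts, totals, len(vocab)))
```

Higher averages mark progressions the model finds more surprising, which is the kind of quantity the study relates to enjoyment.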
The 'ripple effect': Towards researching improvisational music therapy in dementia care homes.
Pavlicevic, Mercédès; Tsiris, Giorgos; Wood, Stuart; Powell, Harriet; Graham, Janet; Sanderson, Richard; Millman, Rachel; Gibson, Jane
2015-09-01
Increased interest in, and demand for, music therapy provision for persons with dementia prompted this study's exploration of music therapists' strategies for creating musical communities in dementia care settings, considering the needs and resources of people affected by dementia. Focus group discussions and detailed iterative study of improvisational music therapy work by six experienced practitioners clarify the contextual immediacy and socio-musical complexities of music therapy in dementia care homes. Music therapy's 'ripple effect', with resonances from micro (person-to-person musicking), to meso (musicking beyond 'session time') and macro level (within the care home and beyond), implies that all who are part of the dementia care ecology need opportunities for flourishing, shared participation, and for expanded self-identities; beyond 'staff', 'residents', or 'being in distress'. On such basis, managers and funders might consider an extended brief for music therapists' roles, to include generating and maintaining musical wellbeing throughout residential care settings. © The Author(s) 2013.
Robb, Sheri L; Burns, Debra S; Stegenga, Kristin A; Haut, Paul R; Monahan, Patrick O; Meza, Jane; Stump, Timothy E; Cherven, Brooke O; Docherty, Sharron L; Hendricks-Ferguson, Verna L; Kintner, Eileen K; Haight, Ann E; Wall, Donna A; Haase, Joan E
2014-03-15
To reduce the risk of adjustment problems associated with hematopoietic stem cell transplant (HSCT) for adolescents/young adults (AYAs), we examined efficacy of a therapeutic music video (TMV) intervention delivered during the acute phase of HSCT to: 1) increase protective factors of spiritual perspective, social integration, family environment, courageous coping, and hope-derived meaning; 2) decrease risk factors of illness-related distress and defensive coping; and 3) increase outcomes of self-transcendence and resilience. This was a multisite randomized, controlled trial (COG-ANUR0631) conducted at 8 Children's Oncology Group sites involving 113 AYAs aged 11-24 years undergoing myeloablative HSCT. Participants, randomized to the TMV or low-dose control (audiobooks) group, completed 6 sessions over 3 weeks with a board-certified music therapist. Variables were based on Haase's Resilience in Illness Model (RIM). Participants completed measures related to latent variables of illness-related distress, social integration, spiritual perspective, family environment, coping, hope-derived meaning, and resilience at baseline (T1), postintervention (T2), and 100 days posttransplant (T3). At T2, the TMV group reported significantly better courageous coping (Effect Size [ES], 0.505; P = .030). At T3, the TMV group reported significantly better social integration (ES, 0.543; P = .028) and family environment (ES, 0.663; P = .008), as well as moderate nonsignificant effect sizes for spiritual perspective (ES, 0.450; P = .071) and self-transcendence (ES, 0.424; P = .088). The TMV intervention improves positive health outcomes of courageous coping, social integration, and family environment during a high-risk cancer treatment. We recommend the TMV be examined in a broader population of AYAs with high-risk cancers. © 2013 American Cancer Society.
Schmithorst, Vincent J
2005-04-01
Music perception is a quite complex cognitive task, involving the perception and integration of various elements including melody, harmony, pitch, rhythm, and timbre. A preliminary functional MRI investigation of music perception was performed, using a simplified passive listening task. Group independent component analysis (ICA) was used to separate out various components involved in music processing, as the hemodynamic responses are not known a priori. Various components consistent with auditory processing, expressive language, syntactic processing, and visual association were found. The results are discussed in light of various hypotheses regarding modularity of music processing and its overlap with language processing. The results suggest that, while some networks overlap with ones used for language processing, music processing may involve its own domain-specific processing subsystems.
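Since the study above hinges on group independent component analysis, the following is a minimal sketch of temporal-concatenation group ICA using scikit-learn's FastICA on simulated fMRI-like matrices. The dimensions are invented and the real preprocessing (motion correction, smoothing, per-subject PCA) is omitted, so this is not the study's pipeline.

```python
# Temporal-concatenation "group ICA" sketch on simulated data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_voxels, n_components = 5, 120, 2000, 10

# Stack each subject's (time x voxel) matrix along the time axis.
group_data = np.vstack([rng.standard_normal((n_timepoints, n_voxels))
                        for _ in range(n_subjects)])

ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
timecourses = ica.fit_transform(group_data)   # (subjects * time, components)
spatial_maps = ica.components_                # (components, voxels)
print(timecourses.shape, spatial_maps.shape)
```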
Familiarity Affects Entrainment of EEG in Music Listening.
Kumagai, Yuiko; Arvaneh, Mahnaz; Tanaka, Toshihisa
2017-01-01
Music perception involves complex brain functions. The relationship between music and the brain, such as cortical entrainment to periodic tunes, periodic beats, and music, has been well investigated. It has also been reported that the cerebral cortex responds more strongly to the periodic rhythm of unfamiliar music than to that of familiar music. However, previous work mainly used simple, artificial auditory stimuli such as pure tones or beeps, so it is still unclear how the brain response is influenced by the familiarity of music. To address this issue, we analyzed the electroencephalogram (EEG) to investigate the relationship between cortical response and familiarity of music, using melodies produced by piano sounds as simple natural stimuli. The cross-correlation function averaged across trials, channels, and participants showed two pronounced peaks at time lags around 70 and 140 ms. At these two peaks, the magnitude of the cross-correlation was significantly larger when listening to unfamiliar and scrambled music than when listening to familiar music. Our findings suggest that the response to unfamiliar music is stronger than that to familiar music. One potential application would be the discrimination of listeners' familiarity with music, providing an important tool for the assessment of brain activity.
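The core measure above is a cross-correlation between the musical stimulus and the EEG. A minimal sketch of that idea, on placeholder signals and for a single channel, is shown below; the sampling rate and envelope extraction are assumptions rather than the authors' exact procedure.

```python
# Cross-correlate a stimulus amplitude envelope with one EEG channel (sketch).
import numpy as np
from scipy.signal import hilbert, correlate, correlation_lags

fs = 250                                     # assumed common sampling rate (Hz)
rng = np.random.default_rng(1)
audio = rng.standard_normal(fs * 30)         # placeholder 30 s stimulus waveform
eeg = rng.standard_normal(fs * 30)           # placeholder single-channel EEG

envelope = np.abs(hilbert(audio))            # amplitude envelope of the stimulus
envelope = (envelope - envelope.mean()) / envelope.std()
eeg = (eeg - eeg.mean()) / eeg.std()

xcorr = correlate(eeg, envelope, mode="full") / len(eeg)
lags_ms = correlation_lags(len(eeg), len(envelope), mode="full") / fs * 1000

# Inspect positive lags (EEG following the stimulus), e.g. around 70 and 140 ms.
window = (lags_ms >= 0) & (lags_ms <= 300)
peak_lag = lags_ms[window][np.argmax(np.abs(xcorr[window]))]
print(f"strongest correlation within 0-300 ms at a lag of {peak_lag:.0f} ms")
```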
The Long-Term Effects of Childhood Music Instruction on Intelligence and General Cognitive Abilities
ERIC Educational Resources Information Center
Costa-Giomi, Eugenia
2015-01-01
This article reviews research on the effects of music instruction on general cognitive abilities. The review of more than 75 reports shows (1) the consistency in results pertaining to the short-term effects of music instruction on cognitive abilities and the lack of clear evidence on the long-term effects on intelligence; (2) the complex nature of…
NASA Astrophysics Data System (ADS)
Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun
2012-04-01
In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental step in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe within a shot. Experimental results show that the framework is effective and performs well.
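To make the shot-segmentation step concrete, here is a deliberately simplified sketch of histogram-based shot boundary detection on chromaticity histograms; it is not the paper's ICA-based method, and the threshold and bin count are arbitrary assumptions.

```python
# Simplified chromaticity-histogram shot boundary detection (not the paper's method).
import numpy as np

def chromaticity_hist(frame, bins=16):
    """frame: (H, W, 3) RGB array. Returns a normalized r-g chromaticity histogram."""
    rgb = frame.astype(np.float64) + 1e-6
    s = rgb.sum(axis=2)
    r, g = rgb[..., 0] / s, rgb[..., 1] / s          # intensity-normalized chromaticities
    hist, _, _ = np.histogram2d(r.ravel(), g.ravel(), bins=bins, range=[[0, 1], [0, 1]])
    return hist.ravel() / hist.sum()

def detect_shot_boundaries(frames, threshold=0.4):
    """frames: iterable of (H, W, 3) arrays. Returns frame indices starting a new shot."""
    boundaries, prev = [], None
    for i, frame in enumerate(frames):
        h = chromaticity_hist(frame)
        if prev is not None and 0.5 * np.abs(h - prev).sum() > threshold:
            boundaries.append(i)
        prev = h
    return boundaries
```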
Using music to study the evolution of cognitive mechanisms relevant to language.
Patel, Aniruddh D
2017-02-01
This article argues that music can be used in cross-species research to study the evolution of cognitive mechanisms relevant to spoken language. This is because music and language share certain cognitive processing mechanisms and because music offers specific advantages for cross-species research. Music has relatively simple building blocks (tones without semantic properties), yet these building blocks are combined into rich hierarchical structures that engage complex cognitive processing. I illustrate this point with regard to the processing of musical harmonic structure. Because the processing of musical harmonic structure has been shown to interact with linguistic syntactic processing in humans, it is of interest to know if other species can acquire implicit knowledge of harmonic structure through extended exposure to music during development (vs. through explicit training). I suggest that domestic dogs would be a good species to study in addressing this question.
The neural processing of hierarchical structure in music and speech at different timescales
Farbood, Morwaread M.; Heeger, David J.; Marcus, Gary; Hasson, Uri; Lerner, Yulia
2015-01-01
Music, like speech, is a complex auditory signal that contains structures at multiple timescales, and as such is a potentially powerful entry point into the question of how the brain integrates complex streams of information. Using an experimental design modeled after previous studies that used scrambled versions of a spoken story (Lerner et al., 2011) and a silent movie (Hasson et al., 2008), we investigate whether listeners perceive hierarchical structure in music beyond short (~6 s) time windows and whether there is cortical overlap between music and language processing at multiple timescales. Experienced pianists were presented with an extended musical excerpt scrambled at multiple timescales—by measure, phrase, and section—while measuring brain activity with functional magnetic resonance imaging (fMRI). The reliability of evoked activity, as quantified by inter-subject correlation of the fMRI responses, was measured. We found that response reliability depended systematically on musical structure coherence, revealing a topographically organized hierarchy of processing timescales. Early auditory areas (at the bottom of the hierarchy) responded reliably in all conditions. For brain areas at the top of the hierarchy, the original (unscrambled) excerpt evoked more reliable responses than any of the scrambled excerpts, indicating that these brain areas process long-timescale musical structures, on the order of minutes. The topography of processing timescales was analogous with that reported previously for speech, but the timescale gradients for music and speech overlapped with one another only partially, suggesting that temporally analogous structures—words/measures, sentences/musical phrases, paragraph/sections—are processed separately. PMID:26029037
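The reliability measure described above is inter-subject correlation (ISC) of fMRI responses. A minimal leave-one-out ISC sketch is given below; the array shapes are assumptions and the study's preprocessing and statistics are omitted.

```python
# Leave-one-out inter-subject correlation (ISC) per voxel (sketch).
import numpy as np

def isc_leave_one_out(data):
    """data: (n_subjects, n_timepoints, n_voxels). Returns (n_subjects, n_voxels)
    correlations between each subject and the mean of the remaining subjects."""
    n_subj = data.shape[0]
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    out = np.empty((n_subj, data.shape[2]))
    for s in range(n_subj):
        others = z[np.arange(n_subj) != s].mean(axis=0)          # (time, voxels)
        others = (others - others.mean(axis=0)) / others.std(axis=0)
        out[s] = (z[s] * others).mean(axis=0)                    # Pearson r per voxel
    return out

demo = np.random.default_rng(0).standard_normal((10, 300, 50))
print(isc_leave_one_out(demo).shape)
```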
Complex network structure of musical compositions: Algorithmic generation of appealing music
NASA Astrophysics Data System (ADS)
Liu, Xiao Fan; Tse, Chi K.; Small, Michael
2010-01-01
In this paper we construct networks for music and attempt to compose music artificially. Networks are constructed with nodes and edges corresponding to musical notes and their co-occurring connections. We analyze classical music from Bach, Mozart, Chopin, as well as other types of music such as Chinese pop music. We observe remarkably similar properties in all networks constructed from the selected compositions. We conjecture that preserving the universal network properties is a necessary step in artificial composition of music. Power-law exponents of node degree, node strength and/or edge weight distributions, mean degrees, clustering coefficients, mean geodesic distances, etc. are reported. With the network constructed, music can be composed artificially using a controlled random walk algorithm, which begins with a randomly chosen note and selects the subsequent notes according to a simple set of rules that compares the weights of the edges, weights of the nodes, and/or the degrees of nodes. By generating a large number of compositions, we find that this algorithm generates music which has the necessary qualities to be subjectively judged as appealing.
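As a rough illustration of the compose-by-random-walk idea described above (not the authors' exact weighting rules), the sketch below builds a weighted note-transition network from toy melodies and walks it, preferring heavier edges; the melodies and parameters are invented.

```python
# Weighted note network + controlled random walk composition (toy sketch).
import random
from collections import defaultdict

def build_network(melodies):
    weights = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            weights[a][b] += 1                     # directed edge weight = co-occurrences
    return weights

def random_walk_compose(weights, start, length, seed=0):
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        nbrs = weights[notes[-1]]
        if not nbrs:                               # dead end: restart at a random node
            notes.append(rng.choice(list(weights)))
            continue
        notes.append(rng.choices(list(nbrs), weights=list(nbrs.values()))[0])
    return notes

melodies = [["C4", "E4", "G4", "E4", "C4"], ["C4", "D4", "E4", "G4", "C5"]]
print(random_walk_compose(build_network(melodies), "C4", length=12))
```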
Effect of Voice-Part Training and Music Complexity on Focus of Attention to Melody or Harmony
ERIC Educational Resources Information Center
Williams, Lindsey R.
2009-01-01
The purpose of this study was to investigate the possible effects of choral voice-part training/experience and music complexity on focus of attention to melody or harmony. Participants (N = 150) were members of auditioned university choral ensembles divided by voice-part (sopranos, n = 44; altos, n = 33; tenors, n = 35; basses, n = 38). The music…
The Interplay of Preference, Familiarity and Psychophysical Properties in Defining Relaxation Music.
Tan, Xueli; Yowler, Charles J; Super, Dennis M; Fratianne, Richard B
2012-01-01
The stress response has been well documented in past music therapy literature. However, hypometabolism, or the relaxation response, has received much less attention. Music therapists have long utilized various music-assisted relaxation techniques with both live and recorded music to elicit such a response. The ongoing proliferation of relaxation music through commercial media, and the dire lack of evidence to support its claims, warrant attention from healthcare professionals and music therapists. The purpose of these three studies was to investigate the correlational relationships between 12 psychophysical properties of music, preference, familiarity, and degree of perceived relaxation in music. Fourteen music therapists recommended and analyzed 30 selections of relaxation music. A group of 80 healthy adults then rated their familiarity with, preference for, and degree of perceived relaxation in the music. The analysis provided a detailed description of the intrinsic properties of music that listeners perceived as relaxing. These properties included tempo, mode, harmonic, rhythmic, instrumental, and melodic complexities, timbre, vocalization/lyrics, pitch range, dynamic variations, and contour. In addition, music preference was highly correlated with listeners' perception of relaxation in the music for both music therapists and healthy adults, and the correlation between familiarity and degree of relaxation reached significance in the healthy adult group. Results from this study provide an in-depth operational definition of the intrinsic parameters of relaxation music and also highlight the importance of preference and familiarity in eliciting the relaxation response.
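As an illustration of the kind of correlational analysis reported above (not the authors' actual data or statistics), the sketch below computes Pearson correlations between preference/familiarity ratings and perceived relaxation using invented placeholder ratings.

```python
# Pearson correlations between listener ratings and perceived relaxation (sketch).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_listeners = 80
preference = rng.uniform(1, 7, n_listeners)        # assumed 1-7 rating scale
familiarity = rng.uniform(1, 7, n_listeners)
relaxation = 0.6 * preference + 0.2 * familiarity + rng.normal(0, 1, n_listeners)

for name, ratings in [("preference", preference), ("familiarity", familiarity)]:
    r, p = pearsonr(ratings, relaxation)
    print(f"{name} vs. relaxation: r = {r:.2f}, p = {p:.3f}")
```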
Music acquisition: effects of enculturation and formal training on development.
Hannon, Erin E; Trainor, Laurel J
2007-11-01
Musical structure is complex, consisting of a small set of elements that combine to form hierarchical levels of pitch and temporal structure according to grammatical rules. As with language, different systems use different elements and rules for combination. Drawing on recent findings, we propose that music acquisition begins with basic features, such as peripheral frequency-coding mechanisms and multisensory timing connections, and proceeds through enculturation, whereby everyday exposure to a particular music system creates, in a systematic order of acquisition, culture-specific brain structures and representations. Finally, we propose that formal musical training invokes domain-specific processes that affect salience of musical input and the amount of cortical tissue devoted to its processing, as well as domain-general processes of attention and executive functioning.
The multisensory brain and its ability to learn music.
Zimmerman, Emily; Lahav, Amir
2012-04-01
Playing a musical instrument requires a complex skill set that depends on the brain's ability to quickly integrate information from multiple senses. It has been well documented that intensive musical training alters brain structure and function within and across multisensory brain regions, supporting the experience-dependent plasticity model. Here, we argue that this experience-dependent plasticity occurs because of the multisensory nature of the brain and may be an important contributing factor to musical learning. This review highlights key multisensory regions within the brain and discusses their role in the context of music learning and rehabilitation. © 2012 New York Academy of Sciences.
Evaluation of a Complex, Multisite, Multilevel Grants Initiative
ERIC Educational Resources Information Center
Rollison, Julia; Hill, Gary; Yu, Ping; Murray, Stephen; Mannix, Danyelle; Mathews-Younes, Anne; Wells, Michael E.
2012-01-01
The Safe Schools/Healthy Students (SS/HS) national evaluation seeks to assess both the implementation process and the results of the SS/HS initiative, exploring factors that have contributed to or detracted from grantee success. Each site is required to forge partnerships with representatives from education, mental health, juvenile justice, and…
Trainor, Laurel J.
2015-01-01
Whether music was an evolutionary adaptation that conferred survival advantages or a cultural creation has generated much debate. Consistent with an evolutionary hypothesis, music is unique to humans, emerges early in development and is universal across societies. However, the adaptive benefit of music is far from obvious. Music is highly flexible, generative and changes rapidly over time, consistent with a cultural creation hypothesis. In this paper, it is proposed that much of musical pitch and timing structure adapted to preexisting features of auditory processing that evolved for auditory scene analysis (ASA). Thus, music may have emerged initially as a cultural creation made possible by preexisting adaptations for ASA. However, some aspects of music, such as its emotional and social power, may have subsequently proved beneficial for survival and led to adaptations that enhanced musical behaviour. Ontogenetic and phylogenetic evidence is considered in this regard. In particular, enhanced auditory–motor pathways in humans that enable movement entrainment to music and consequent increases in social cohesion, and pathways enabling music to affect reward centres in the brain should be investigated as possible musical adaptations. It is concluded that the origins of music are complex and probably involved exaptation, cultural creation and evolutionary adaptation. PMID:25646512
Perception of Leitmotives in Richard Wagner's Der Ring des Nibelungen.
Baker, David J; Müllensiefen, Daniel
2017-01-01
The music of Richard Wagner tends to generate very diverse judgments indicative of the complex relationship between listeners and the sophisticated musical structures in Wagner's music. This paper presents findings from two listening experiments using the music from Wagner's Der Ring des Nibelungen that explore musical as well as individual listener parameters to better understand how listeners are able to hear leitmotives, a compositional device closely associated with Wagner's music. Results confirm findings from a previous experiment showing that specific expertise with Wagner's music can account for a greater portion of the variance in an individual's ability to recognize and remember musical material compared to measures of generic musical training. Results also explore how acoustical distance of the leitmotives affects memory recognition using a chroma similarity measure. In addition, we show how characteristics of the compositional structure of the leitmotives contribute to their salience and memorability. A final model is then presented that accounts for the aforementioned individual differences factors, as well as parameters of musical surface and structure. Our results suggest that future work in music perception may consider both individual differences variables beyond musical training, as well as symbolic features and audio commonly used in music information retrieval, in order to build robust models of musical perception and cognition.
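The acoustical distance mentioned above is based on a chroma similarity measure. As one hedged way such a measure could be computed, the sketch below averages chroma features over each recording and takes their cosine similarity; the file names are hypothetical and this is not necessarily the authors' exact measure.

```python
# Cosine similarity of averaged chroma vectors for two recordings (illustrative sketch).
import numpy as np
import librosa

def mean_chroma(path):
    y, sr = librosa.load(path, mono=True)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)   # (12, n_frames)
    return chroma.mean(axis=1)

def chroma_similarity(path_a, path_b):
    a, b = mean_chroma(path_a), mean_chroma(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file names, for illustration only.
print(chroma_similarity("sword_motive.wav", "valhalla_motive.wav"))
```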
Zhang, Yizhen; Chen, Gang; Wen, Haiguang; Lu, Kun-Han; Liu, Zhongming
2017-12-06
Musical imagery is the human experience of imagining music without actually hearing it. The neural basis of this mental ability is unclear, especially for musicians capable of engaging in accurate and vivid musical imagery. Here, we created a visualization of an 8-minute symphony as a silent movie and used it as real-time cue for musicians to continuously imagine the music for repeated and synchronized sessions during functional magnetic resonance imaging (fMRI). The activations and networks evoked by musical imagery were compared with those elicited by the subjects directly listening to the same music. Musical imagery and musical perception resulted in overlapping activations at the anterolateral belt and Wernicke's area, where the responses were correlated with the auditory features of the music. Whereas Wernicke's area interacted within the intrinsic auditory network during musical perception, it was involved in much more complex networks during musical imagery, showing positive correlations with the dorsal attention network and the motor-control network and negative correlations with the default-mode network. Our results highlight the important role of Wernicke's area in forming vivid musical imagery through bilateral and anti-correlated network interactions, challenging the conventional view of segregated and lateralized processing of music versus language.
Assessing the Effect of Musical Congruency on Wine Tasting in a Live Performance Setting
Wang, Qian (Janice)
2015-01-01
At a wine tasting event with live classical music, we assessed whether participants would agree that certain wine and music pairings were congruent. We also assessed the effect of musical congruency on the wine tasting experience. The participants were given two wines to taste and two pieces of music—one chosen to match each wine—were performed live. Half of the participants tasted the wines while listening to the putatively more congruent music, the rest tasted the wines while listening to the putatively less congruent music. The participants rated the wine–music match and assessed the fruitiness, acidity, tannins, richness, complexity, length, and pleasantness of the wines. The results revealed that the music chosen to be congruent with each wine was indeed rated as a better match than the other piece of music. Furthermore, the music playing in the background also had a significant effect on the perceived acidity and fruitiness of the wines. These findings therefore provide further support for the view that music can modify the wine drinking experience. However, the present results leave open the question of whether the crossmodal congruency between music and wine itself has any overarching influence on the wine drinking experience. PMID:27433313
Heads Up, Shoulders Straight, Stick and Twirl Together
ERIC Educational Resources Information Center
Warrick, James
1977-01-01
With so many roles to juggle and so many complex music problems to resolve, some marching band directors overlook simple rules of thumb to increase their bands' visual and musical impact. Here are some guidelines. (Author/RK)
Ogunyemi, A O; Breen, H
1993-01-01
Musicogenic epilepsy is a rare disorder. Much remains to be learned about the electroclinical features. This report describes a patient who has been followed at our institution for 17 years, and was investigated with long-term telemetered simultaneous video-EEG recordings. She began to have seizures at the age of 10 years. She experienced complex partial seizures, often preceded by elementary auditory hallucination and complex auditory illusion. The seizures occurred in relation to singing, listening to music or thinking about music. She also had occasional generalized tonic clonic seizures during sleep. There was no significant antecedent history. The family history was negative for epilepsy. The physical examination was unremarkable. CT and MRI scans of the brain were normal. During long-term simultaneous video-EEG recordings, clinical and electrographic seizure activities were recorded in association with singing and listening to music. Mathematical calculation, copying or viewing geometric patterns and playing the game of chess failed to evoke seizures.
Pedagogical applications of cognitive research on musical improvisation
Biasutti, Michele
2015-01-01
This paper presents a model for the implementation of educational activities involving musical improvisation that is based on a review of the literature on the psychology of music. Psychology of music is a complex field of research in which quantitative and qualitative methods have been employed involving participants ranging from novices to expert performers. The cognitive research has been analyzed to propose a pedagogical approach to the development of processes rather than products that focus on an expert’s use of improvisation. The intention is to delineate a reflective approach that goes beyond the mere instruction of some current practices of teaching improvisation in jazz pedagogy. The review highlights that improvisation is a complex, multidimensional act that involves creative and performance behaviors in real-time in addition to processes such as sensory and perceptual encoding, motor control, performance monitoring, and memory storage and recall. Educational applications for the following processes are outlined: anticipation, use of repertoire, emotive communication, feedback, and flow. These characteristics are discussed in relation to the design of a pedagogical approach to musical improvisation based on reflection and metacognition development. PMID:26029147
López-Íñiguez, Guadalupe; Pozo, Juan Ignacio
2014-06-01
Despite increasing interest in teachers' and students' conceptions of learning and teaching, and how they influence their practice, there are few studies testing the influence of teachers' conceptions on their students' learning. This study tests how teaching conception (TC; with a distinction between direct and constructive) influences students' representations regarding sheet music. Sixty students (8-12 years old) from music conservatories participated: 30 took lessons with teachers holding a constructive TC and another 30 with teachers shown to hold a direct TC. Children were given a musical comprehension task in which they were asked to select and rank the contents they needed to learn. These contents had different levels of processing and complexity: symbolic, analytical, and referential. Three factorial ANOVAs, two one-way ANOVAs, and four 2 × 3 repeated-measures ANOVAs were used to analyse the effects of, and the interaction between, the independent variables TC and class on the total cards selected, their ranking, and each sub-category (the three processing levels). ANOVAs on the selection and ranking of these contents showed that teachers' conceptions appear to significantly shape the way the students understand the music. Students of constructive teachers have a more complex and deeper understanding of music and select more elements for learning scores than those of traditional teachers. Teaching conception also influences the way in which children rank those elements. No difference exists between the way 8- and 12-year-olds learn scores. Children's understanding of the scores is more complex than assumed in other studies. © 2013 The British Psychological Society.
Using music in leisure to enhance social relationships with patients with complex disabilities.
Magee, Wendy L; Bowen, Ceri
2008-01-01
Acquired and complex disabilities stemming from severe brain damage and neurological illness usually affect communication, cognitive, physical or sensory abilities in any combination. Improved understanding of the care needs of people with complex disabilities has addressed many functional aspects of care. However, relatives and carers can be left at a loss knowing how to provide or share in meaningful activities with someone who can no longer communicate or respond to their environment. As a result, the individual with complex needs can become increasingly isolated from their previous support network. Based on theoretical foundations for music as instinctive in human beings, this paper offers practical recommendations for the creative use of music for people with complex physical and sensory needs which prevent active participation in previous leisure pursuits. Recommendations are made for relatives and carers to manage the environment of an individual who has limited capacity to control their environment or make choices about leisure activities. Particular emphasis is given to activities which can be shared between a facilitator and the patient, thereby enhancing social relationships.
How Can Multi-Site Evaluations Be Participatory?
ERIC Educational Resources Information Center
Lawrenz, Frances; Huffman, Douglas
2003-01-01
Multi-site evaluations are becoming increasingly common in federal funding portfolios. Although much thought has been given to multi-site evaluation, there has been little emphasis on how it might interact with participatory evaluation. Therefore, this paper reviews several National Science Foundation educational, multi-site evaluations for the…
Modeling Musical Context With Word2Vec
NASA Astrophysics Data System (ADS)
Herremans, Dorien; Chuan, Ching-Hua
2017-05-01
We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Secondly, an excerpt of the Moonlight Sonata from Beethoven was altered by replacing slices based on context similarity. The resulting music shows that the selected slice based on similar word2vec context also has a relatively short tonal distance from the original slice.
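A minimal sketch of the modeling idea described above, under stated assumptions: chord slices are encoded as strings and treated as "words", pieces as "sentences", and embedded with gensim's skip-gram word2vec with negative sampling. The toy corpus and hyperparameters are invented, not the paper's Beethoven dataset.

```python
# Skip-gram word2vec over chord-slice "sentences" (toy sketch, invented corpus).
from gensim.models import Word2Vec

pieces = [
    ["C:maj", "G:maj", "A:min", "F:maj", "C:maj"],
    ["A:min", "F:maj", "C:maj", "G:maj", "A:min"],
    ["C:maj", "F:maj", "G:maj", "C:maj"],
]

model = Word2Vec(
    sentences=pieces,
    vector_size=32,   # embedding dimensionality
    window=2,         # musical context: two slices on each side
    sg=1,             # skip-gram
    negative=5,       # negative sampling
    min_count=1,
    seed=0,
)

# Slices sharing tonal context should end up close in the embedded space,
# which is the property the paper's t-SNE visualization exploits.
print(model.wv.most_similar("C:maj", topn=3))
```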
Matsuyama, Kumi; Ohsawa, Isao; Ogawa, Toyoaki
2007-04-01
Tuberous sclerosis complex (TSC) is an autosomal dominant disorder that manifests with symptoms that might include mental retardation, epilepsy, skin lesions, and hamartomas in the heart, brain, and kidneys. Anecdotal reports have characterized children with TSC as having high music responsiveness despite their developmental delay. This study is intended to investigate this putative musical skill of children with TSC and to elucidate the presence of non-delayed facets of their development. The study examined 11 children with TSC, 10 children with DSM-IV autism, and 92 healthy children who participated as control subjects. Correlations were examined between results on the Non-Verbal MMRC, a validated musical responsiveness battery, and results on a scientifically accepted standardized pediatric developmental test, the New Edition of the Kyoto Scale of Psychological Development. Inter-rater reliability among the three raters was also assessed. Among children with TSC, the rhythm and melody scores on the Non-Verbal MMRC showed no significant correlation with developmental age (DA); in contrast, a significant correlation was found among normal children and those with autism. Moreover, inter-rater reliability was good. The results demonstrate that children with TSC show high responsiveness to musical stimuli despite otherwise delayed development (e.g., language, cognition, motor skills). This report is the first to state that children with TSC show a distinctive pattern in the correlation between music responsiveness and developmental age. These findings indicate a non-delayed area of TSC children's development and suggest the use of music as a therapeutic intervention.
Evidence for shared cognitive processing of pitch in music and language.
Perrachione, Tyler K; Fedorenko, Evelina G; Vinke, Louis; Gibson, Edward; Dilley, Laura C
2013-01-01
Language and music epitomize the complex representational and computational capacities of the human mind. Strikingly similar in their structural and expressive features, a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct--either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, conveying pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring consistency in individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, that is consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music.
Psychoanalytic and musical ambiguity: the tritone in gee, officer krupke.
Jaffee Nagel, Julie
2010-02-01
The poignant and timeless Broadway musical West Side Story is viewed from the standpoint of taking musical forms as psychoanalytic data. The musical configuration of notes called the tritone (or diabolus in musica) is taken as a sonic metaphor expressing ambiguity both in musical vocabulary and in mental life. The tritone, which historically and harmonically represents instability, is heard throughout the score and emphasizes the intrapsychic, interpersonal, and social dramas that unfold within and between the two gangs in West Side Story. Particular emphasis is given to the comic but exceedingly sober song Gee, Officer Krupke. Bernstein's sensitivity to the ambiguity and tension inherent in the tritone in West Side Story is conceptualized as an intersection of music theory and theories of mind; this perspective holds implications for clinical practice and transports psychoanalytic concepts from the couch to the Broadway stage and into the community to address the complexities of love, hate, aggression, prejudice, and violence. Ultimately, West Side Story cross-pollinates music and theater, as well as music and psychoanalytic concepts.
The relationship between musical skills, music training, and intonation analysis skills.
Dankovicová, Jana; House, Jill; Crooks, Anna; Jones, Katie
2007-01-01
Few attempts have been made to look systematically at the relationship between musical and intonation analysis skills, a relationship that has been to date suggested only by informal observations. Following Mackenzie Beck (2003), who showed that musical ability was a useful predictor of general phonetic skills, we report on two studies investigating the relationship between musical skills, musical training, and intonation analysis skills in English. The specially designed music tasks targeted pitch direction judgments and tonal memory. The intonation tasks involved locating the nucleus, identifying the nuclear tone in stimuli of different length and complexity, and same/different contour judgments. The subjects were university students with basic training in intonation analysis. Both studies revealed an overall significant relationship between musical training and intonation task scores, and between the music test scores and intonation test scores. A more detailed analysis, focusing on the relationship between the individual music and intonation tests, yielded a more complicated picture. The results are discussed with respect to differences and similarities between music and intonation, and with respect to form and function of intonation. Implications of musical training on development of intonation analysis skills are considered. We argue that it would be beneficial to investigate the differences between musically trained and untrained subjects in their analysis of both musical stimuli and intonational form from a cognitive point of view.
Ferreri, Laura; Bigand, Emmanuel; Bard, Patrick; Bugaiska, Aurélia
2015-01-01
Music can be thought of as a complex stimulus able to enrich the encoding of an event thus boosting its subsequent retrieval. However, several findings suggest that music can also interfere with memory performance. A better understanding of the behavioral and neural processes involved can substantially improve knowledge and shed new light on the most efficient music-based interventions. Based on fNIRS studies on music, episodic encoding, and the dorsolateral prefrontal cortex (PFC), this work aims to extend previous findings by monitoring the entire lateral PFC during both encoding and retrieval of verbal material. Nineteen participants were asked to encode lists of words presented with either background music or silence and subsequently tested during a free recall task. Meanwhile, their PFC was monitored using a 48-channel fNIRS system. Behavioral results showed greater chunking of words under the music condition, suggesting the employment of associative strategies for items encoded with music. fNIRS results showed that music provided a less demanding way of modulating both episodic encoding and retrieval, with a general prefrontal decreased activity under the music versus silence condition. This suggests that music-related memory processes rely on specific neural mechanisms and that music can positively influence both episodic encoding and retrieval of verbal information. PMID:26508813
King, Wade; Ahmed, Shihab U; Baisden, Jamie; Patel, Nileshkumar; Kennedy, David J; Duszynski, Belinda; MacVicar, John
2015-02-01
To assess the evidence on the validity of sacral lateral branch blocks and the effectiveness of sacral lateral branch thermal radiofrequency neurotomy in managing sacroiliac complex pain. Systematic review with comprehensive analysis of all published data. Six reviewers searched the literature on sacral lateral branch interventions. Each assessed the methodologies of studies found and the quality of the evidence presented. The outcomes assessed were diagnostic validity and effectiveness of treatment for sacroiliac complex pain. The evidence found was appraised in accordance with the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) system of evaluating scientific evidence. The searches yielded two primary publications on sacral lateral branch blocks and 15 studies of the effectiveness of sacral lateral branch thermal radiofrequency neurotomy. One study showed multisite, multidepth sacral lateral branch blocks can anesthetize the posterior sacroiliac ligaments. Therapeutic studies show sacral lateral branch thermal radiofrequency neurotomy can relieve sacroiliac complex pain to some extent. The evidence of the validity of these blocks and the effectiveness of this treatment were rated as moderate in accordance with the GRADE system. The literature on sacral lateral branch interventions is sparse. One study demonstrates the face validity of multisite, multidepth sacral lateral branch blocks for diagnosis of posterior sacroiliac complex pain. Some evidence of moderate quality exists on therapeutic procedures, but it is insufficient to determine the indications and effectiveness of sacral lateral branch thermal radiofrequency neurotomy, and more research is required. Wiley Periodicals, Inc.
Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J
2015-05-01
Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.
Beischel, Kelly P; Hart, Julie; Turkelson, Sandra L
2016-01-01
Multisite education research projects have many benefits as well as perceived barriers. In this article, we share our experiences with a multisite education research project and the barriers we overcame to reap the benefits. The outcome of our research resulted in increased rigor, role-modeling professional collaboration, and promotion of future multisite education studies. The strategies presented in this article will help alleviate perceived barriers and ameliorate the process of conducting multisite education research studies.
Investigation of musicality in birdsong.
Rothenberg, David; Roeske, Tina C; Voss, Henning U; Naguib, Marc; Tchernichovski, Ofer
2014-02-01
Songbirds spend much of their time learning, producing, and listening to complex vocal sequences we call songs. Songs are learned via cultural transmission, and singing, usually by males, has a strong impact on the behavioral state of the listeners, often promoting affiliation, pair bonding, or aggression. What is it in the acoustic structure of birdsong that makes it such a potent stimulus? We suggest that birdsong potency might be driven by principles similar to those that make music so effective in inducing emotional responses in humans: a combination of rhythms and pitches-and the transitions between acoustic states-affecting emotions through creating expectations, anticipations, tension, tension release, or surprise. Here we propose a framework for investigating how birdsong, like human music, employs the above "musical" features to affect the emotions of avian listeners. First we analyze songs of thrush nightingales (Luscinia luscinia) by examining their trajectories in terms of transitions in rhythm and pitch. These transitions show gradual escalations and graceful modifications, which are comparable to some aspects of human musicality. We then explore the feasibility of stripping such putative musical features from the songs and testing how this might affect patterns of auditory responses, focusing on fMRI data in songbirds that demonstrate the feasibility of such approaches. Finally, we explore ideas for investigating whether musical features of birdsong activate avian brains and affect avian behavior in manners comparable to music's effects on humans. In conclusion, we suggest that birdsong research would benefit from current advances in music theory by attempting to identify structures that are designed to elicit listeners' emotions and then testing for such effects experimentally. Birdsong research that takes into account the striking complexity of song structure in light of its more immediate function - to affect behavioral state in listeners - could provide a useful animal model for studying basic principles of music neuroscience in a system that is very accessible for investigation, and where developmental auditory and social experience can be tightly controlled. Copyright © 2013 Elsevier B.V. All rights reserved.
The music of morality and logic.
Mesz, Bruno; Rodriguez Zivic, Pablo H; Cecchi, Guillermo A; Sigman, Mariano; Trevisan, Marcos A
2015-01-01
Musical theory has built on the premise that musical structures can refer to something different from themselves (Nattiez and Abbate, 1990). The aim of this work is to statistically corroborate the intuitions of musical thinkers and practitioners starting at least with Plato, that music can express complex human concepts beyond merely "happy" and "sad" (Mattheson and Lenneberg, 1958). To do so, we ask whether musical improvisations can be used to classify the semantic category of the word that triggers them. We investigated two specific domains of semantics: morality and logic. While morality has been historically associated with music, logic concepts, which involve more abstract forms of thought, are more rarely associated with music. We examined musical improvisations inspired by positive and negative morality (e.g., good and evil) and logic concepts (true and false), analyzing the associations between these words and their musical representations in terms of acoustic and perceptual features. We found that music conveys information about valence (good and true vs. evil and false) with remarkable consistency across individuals. This information is carried by several musical dimensions which act in synergy to achieve very high classification accuracy. Positive concepts are represented by music with more ordered pitch structure and lower harmonic and sensorial dissonance than negative concepts. Music also conveys information indicating whether the word which triggered it belongs to the domains of logic or morality (true vs. good), principally through musical articulation. In summary, improvisations consistently map logic and morality information to specific musical dimensions, testifying the capacity of music to accurately convey semantic information in domains related to abstract forms of thought.
Spontaneous sensorimotor coupling with multipart music.
Hurley, Brian K; Martens, Peter A; Janata, Petr
2014-08-01
Music often evokes spontaneous movements in listeners that are synchronized with the music, a phenomenon that has been characterized as being in "the groove." However, the musical factors that contribute to listeners' initiation of stimulus-coupled action remain unclear. Evidence suggests that newly appearing objects in auditory scenes orient listeners' attention, and that in multipart music, newly appearing instrument or voice parts can engage listeners' attention and elicit arousal. We posit that attentional engagement with music can influence listeners' spontaneous stimulus-coupled movement. Here, 2 experiments-involving participants with and without musical training-tested the effect of staggering instrument entrances across time and varying the number of concurrent instrument parts within novel multipart music on listeners' engagement with the music, as assessed by spontaneous sensorimotor behavior and self-reports. Experiment 1 assessed listeners' moment-to-moment ratings of perceived groove, and Experiment 2 examined their spontaneous tapping and head movements. We found that, for both musically trained and untrained participants, music with more instruments led to higher ratings of perceived groove, and that music with staggered instrument entrances elicited both increased sensorimotor coupling and increased reports of perceived groove. Although untrained participants were more likely to rate music as higher in groove, trained participants showed greater propensity for tapping along, and they did so more accurately. The quality of synchronization of head movements with the music, however, did not differ as a function of training. Our results shed new light on the relationship between complex musical scenes, attention, and spontaneous sensorimotor behavior.
Music Tune Restoration Based on a Mother Wavelet Construction
NASA Astrophysics Data System (ADS)
Fadeev, A. S.; Konovalov, V. I.; Butakova, T. I.; Sobetsky, A. V.
2017-01-01
The use of a mother wavelet function obtained from a local part of the analyzed music signal is proposed. Requirements for the constructed function are formulated, and the implementation technique and its properties are described. The suggested approach allows construction of mother wavelet families with specified identifying properties. Consequently, this makes it possible to identify the basic signal variations of complex music signals, including local time-frequency characteristics of the basic signal.
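The construction itself is not spelled out in the abstract, so the following is only a schematic sketch, under the assumption that a mother-wavelet-like kernel can be built by excerpting a local segment of the signal, tapering it, and normalizing it to zero mean and unit energy before scanning it across the recording; the function names and the synthetic motif are hypothetical.

```python
import numpy as np

def make_mother_kernel(signal, start, length):
    """Build a zero-mean, unit-energy kernel from a local slice of the signal.

    Admissibility-style conditions for a mother wavelet include zero mean;
    here the edges are also tapered so the kernel is well localized.
    """
    seg = np.asarray(signal[start:start + length], dtype=float)
    seg = seg * np.hanning(len(seg))            # taper toward the edges
    seg -= seg.mean()                           # enforce zero mean
    seg /= np.sqrt(np.sum(seg ** 2)) + 1e-12    # unit energy
    return seg

def kernel_scan(signal, kernel):
    """Correlate the kernel with the full signal (single-scale analysis)."""
    return np.correlate(signal, kernel, mode="same")

# Toy example: a synthetic 'tune' containing a repeated motif.
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
motif = np.sin(2 * np.pi * 440 * t) * (np.sin(2 * np.pi * 2 * t) > 0)
kernel = make_mother_kernel(motif, start=1000, length=512)
response = kernel_scan(motif, kernel)           # peaks where the motif recurs
```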
The Discrimination of Pitch in Pulse Trains and Speech
1984-04-12
music from their corresponding abilities with the simpler sounds which could serve as the components of the more complex ones (p. 175). Two ... reported normal hearing. Previous researchers have found that, on tests of tone or melody-sequence perception, musically trained subjects perform better ... than individuals who have not had musical training (e.g. Stucker, 1980; Raz and Brandt, 1977; Zatorre, 1979). For this reason, subjects were asked to
Angular Superresolution for a Scanning Antenna with Simulated Complex Scatterer-Type Targets
2002-05-01
Approved for public release; distribution unlimited. The Scan-MUSIC (MUltiple SIgnal Classification), or SMUSIC, algorithm was developed by the Millimeter ... with the use of a single rotatable sensor scanning in an angular region of interest. This algorithm has been adapted and extended from the MUSIC ... simulation.
2016-07-01
music of varying complexities. We did observe improvement from the first to the last lesson and the subject expressed appreciation for the training...hearing threshold data. C. Collect pre- and post-operative speech perception data. D. Collect music appraisal and pitch data. E. Administer training...localization, and music data. We are also collecting quality of life and functional questionnaire data. In Figure 2, we show post-operative speech
Ilari, Beatriz S.; Keller, Patrick; Damasio, Hanna; Habibi, Assal
2016-01-01
Developmental research in music has typically centered on the study of single musical skills (e.g., singing, listening) and has been conducted with middle class children who learn music in schools and conservatories. Information on the musical development of children from different social strata, who are enrolled in community-based music programs, remains elusive. This study examined the development of musical skills in underprivileged children who were attending an El Sistema-inspired program in Los Angeles. We investigated how children, predominantly of Latino ethnicity, developed musically with respect to the following musical skills – pitch and rhythmic discrimination, pitch matching, singing a song from memory, and rhythmic entrainment – over the course of 1 year. Results suggested that participation in an El Sistema-inspired program affects children’s musical development in distinct ways; with pitch perception and production skills developing faster than rhythmic skills. Furthermore, children from the same ethnic and social background, who did not participate in the El Sistema-inspired music program, showed a decline in singing and pitch discrimination skills over the course of 1 year. Taken together, these results are consistent with the idea of musical development as a complex, spiraling and recursive process that is influenced by several factors including type of musical training. Implications for future research are outlined. PMID:26869964
Boso, Marianna; Emanuele, Enzo; Minazzi, Vera; Abbamonte, Marta; Politi, Pierluigi
2007-09-01
Data on the potential behavioral effects of music therapy in autism are scarce. The aim of this study was to investigate whether a musical training program based on interactive music therapy sessions could enhance the behavioral profile and the musical skills of young adults affected by severe autism. Young adults (N = 8) with severe (Childhood Autism Rating Scale >30) autism took part in a total of 52 weekly active music therapy sessions lasting 60 minutes. Each session consisted of a wide range of different musical activities including singing, piano playing, and drumming. Clinical rating scales included the Clinical Global Impression (CGI) scale and the Brief Psychiatric Rating Scale (BPRS). Musical skills-including singing a short or long melody, playing the C scale on a keyboard, music absorption, rhythm reproduction, and execution of complex rhythmic patterns-were rated on a 5-point Likert-type scale ranging from "completely/entirely absent" to "completely/entirely present." At the end of the 52-week training period, significant improvements were found on both the CGI and BPRS scales. Similarly, the patients' musical skills significantly ameliorated as compared to baseline ratings. Our pilot data seem to suggest that active music therapy sessions could be of aid in improving autistic symptoms, as well as personal musical skills in young adults with severe autism.
Unlimited multistability in multisite phosphorylation systems.
Thomson, Matthew; Gunawardena, Jeremy
2009-07-09
Reversible phosphorylation on serine, threonine and tyrosine is the most widely studied posttranslational modification of proteins. The number of phosphorylated sites on a protein (n) shows a significant increase from prokaryotes, with n = 7 sites, to eukaryotes, with examples having n ≥ 150 sites. Multisite phosphorylation has many roles and site conservation indicates that increasing numbers of sites cannot be due merely to promiscuous phosphorylation. A substrate with n sites has an exponential number (2^n) of phospho-forms and individual phospho-forms may have distinct biological effects. The distribution of these phospho-forms and how this distribution is regulated have remained unknown. Here we show that, when kinase and phosphatase act in opposition on a multisite substrate, the system can exhibit distinct stable phospho-form distributions at steady state and that the maximum number of such distributions increases with n. Whereas some stable distributions are focused on a single phospho-form, others are more diffuse, giving the phospho-proteome the potential to behave as a fluid regulatory network able to encode information and flexibly respond to varying demands. Such plasticity may underlie complex information processing in eukaryotic cells and suggests a functional advantage in having many sites. Our results follow from the unusual geometry of the steady-state phospho-form concentrations, which we show to constitute a rational algebraic curve, irrespective of n. We thereby reduce the complexity of calculating steady states from simulating 3 × 2^n differential equations to solving two algebraic equations, while treating parameters symbolically. We anticipate that these methods can be extended to systems with multiple substrates and multiple enzymes catalysing different modifications, as found in posttranslational modification 'codes' such as the histone code. Whereas simulations struggle with exponentially increasing molecular complexity, mathematical methods of the kind developed here can provide a new language in which to articulate the principles of cellular information processing.
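As a rough illustration of the combinatorics involved, the sketch below enumerates the 2^n phospho-forms for a small n and computes the steady-state distribution of a simplified sequential chain with unsaturated (first-order) kinase and phosphatase rates. This linear toy model cannot reproduce the multistability analysed in the paper, which requires enzyme saturation and sharing; all rate values here are assumptions.

```python
import itertools
import numpy as np

n = 4  # number of phosphorylation sites (toy value)

# A substrate with n sites has 2**n distinct phospho-forms.
phospho_forms = list(itertools.product((0, 1), repeat=n))
print(len(phospho_forms), "phospho-forms for n =", n)   # 16

# Toy steady state of a sequential chain S0 <-> S1 <-> ... <-> Sn with
# first-order kinase/phosphatase rates.  Detailed balance gives a geometric
# distribution over the number of occupied sites; this only illustrates what
# a phospho-form distribution looks like, not the paper's algebraic reduction.
k_kin, k_pho = 1.5, 1.0
ratio = k_kin / k_pho
weights = np.array([ratio ** j for j in range(n + 1)])
distribution = weights / weights.sum()     # fraction with j sites occupied
print(np.round(distribution, 3))
```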
Vibrotactile Discrimination of Musical Timbre
ERIC Educational Resources Information Center
Russo, Frank A.; Ammirante, Paolo; Fels, Deborah I.
2012-01-01
Five experiments investigated the ability to discriminate between musical timbres based on vibrotactile stimulation alone. Participants made same/different judgments on pairs of complex waveforms presented sequentially to the back through voice coils embedded in a conforming chair. Discrimination between cello, piano, and trombone tones matched…
Music in film and animation: experimental semiotics applied to visual, sound and musical structures
NASA Astrophysics Data System (ADS)
Kendall, Roger A.
2010-02-01
The relationship of music to film has only recently received the attention of experimental psychologists and quantificational musicologists. This paper outlines theory, semiotical analysis, and experimental results using relations among variables of temporally organized visuals and music. 1. A comparison and contrast is developed among the ideas in semiotics and experimental research, including historical and recent developments. 2. Musicological Exploration: The resulting multidimensional structures of associative meanings, iconic meanings, and embodied meanings are applied to the analysis and interpretation of a range of film with music. 3. Experimental Verification: A series of experiments testing the perceptual fit of musical and visual patterns layered together in animations determined goodness of fit between all pattern combinations, results of which confirmed aspects of the theory. However, exceptions were found when the complexity of the stratified stimuli resulted in cognitive overload.
[Neuroarchitecture of musical emotions].
Sel, Alejandra; Calvo-Merino, Beatriz
2013-03-01
The emotional response to music, or musical emotion, is a universal response that draws on diverse psychological processes implemented in a large array of neural structures and mechanisms. Studies using electroencephalography, functional magnetic resonance imaging, lesion studies, and individuals with extensive musical training have begun to elucidate some of these mechanisms. The objective of this article is to review the most relevant studies that have tried to identify the neural correlates of musical emotion from the more automatic to the more complex processes, and to understand how these correlates interact in the brain. The article describes how the presentation of music perceived as emotional is associated with a rapid autonomic response in thalamic and subthalamic structures, accompanied by changes in the electrodermal and endocrine responses. It also explains how musical emotion processing activates auditory cortex, as well as a series of limbic and paralimbic structures, such as the amygdala, the anterior cingulate cortex or the hippocampus, demonstrating the relevant contribution of the limbic system to musical emotion. Further, it details how musical emotion depends to a great extent on semantic and syntactic processes carried out in temporal and parietofrontal areas, respectively. Some of the recent works demonstrating that musical emotion relies heavily on emotional simulation are also mentioned. Finally, a summary of these studies, their limitations, and suggestions for further research on the neuroarchitecture of musical emotion are given.
Moore, Kimberly Sena
2013-01-01
Emotion regulation (ER) is an internal process through which a person maintains a comfortable state of arousal by modulating one or more aspects of emotion. The neural correlates underlying ER suggest an interplay between cognitive control areas and areas involved in emotional reactivity. Although some studies have suggested that music may be a useful tool in ER, few studies have examined the links between music perception/production and the neural mechanisms that underlie ER and resulting implications for clinical music therapy treatment. Objectives of this systematic review were to explore and synthesize what is known about how music and music experiences impact neural structures implicated in ER, and to consider clinical implications of these findings for structuring music stimuli to facilitate ER. A comprehensive electronic database search resulted in 50 studies that met predetermined inclusion and exclusion criteria. Pertinent data related to the objective were extracted and study outcomes were analyzed and compared for trends and common findings. Results indicated there are certain music characteristics and experiences that produce desired and undesired neural activation patterns implicated in ER. Desired activation patterns occurred when listening to preferred and familiar music, when singing, and (in musicians) when improvising; undesired activation patterns arose when introducing complexity, dissonance, and unexpected musical events. Furthermore, the connection between music-influenced changes in attention and its link to ER was explored. Implications for music therapy practice are discussed and preliminary guidelines for how to use music to facilitate ER are shared.
Neural responses to sounds presented on and off the beat of ecologically valid music
Tierney, Adam; Kraus, Nina
2013-01-01
The tracking of rhythmic structure is a vital component of speech and music perception. It is known that sequences of identical sounds can give rise to the percept of alternating strong and weak sounds, and that this percept is linked to enhanced cortical and oscillatory responses. The neural correlates of the perception of rhythm elicited by ecologically valid, complex stimuli, however, remain unexplored. Here we report the effects of a stimulus' alignment with the beat on the brain's processing of sound. Human subjects listened to short popular music pieces while simultaneously hearing a target sound. Cortical and brainstem electrophysiological onset responses to the sound were enhanced when it was presented on the beat of the music, as opposed to shifted away from it. Moreover, the size of the effect of alignment with the beat on the cortical response correlated strongly with the ability to tap to a beat, suggesting that the ability to synchronize to the beat of simple isochronous stimuli and the ability to track the beat of complex, ecologically valid stimuli may rely on overlapping neural resources. These results suggest that the perception of musical rhythm may have robust effects on processing throughout the auditory system. PMID:23717268
Schoeb, Veronika; Zosso, Amélie
2012-09-01
To identify professional musicians' representation of health and illness and to identify its perceived impact on musical performance. A total of 11 professional musicians participated in this phenomenological study. Five of the musicians were healthy, and the others suffered debilitating physical health problems caused by playing their instruments. Semi-structured interviews were conducted, transcribed verbatim and analysed. Thematic analysis, including a six-step coding process, was performed (ATLAS-ti 6). Three major themes emerged from the data: music as art, the health of musicians, and learning through experience. The first theme, music as art, was discussed by both groups; they talked about such things as passion, joy, sense of identity, sensitivity, and a musician's hard life. Discussions of the second theme, the health of musicians, revealed a complex link between health and performance, including the dramatic impact of potential or actual health problems on musical careers. Not surprisingly, musicians with health problems were more concerned with dysfunctional body parts (mostly the hand), whereas healthy musicians focused on maintaining the health of the entire person. The third theme, learning through experience, focused on the dynamic nature of health and included the life-long learning approach, not only in terms of using the body in musical performance but also in daily life. The centre of a musician's life is making music in which the body plays an important part. Participants in this study evidenced a complex link between health and musical performance, and maintaining health was perceived by these musicians as a dynamic balance. Our results suggest that learning through experience might help musicians adapt to changes related to their bodies.
Melodic Contour Identification and Music Perception by Cochlear Implant Users
Galvin, John J.; Fu, Qian-Jie; Shannon, Robert V.
2013-01-01
Research and outcomes with cochlear implants (CIs) have revealed a dichotomy in the cues necessary for speech and music recognition. CI devices typically transmit 16–22 spectral channels, each modulated slowly in time. This coarse representation provides enough information to support speech understanding in quiet and rhythmic perception in music, but not enough to support speech understanding in noise or melody recognition. Melody recognition requires some capacity for complex pitch perception, which in turn depends strongly on access to spectral fine structure cues. Thus, temporal envelope cues are adequate for speech perception under optimal listening conditions, while spectral fine structure cues are needed for music perception. In this paper, we present recent experiments that directly measure CI users’ melodic pitch perception using a melodic contour identification (MCI) task. While normal-hearing (NH) listeners’ performance was consistently high across experiments, MCI performance was highly variable across CI users. CI users’ MCI performance was significantly affected by instrument timbre, as well as by the presence of a competing instrument. In general, CI users had great difficulty extracting melodic pitch from complex stimuli. However, musically-experienced CI users often performed as well as NH listeners, and MCI training in less experienced subjects greatly improved performance. With fixed constraints on spectral resolution, such as occur with hearing loss or an auditory prosthesis, training and experience can provide considerable improvements in music perception and appreciation. PMID:19673835
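A standard way to appreciate the "coarse representation" point is a noise-vocoder simulation, which keeps only slowly varying per-band envelopes and discards spectral fine structure. The sketch below is a generic illustration of that idea, not the stimuli used in the paper; the channel count, band edges, and cutoff frequencies are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, fs, n_channels=16, lo=100.0, hi=7000.0):
    """Crude noise vocoder: keep per-band temporal envelopes, discard fine structure."""
    edges = np.geomspace(lo, hi, n_channels + 1)        # log-spaced band edges
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(x))
    out = np.zeros_like(x, dtype=float)
    # Low-pass filter for envelope smoothing (~50 Hz keeps only slow modulations).
    b_env, a_env = butter(2, 50.0 / (fs / 2), btype="low")
    for f1, f2 in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [f1 / (fs / 2), f2 / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)
        env = filtfilt(b_env, a_env, np.abs(hilbert(band)))     # slow envelope
        out += filtfilt(b, a, noise) * np.clip(env, 0, None)    # envelope-modulated noise band
    return out

# Example: vocode one second of a 440 Hz tone sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
vocoded = noise_vocode(np.sin(2 * np.pi * 440 * t), fs)
```

Because each channel carries only a smoothed envelope of band-limited noise, rhythm and broad spectral shape survive while the harmonic fine structure needed for melodic pitch does not.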
Microbial bebop: creating music from complex dynamics in microbial ecology.
Larsen, Peter; Gilbert, Jack
2013-01-01
In order for society to make effective policy decisions on complex and far-reaching subjects, such as appropriate responses to global climate change, scientists must effectively communicate complex results to the non-scientifically specialized public. However, there are few ways to transform highly complicated scientific data into formats that are engaging to the general community. Taking inspiration from patterns observed in nature and from some of the principles of jazz bebop improvisation, we have generated Microbial Bebop, a method by which microbial environmental data are transformed into music. Microbial Bebop uses meter, pitch, duration, and harmony to highlight the relationships between multiple data types in complex biological datasets. We use a comprehensive microbial ecology time-course dataset collected at the L4 marine monitoring station in the Western English Channel as an example of microbial ecological data that can be transformed into music. Four compositions were generated (www.bio.anl.gov/MicrobialBebop.htm) from L4 Station data using Microbial Bebop. Each composition, though deriving from the same dataset, is created to highlight different relationships between environmental conditions and microbial community structure. The approach presented here can be applied to a wide variety of complex biological datasets.
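The authors' composition rules are not given in the abstract, so the following is a generic data-sonification sketch of the kind of mapping described (data values to pitch, rate of change to duration); the scale choice, the `sonify` helper, and the example values are all hypothetical.

```python
import numpy as np

def sonify(values, scale=(0, 2, 4, 7, 9), base_midi=60):
    """Map a data series onto MIDI note numbers using a pentatonic scale.

    Higher values map to higher scale degrees; note durations shorten when the
    series changes quickly, so dynamics in the data become rhythm.
    """
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (np.ptp(v) + 1e-12)              # scale to [0, 1]
    degrees = np.round(norm * (len(scale) * 2 - 1)).astype(int)
    notes = [base_midi + 12 * (d // len(scale)) + scale[d % len(scale)]
             for d in degrees]
    change = np.abs(np.diff(v, prepend=v[0]))
    durations = np.where(change > change.mean(), 0.25, 0.5)  # eighth vs quarter
    return list(zip(notes, durations))

# Hypothetical monthly chlorophyll-like measurements.
events = sonify([0.2, 0.3, 0.9, 1.4, 1.1, 0.6, 0.4, 0.8])
```

Each (note, duration) pair could then be written to a MIDI track, one track per data type, so that relationships between variables become audible as harmony and counterpoint.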
Infants prefer the musical meter of their own culture: a cross-cultural comparison.
Soley, Gaye; Hannon, Erin E
2010-01-01
Infants prefer native structures such as familiar faces and languages. Music is a universal human activity containing structures that vary cross-culturally. For example, Western music has temporally regular metric structures, whereas music of the Balkans (e.g., Bulgaria, Macedonia, Turkey) can have both regular and irregular structures. We presented 4- to 8-month-old American and Turkish infants with contrasting melodies to determine whether cultural background would influence their preferences for musical meter. In Experiment 1, American infants preferred Western over Balkan meter, whereas Turkish infants, who were familiar with both Western and Balkan meters, exhibited no preference. Experiments 2 and 3 presented infants with either a Western or Balkan meter paired with an arbitrary rhythm with complex ratios not common to any musical culture. Both Turkish and American infants preferred Western and Balkan meter to an arbitrary meter. Infants' musical preferences appear to be driven by culture-specific experience and a culture-general preference for simplicity. Copyright 2009 APA, all rights reserved.
What Does Music Sound Like for a Cochlear Implant User?
Jiam, Nicole T; Caldwell, Meredith T; Limb, Charles J
2017-09-01
Cochlear implant research and product development over the past 40 years have been heavily focused on speech comprehension with little emphasis on music listening and enjoyment. The relatively little understanding of how music sounds in a cochlear implant user stands in stark contrast to the overall degree of importance the public places on music and quality of life. The purpose of this article is to describe what music sounds like to cochlear implant users, using a combination of existing research studies and listener descriptions. We examined the published literature on music perception in cochlear implant users, particularly postlingual cochlear implant users, with an emphasis on the primary elements of music and recorded music. Additionally, we administered an informal survey to cochlear implant users to gather first-hand descriptions of music listening experience and satisfaction from the cochlear implant population. Limitations in cochlear implant technology lead to a music listening experience that is significantly distorted compared with that of normal hearing listeners. On the basis of many studies and sources, we describe how music is frequently perceived as out-of-tune, dissonant, indistinct, emotionless, and weak in bass frequencies, especially for postlingual cochlear implant users, which may in part explain why music enjoyment and participation levels are lower after implantation. Additionally, cochlear implant users report difficulty in specific musical contexts based on factors including but not limited to genre, presence of lyrics, timbres (woodwinds, brass, instrument families), and complexity of the perceived music. Future research and cochlear implant development should target these areas as parameters for improvement in cochlear implant-mediated music perception.
Masking effects of speech and music: does the masker's hierarchical structure matter?
Shi, Lu-Feng; Law, Yvonne
2010-04-01
Speech and music are time-varying signals organized by parallel hierarchical rules. Through a series of four experiments, this study compared the masking effects of single-talker speech and instrumental music on speech perception while manipulating the complexity of hierarchical and temporal structures of the maskers. Listeners' word recognition was found to be similar between hierarchically intact and disrupted speech or classical music maskers (Experiment 1). When sentences served as the signal, significantly greater masking effects were observed with disrupted than intact speech or classical music maskers (Experiment 2), although not with jazz or serial music maskers, which differed from the classical music masker in their hierarchical structures (Experiment 3). Removing the classical music masker's temporal dynamics or partially restoring it affected listeners' sentence recognition; yet, differences in performance between intact and disrupted maskers remained robust (Experiment 4). Hence, the effect of structural expectancy was largely present across maskers when comparing them before and after their hierarchical structure was purposefully disrupted. This effect seemed to lend support to the auditory stream segregation theory.
Epilepsy and music: practical notes.
Maguire, M
2017-04-01
Music processing occurs via a complex network of activity far beyond the auditory cortices. This network may become sensitised to music or may be recruited as part of a temporal lobe seizure, manifesting as either musicogenic epilepsy or ictal musical phenomena. The idea that sound waves may directly affect brain waves has led researchers to explore music as therapy for epilepsy. There is limited and low quality evidence of an antiepileptic effect with the Mozart Sonata K.448. We do not have a pathophysiological explanation for the apparent dichotomous effect of music on seizures. However, clinicians should consider musicality when treating patients with antiepileptic medication or preparing patients for epilepsy surgery. Carbamazepine and oxcarbazepine each may cause a reversible altered appreciation of pitch. Surgical cohort studies suggest that musical memory and perception may be affected, particularly following right temporal lobe surgery, and discussion of this risk should form part of presurgical counselling. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Musical hallucinosis: case reports and possible neurobiological models.
Mocellin, Ramon; Walterfang, Mark; Velakoulis, Dennis
2008-04-01
The perception of music without a stimulus, or musical hallucination, is reported in both organic and psychiatric disorders. It is most frequently described in the elderly with associated hearing loss and accompanied by some degree of insight. In this setting it is often referred to as 'musical hallucinosis'. The aim of the authors was to present examples of this syndrome and review the current understanding of its neurobiological basis. We describe three cases of persons experiencing musical hallucinosis in the context of hearing deficits with varying degrees of associated central nervous system abnormalities. Putative neurobiological mechanisms, in particular those involving de-afferentation of a complex auditory recognition system by complete or partial deafness, are discussed in the light of current information from the literature. Musical hallucinosis can be experienced in those patients with hearing impairment and is phenomenologically distinct from hallucinations described in psychiatric disorders.
Music and Autonomic Nervous System (Dys)function
Ellis, Robert J.; Thayer, Julian F.
2010-01-01
Despite a wealth of evidence for the involvement of the autonomic nervous system (ANS) in health and disease and the ability of music to affect ANS activity, few studies have systematically explored the therapeutic effects of music on ANS dysfunction. Furthermore, when ANS activity is quantified and analyzed, it is usually from a point of convenience rather than from an understanding of its physiological basis. After a review of the experimental and therapeutic literatures exploring music and the ANS, a “Neurovisceral Integration” perspective on the interplay between the central and autonomic nervous systems is introduced, and the associated implications for physiological, emotional, and cognitive health are explored. The construct of heart rate variability is discussed both as an example of this complex interplay and as a useful metric for exploring the sometimes subtle effect of music on autonomic response. Suggestions for future investigations using musical interventions are offered based on this integrative account. PMID:21197136
TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization
NASA Astrophysics Data System (ADS)
Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi
2011-12-01
We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
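For readers unfamiliar with the underlying method, the sketch below implements the classic narrowband MUSIC pseudospectrum for a uniform linear array, i.e. the eigendecomposition and noise-subspace search whose cost motivates variants such as TSaT-MUSIC. It is not the TSaT-MUSIC algorithm itself, and the array geometry, source angles, and noise level are assumptions.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Classic narrowband MUSIC pseudospectrum for a uniform linear array.

    X: complex snapshot matrix of shape (n_sensors, n_snapshots).
    d: sensor spacing in wavelengths.
    """
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues ascending
    En = eigvecs[:, : X.shape[0] - n_sources]        # noise subspace
    sensors = np.arange(X.shape[0])
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * sensors * np.sin(theta))   # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)                # peaks at source directions

# Toy example: two uncorrelated sources at -20 and 35 degrees, 8 sensors.
rng = np.random.default_rng(1)
n_sensors, snapshots = 8, 200
doas = np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(n_sensors), np.sin(doas)))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((n_sensors, snapshots))
               + 1j * rng.standard_normal((n_sensors, snapshots)))
angles, spectrum = music_spectrum(A @ S + noise, n_sources=2)
```

The cost of the angle (and, in localization variants, delay) grid search over the noise-subspace projection is what extensions such as TSaT-MUSIC aim to keep nearly constant as the number of incoming waves grows.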
Music and emotions: from enchantment to entrainment.
Vuilleumier, Patrik; Trost, Wiebke
2015-03-01
Producing and perceiving music engage a wide range of sensorimotor, cognitive, and emotional processes. Emotions are a central feature of the enjoyment of music, with a large variety of affective states consistently reported by people while listening to music. However, besides joy or sadness, music often elicits feelings of wonder, nostalgia, or tenderness, which do not correspond to emotion categories typically studied in neuroscience and whose neural substrates remain largely unknown. Here we review the similarities and differences in the neural substrates underlying these "complex" music-evoked emotions relative to other more "basic" emotional experiences. We suggest that these emotions emerge through a combination of activation in emotional and motivational brain systems (e.g., including reward pathways) that confer its valence to music, with activation in several other areas outside emotional systems, including motor, attention, or memory-related regions. We then discuss the neural substrates underlying the entrainment of cognitive and motor processes by music and their relation to affective experience. These effects have important implications for the potential therapeutic use of music in neurological or psychiatric diseases, particularly those associated with motor, attention, or affective disturbances. © 2015 New York Academy of Sciences.
An fMRI investigation of the cultural specificity of music memory.
Demorest, Steven M; Morrison, Steven J; Stambaugh, Laura A; Beken, Münir; Richards, Todd L; Johnson, Clark
2010-06-01
This study explored the role of culture in shaping music perception and memory. We tested the hypothesis that listeners demonstrate different patterns of activation associated with music processing (particularly right frontal cortex) when encoding and retrieving culturally familiar and unfamiliar stimuli, with the latter evoking broader activation consistent with more complex memory tasks. Subjects (n = 16) were right-handed adults born and raised in the USA (n = 8) or Turkey (n = 8) with minimal music training. Using fMRI procedures, we scanned subjects during two tasks: (i) listening to novel musical examples from their own culture and an unfamiliar culture and (ii) identifying which among a series of brief excerpts were taken from the longer examples. Both groups were more successful remembering music of their home culture. We found greater activation for culturally unfamiliar music listening in the left cerebellar region, right angular gyrus, posterior precuneus and right middle frontal area extending into the inferior frontal cortex. Subjects demonstrated greater activation in the cingulate gyrus and right lingual gyrus when engaged in recall of culturally unfamiliar music. This study provides evidence for the influence of culture on music perception and memory performance at both a behavioral and neurological level.
The role of emotion in musical improvisation: an analysis of structural features.
McPherson, Malinda J; Lopez-Gonzalez, Monica; Rankin, Summer K; Limb, Charles J
2014-01-01
One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements together in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys, have faster tempos, faster key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features together in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.
The Role of Emotion in Musical Improvisation: An Analysis of Structural Features
McPherson, Malinda J.; Lopez-Gonzalez, Monica; Rankin, Summer K.; Limb, Charles J.
2014-01-01
One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements together in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys, have faster tempos, faster key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features together in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion. PMID:25144200
The evolution of music in comparative perspective.
Fitch, W Tecumseh
2005-12-01
In this paper, I briefly review some comparative data that provide an empirical basis for research on the evolution of music making in humans. First, a brief comparison of music and language leads to discussion of design features of music, suggesting a deep connection between the biology of music and language. I then selectively review data on animal "music." Examining sound production in animals, we find examples of repeated convergent evolution or analogy (the evolution of vocal learning of complex songs in birds, whales, and seals). A fascinating but overlooked potential homology to instrumental music is provided by manual percussion in African apes. Such comparative behavioral data, combined with neuroscientific and developmental data, provide an important starting point for any hypothesis about how or why human music evolved. Regarding these functional and phylogenetic questions, I discuss some previously proposed functions of music, including Pinker's "cheesecake" hypothesis; Darwin's and others' sexual selection model; Dunbar's group "grooming" hypothesis; and Trehub's caregiving model. I conclude that only the last hypothesis receives strong support from currently available data. I end with a brief synopsis of Darwin's model of a songlike musical "protolanguage," concluding that Darwin's model is consistent with much of the available evidence concerning the evolution of both music and language. There is a rich future for empirical investigations of the evolution of music, both in investigations of individual differences among humans, and in interspecific investigations of musical abilities in other animals, especially those of our ape cousins, about which we know little.
A change management perspective on the introduction of music therapy to interprofessional teams.
Ledger, Alison; Edwards, Jane; Morley, Michael
2013-01-01
The purpose of this paper is to demonstrate how a change management perspective contributes new understandings about music therapy implementation processes. Narrative inquiry, ethnography, and arts-based research methods were used to explore the experiences of 12 music therapists who developed new services in healthcare settings. These experiences were interpreted using insights from the field of change management. A change management perspective helps to explain music therapists' experiences of resistance and struggle when introducing their services to established health care teams. Organisational change theories and models highlight possible strategies for implementing music therapy services successfully, such as organisational assessment, communication and collaboration with other workers, and the appointment of a service development steering group. This paper offers exciting possibilities for developing understanding of music therapists' experiences and for supporting the growth of this burgeoning profession. There is an important need for professional supervision for music therapists in the service development phase, to support them in coping with resistance and setbacks. Healthcare managers and workers are encouraged to consider ways in which they can support the development of a new music therapy service, such as observing music therapy work and sharing organisational priorities and cultures with a new music therapist. Previous accounts of music therapy service development have indicated that music therapists encounter complex interprofessional issues when they join an established health care team. A change management perspective offers a new lens through which music therapists' experiences can be further understood.
Yuskaitis, Christopher J.; Parviz, Mahsa; Loui, Psyche; Wan, Catherine Y.; Pearl, Phillip L.
2017-01-01
Music production and perception invoke a complex set of cognitive functions that rely on the integration of sensory-motor, cognitive, and emotional pathways. Pitch is a fundamental perceptual attribute of sound and a building block for both music and speech. Although the cerebral processing of pitch is not completely understood, recent advances in imaging and electrophysiology have provided insight into the functional and anatomical pathways of pitch processing. This review examines the current understanding of pitch processing, behavioral and neural variations that give rise to difficulties in pitch processing, and potential applications of music education for language processing disorders such as dyslexia. PMID:26092314
Sturm, Irene; Blankertz, Benjamin; Potes, Cristhian; Schalk, Gerwin; Curio, Gabriel
2014-01-01
Listening to music moves our minds and moods, stirring interest in its neural underpinnings. A multitude of compositional features drives the appeal of natural music. How such original music, where a composer's opus is not manipulated for experimental purposes, engages a listener's brain has not been studied until recently. Here, we report an in-depth analysis of two electrocorticographic (ECoG) data sets obtained over the left hemisphere in ten patients during presentation of either a rock song or a read-out narrative. First, the time courses of five acoustic features (intensity, presence/absence of vocals with lyrics, spectral centroid, harmonic change, and pulse clarity) were extracted from the audio tracks and found to be correlated with each other to varying degrees. In a second step, we uncovered the specific impact of each musical feature on ECoG high-gamma power (70-170 Hz) by calculating partial correlations to remove the influence of the other four features. In the music condition, the onset and offset of vocal lyrics in ongoing instrumental music were consistently identified within the group as the dominant driver for ECoG high-gamma power changes over temporal auditory areas, while concurrently subject-individual activation spots were identified for sound intensity, timbral, and harmonic features. The distinct cortical activations to vocal speech-related content embedded in instrumental music directly demonstrate that song integrated in instrumental music represents a distinct dimension in complex music. In contrast, in the speech condition, the full sound envelope was reflected in the high-gamma response rather than the onset or offset of the vocal lyrics. This demonstrates how the contributions of stimulus features that modulate the brain response differ across the two examples of a full-length natural stimulus, which suggests a context-dependent feature selection in the processing of complex auditory stimuli.
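As an illustration of the partial-correlation step described in the abstract above, the sketch below correlates one acoustic feature with a neural time series after regressing out the remaining features. It is a minimal, hypothetical example: the array names, shapes, and synthetic data are assumptions for demonstration only and do not reproduce the study's ECoG pipeline.

```python
# Minimal sketch of a partial correlation: correlate one acoustic feature with a
# neural time series after removing the linear contribution of the other features.
# Array names and synthetic data are illustrative, not taken from the study.
import numpy as np

def partial_correlation(neural, feature, confounds):
    """Correlate `neural` with `feature` after regressing the columns of
    `confounds` (time x n_confounds) out of both signals."""
    X = np.column_stack([np.ones(len(neural)), confounds])
    resid_neural = neural - X @ np.linalg.lstsq(X, neural, rcond=None)[0]
    resid_feature = feature - X @ np.linalg.lstsq(X, feature, rcond=None)[0]
    return np.corrcoef(resid_neural, resid_feature)[0, 1]

# Example with synthetic data: 5 correlated acoustic features, one neural channel
rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 5))          # e.g. intensity, vocals, centroid, ...
high_gamma = 0.8 * features[:, 1] + 0.3 * rng.standard_normal(1000)
r = partial_correlation(high_gamma, features[:, 1], np.delete(features, 1, axis=1))
print(f"partial r = {r:.2f}")
```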
Waid, Jeffrey; Wojciak, Armeda Stevenson
2017-10-01
Sibling relationships in foster care settings have received increased attention in recent years. Despite growing evidence regarding the protective potential of sibling relationships for youth in care, some sibling groups continue to experience foster care related separation, and few programs exist to address the needs of these youth. This study describes and evaluates Camp To Belong, a multi-site program designed to provide short-term reunification to separated sibling groups through a week-long summer camp experience. Using a pre-test post-test survey design, this paper examines changes in youth ratings of sibling conflict and sibling support across camps located in six geographically distinct regions of the United States. The effects of youth age, number of prior camp exposures, and camp location were tested using multilevel modeling procedures. Findings suggest that participation in Camp To Belong may reduce sibling conflict, and improvements in sibling support are noted for youth who have had prior exposure to the camp's programming. Camp-level variance in the sibling support outcome highlights the complex nature of relationships for siblings separated by foster care, and suggests the need for additional research. Lessons learned from this multi-site evaluation and future directions are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
The effects of learning on event-related potential correlates of musical expectancy.
Carrión, Ricardo E; Bly, Benjamin Martin
2008-09-01
Musical processing studies have shown that unexpected endings in familiar musical sequences produce extended latencies of the P300 component. The present study sought to identify event-related potential (ERP) correlates of musical expectancy by entraining participants with rule-governed chord sequences and testing whether unexpected endings created similar responses. Two experiments were conducted in which participants performed grammaticality classifications without training (Experiment 1) and with training (Experiment 2). In both experiments, deviant chords differing in instrumental timbre elicited an MMN/P3a waveform complex. Violations related to learned patterns elicited an early right anterior negativity and P3b. Latency and amplitude of peak components were modulated by the physical characteristics of the chords, expectations due to prior knowledge of musical harmony, and contextually defined expectations developed through entrainment.
What can music tell us about social interaction?
D'Ausilio, Alessandro; Novembre, Giacomo; Fadiga, Luciano; Keller, Peter E
2015-03-01
Humans are innately social creatures, but cognitive neuroscience, which has traditionally focused on individual brains, is only now beginning to investigate social cognition through realistic interpersonal interaction. Music provides an ideal domain for doing so because it offers a promising solution for balancing the trade-off between ecological validity and experimental control when testing cognitive and brain functions. Musical ensembles constitute a microcosm that provides a platform for parametrically modeling the complexity of human social interaction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Towards a Dynamic Model of Skills Involved in Sight Reading Music
ERIC Educational Resources Information Center
Kopiez, Reinhard; Lee, Ji In
2006-01-01
This study investigates the relationship between selected predictors of achievement in playing unrehearsed music (sight reading) and the changing complexity of sight reading tasks. The question under investigation is how different variables gain or lose significance as sight reading stimuli become more difficult. Fifty-two piano major graduates…
Music in Beginning Teacher Classrooms: A Mismatch between Policy, Philosophy, and Practice
ERIC Educational Resources Information Center
Webb, Linda
2016-01-01
This paper identifies a range of positions and perspectives that impacted on New Zealand beginning primary (elementary) generalist teachers' preparedness to teach music in relation to: government policy, curriculum and Graduating Teacher Standards requirements; and teacher educators' and school principals' expectations of them. The complex web of…
Assessing Complexity. Group Composing for a Secondary School Qualification
ERIC Educational Resources Information Center
Thorpe, Vicki
2017-01-01
This article examines a unique music curriculum and assessment environment through the findings of a practical action research project carried out in secondary schools. I address two current international educational issues: the relationship between formal and informal learning in music, and how individuals' contributions in collaborative groups…
Neupane, S; Virtanen, P; Leino-Arjas, P; Miranda, H; Siukola, A; Nygård, C-H
2013-03-01
We investigated the separate and joint effects of multi-site musculoskeletal pain and physical and psychosocial exposures at work on future work ability. A survey was conducted among employees of a Finnish food industry company in 2005 (n = 1201) and a follow-up survey in 2009 (n = 734). Information on self-assessed work ability (current work ability on a scale from 0 to 10; 7 = poor work ability), multi-site musculoskeletal pain (pain in at least two anatomical areas of four), leisure-time physical activity, body mass index and physical and psychosocial exposures was obtained by questionnaire. The separate and joint effects of multi-site pain and work exposures on work ability at follow-up, among subjects with good work ability at baseline, were assessed by logistic regression, and p-values for the interaction derived. Compared with subjects with neither multi-site pain nor adverse work exposure, multi-site pain at baseline increased the risk of poor work ability at follow-up, allowing for age, gender, occupational class, body mass index and leisure-time physical activity. The separate effects of the work exposures on work ability were somewhat smaller than those of multi-site pain. Multi-site pain had an interactive effect with work environment and awkward postures, such that no association of multi-site pain with poor work ability was seen when work environment was poor or awkward postures present. The decline in work ability connected with multi-site pain was not increased by exposure to adverse physical or psychosocial factors at work. © 2012 European Federation of International Association for the Study of Pain Chapters.
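To make the modelling strategy in the abstract above concrete, the snippet below fits logistic regressions with and without a pain-by-exposure interaction term. The variable names and the random toy data frame are hypothetical placeholders, not the study's cohort or covariate coding.

```python
# Sketch of separate and joint (interaction) effects in a logistic regression.
# Variables and data are toy placeholders; only the modelling pattern is shown.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "poor_work_ability": rng.integers(0, 2, n),   # outcome at follow-up
    "multisite_pain": rng.integers(0, 2, n),      # pain in >= 2 of 4 areas at baseline
    "awkward_postures": rng.integers(0, 2, n),    # physical exposure at work
    "age": rng.normal(45, 10, n),
})

# Main-effects model, then the joint model with an interaction term
main = smf.logit("poor_work_ability ~ multisite_pain + awkward_postures + age", df).fit(disp=0)
joint = smf.logit("poor_work_ability ~ multisite_pain * awkward_postures + age", df).fit(disp=0)
print(np.exp(joint.params))                               # odds ratios
print(joint.pvalues["multisite_pain:awkward_postures"])   # p-value for the interaction
```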
Influence of musical groove on postural sway.
Ross, Jessica M; Warlaumont, Anne S; Abney, Drew H; Rigoli, Lillian M; Balasubramaniam, Ramesh
2016-03-01
Timescales of postural fluctuation reflect underlying neuromuscular processes in balance control that are influenced by sensory information and the performance of concurrent cognitive and motor tasks. An open question is how postural fluctuations entrain to complex environmental rhythms, such as in music, which also vary on multiple timescales. Musical groove describes the property of music that encourages auditory-motor synchronization and is used to study voluntary motor entrainment to rhythmic sounds. The influence of groove on balance control mechanisms remains unexplored. We recorded fluctuations in center of pressure (CoP) of standing participants (N = 40) listening to low and high groove music and during quiet stance. We found an effect of musical groove on radial sway variability, with the least amount of variability in the high groove condition. In addition, we observed that groove influenced postural sway entrainment at various temporal scales. For example, with increasing levels of groove, we observed more entrainment to shorter, local timescale rhythmic musical occurrences. In contrast, we observed more entrainment to longer, global timescale features of the music, such as periodicity, with decreasing levels of groove. Finally, musical experience influenced the amount of postural variability and entrainment at local and global timescales. We conclude that groove in music and musical experience can influence the neural mechanisms that govern balance control, and discuss implications of our findings in terms of multiscale sensorimotor coupling. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
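The radial sway variability measure mentioned above can be computed as the variability of each centre-of-pressure (CoP) sample's distance from the mean CoP position. The following sketch uses synthetic CoP data; the sampling rate, trial length, and units are assumptions, not those of the study.

```python
# Minimal sketch of radial sway variability from CoP coordinates.
import numpy as np

def radial_sway_variability(cop_x, cop_y):
    """Standard deviation of radial CoP displacement about the mean position."""
    radial = np.hypot(cop_x - cop_x.mean(), cop_y - cop_y.mean())
    return radial.std()

rng = np.random.default_rng(2)
t = np.arange(0, 60, 1 / 100)                    # 60 s of synthetic CoP data at 100 Hz
cop_x = np.cumsum(rng.standard_normal(t.size)) * 0.01
cop_y = np.cumsum(rng.standard_normal(t.size)) * 0.01
print(f"radial sway variability: {radial_sway_variability(cop_x, cop_y):.3f} (arbitrary units)")
```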
Bidelman, Gavin M.; Hutka, Stefanie; Moreno, Sylvain
2013-01-01
Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language. PMID:23565267
PS2-06: Best Practices for Advancing Multi-site Chart Abstraction Research
Blick, Noelle; Cole, Deanna; King, Colleen; Riordan, Rick; Von Worley, Ann; Yarbro, Patty
2012-01-01
Background/Aims: Multi-site chart abstraction studies are becoming increasingly common within the HMORN. Differences in systems among HMORN sites can pose significant obstacles to the success of these studies. It is therefore crucial to standardize abstraction activities by following best practices for multi-site chart abstraction, as consistency of processes across sites will increase efficiencies and enhance data quality. Methods: Over the past few months the authors have been meeting to identify obstacles to multi-site chart abstraction and to address ways in which multi-site chart abstraction processes can be systemized and standardized. The aim of this workgroup is to create a best practice guide for multi-site chart abstraction studies. Focus areas include: abstractor training, format for chart abstraction (database, paper, etc.), data quality, redaction, mechanism for transferring data, site specific access to medical records, IRB/HIPAA concerns, and budgetary issues. Results: The results of the workgroup’s efforts (the best practice guide) will be presented by a panel of experts at the 2012 HMORN conference. The presentation format will also focus on discussion among attendees to elicit further input and to identify areas that need to be further addressed. Subsequently, the best practice guide will be posted on the HMORN website. Discussion: The best practice guide for multi-site chart abstraction studies will establish sound guidelines and serve as an aid to researchers embarking on multi-site chart abstraction studies. Efficiencies and data quality will be further enhanced with standardized multi-site chart abstraction practices.
Musical Preferences are Linked to Cognitive Styles.
Greenberg, David M; Baron-Cohen, Simon; Stillwell, David J; Kosinski, Michal; Rentfrow, Peter J
2015-01-01
Why do we like the music we do? Research has shown that musical preferences and personality are linked, yet little is known about other influences on preferences such as cognitive styles. To address this gap, we investigated how individual differences in musical preferences are explained by the empathizing-systemizing (E-S) theory. Study 1 examined the links between empathy and musical preferences across four samples. By reporting their preferential reactions to musical stimuli, samples 1 and 2 (Ns = 2,178 and 891) indicated their preferences for music from 26 different genres, and samples 3 and 4 (Ns = 747 and 320) indicated their preferences for music from only a single genre (rock or jazz). Results across samples showed that empathy levels are linked to preferences even within genres and account for significant proportions of variance in preferences over and above personality traits for various music-preference dimensions. Study 2 (N = 353) replicated and extended these findings by investigating how musical preferences are differentiated by E-S cognitive styles (i.e., 'brain types'). Those who are type E (bias towards empathizing) preferred music on the Mellow dimension (R&B/soul, adult contemporary, soft rock genres) compared to type S (bias towards systemizing) who preferred music on the Intense dimension (punk, heavy metal, and hard rock). Analyses of fine-grained psychological and sonic attributes in the music revealed that type E individuals preferred music that featured low arousal (gentle, warm, and sensual attributes), negative valence (depressing and sad), and emotional depth (poetic, relaxing, and thoughtful), while type S preferred music that featured high arousal (strong, tense, and thrilling), and aspects of positive valence (animated) and cerebral depth (complexity). The application of these findings for clinicians, interventions, and those on the autism spectrum (largely type S or extreme type S) are discussed.
Balteş, Felicia Rodica; Avram, Julia; Miclea, Mircea; Miu, Andrei C
2011-06-01
Operatic music involves both singing and acting (as well as rich audiovisual background arising from the orchestra and elaborate scenery and costumes) that multiply the mechanisms by which emotions are induced in listeners. The present study investigated the effects of music, plot, and acting performance on emotions induced by opera. There were three experimental conditions: (1) participants listened to a musically complex and dramatically coherent excerpt from Tosca; (2) they read a summary of the plot and listened to the same musical excerpt again; and (3) they re-listened to music while they watched the subtitled film of this acting performance. In addition, a control condition was included, in which an independent sample of participants successively listened three times to the same musical excerpt. We measured subjective changes using both dimensional and specific music-induced emotion questionnaires. Cardiovascular, electrodermal, and respiratory responses were also recorded, and the participants kept track of their musical chills. Music listening alone elicited positive emotion and autonomic arousal, seen in faster heart rate, but slower respiration rate and reduced skin conductance. Knowing the (sad) plot while listening to the music a second time reduced positive emotions (peacefulness, joyful activation), and increased negative ones (sadness), while high autonomic arousal was maintained. Watching the acting performance increased emotional arousal and changed its valence again (from less positive/sad to transcendent), in the context of continued high autonomic arousal. The repeated exposure to music did not by itself induce this pattern of modifications. These results indicate that the multiple musical and dramatic means involved in operatic performance specifically contribute to the genesis of music-induced emotions and their physiological correlates. Copyright © 2011 Elsevier Inc. All rights reserved.
McCaffrey, Tríona; Edwards, Jane
2016-01-01
Mental health service development internationally is increasingly informed by the collaborative ethos of recovery. Service user evaluation of experiences within music therapy programs allows new phenomena about participation in services to be revealed that might otherwise remain unnoticed. The aim of this study was to demonstrate how asking service users about their experience of music therapy can generate useful information, and to reflect upon the feedback elicited from such processes in order to gain a deeper understanding of how music therapy is received among service users in mental health. Six mental health service users described their experiences of music therapy in one or two individual interviews. Transcripts of interviews were analyzed using the procedures and techniques of Interpretative Phenomenological Analysis. Interviews with mental health service users provided rich, in-depth accounts reflecting the complex nature of music therapy participation. Super-ordinate themes refer to the context in which music therapy was offered, the rich sound world of music in music therapy, the humanity of music therapy, and the strengths enhancing opportunities experienced by service users. Participants indicated that they each experienced music therapy in unique ways. Opinions about the value of music therapy were revealed through an interview process in which the researcher holds an open attitude, welcoming all narrative contributions respectfully. These findings can remind practitioners of the importance of closely tuning into the perspectives and understandings of those who have valuable expertise to share about their experience of music therapy services in mental health. © the American Music Therapy Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Robb, Sheri L.; Burns, Debra S.; Stegenga, Kristin A.; Haut, Paul R.; Monahan, Patrick O.; Meza, Jane; Stump, Timothy E.; Cherven, Brooke O.; Docherty, Sharron L.; Hendricks-Ferguson, Verna L.; Kintner, Eileen K.; Haight, Ann E.; Wall, Donna A.; Haase, Joan E.
2013-01-01
Background: To reduce the risk of adjustment problems associated with Hematopoietic Stem Cell Transplant (HSCT) for adolescents/young adults (AYA), we examined efficacy of a therapeutic music video (TMV) intervention delivered during the acute phase of HSCT to: (a) increase protective factors of spiritual perspective, social integration, family environment, courageous coping, and hope-derived meaning; (b) decrease risk factors of illness-related distress and defensive coping; and (c) increase outcomes of self-transcendence and resilience. Methods: A multi-site, randomized controlled trial (COG-ANUR0631) conducted at 8 Children’s Oncology Group sites involving 113 AYA aged 11–24 years undergoing myeloablative HSCT. Participants, randomized to the TMV or low-dose control (audiobooks) group, completed 6 sessions over 3 weeks with a board-certified music therapist. Variables were based on Haase’s Resilience in Illness Model. Participants completed measures related to latent variables of illness-related distress, social integration, spiritual perspective, family environment, coping, hope-derived meaning and resilience at baseline (T1), post-intervention (T2), and 100 days post-transplant (T3). Results: At T2, the TMV group reported significantly better courageous coping (ES=0.505; P=0.030). At T3, the TMV group reported significantly better social integration (ES=0.543; P=.028) and family environment (ES=0.663; P=0.008), as well as moderate non-significant effect sizes for spiritual perspective (ES=0.450; P=0.071) and self-transcendence (ES=0.424; P=0.088). Conclusion: The TMV intervention improves positive health outcomes of courageous coping, social integration, and family environment during high-risk cancer treatment. We recommend the TMV be examined in a broader population of AYA with high-risk cancers. PMID:24469862
Inter-subject synchronization of brain responses during natural music listening
Abrams, Daniel A.; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J.; Menon, Vinod
2015-01-01
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. PMID:23578016
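The inter-subject synchronization analysis described above can be approximated, in its simplest form, by a leave-one-out inter-subject correlation: each listener's regional time series is correlated with the average of the remaining listeners. The sketch below uses synthetic data and illustrative array shapes; it is not the study's fMRI pipeline.

```python
# Rough sketch of leave-one-out inter-subject correlation for one brain region.
import numpy as np

def intersubject_correlation(timeseries):
    """timeseries: array of shape (n_subjects, n_timepoints) for one region."""
    n_subjects = timeseries.shape[0]
    isc = []
    for s in range(n_subjects):
        others = np.delete(timeseries, s, axis=0).mean(axis=0)
        isc.append(np.corrcoef(timeseries[s], others)[0, 1])
    return np.array(isc)

rng = np.random.default_rng(3)
shared = rng.standard_normal(300)                        # stimulus-driven component
data = shared + 0.8 * rng.standard_normal((10, 300))     # 10 subjects, 300 volumes
print(f"mean ISC: {intersubject_correlation(data).mean():.2f}")
```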
Know thy sound: perceiving self and others in musical contexts.
Sevdalis, Vassilis; Keller, Peter E
2014-10-01
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory-motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual-motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice. Copyright © 2014 Elsevier B.V. All rights reserved.
CliniProteus: A flexible clinical trials information management system
Mathura, Venkatarajan S; Rangareddy, Mahendiranath; Gupta, Pankaj; Mullan, Michael
2007-01-01
Clinical trials involve multi-site heterogeneous data generation with complex data input formats and forms. The data should be captured and queried in an integrated fashion to facilitate further analysis. Electronic case-report forms (eCRF) are gaining popularity since they allow capture of clinical information in a rapid manner. We have designed and developed an XML-based flexible clinical trials data management framework in the .NET environment that can be used for efficient design and deployment of eCRFs to collate data and analyze information from multi-site clinical trials. The main components of our system include an XML form designer, a Patient registration eForm, reusable eForms, multiple-visit data capture and consolidated reports. A unique id is used for tracking the trial, site of occurrence, the patient and the year of recruitment. Availability: http://www.rfdn.org/bioinfo/CTMS/ctms.html. PMID:21670796
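To illustrate the general idea of driving form generation from an XML description, the sketch below defines a tiny, entirely hypothetical eCRF in XML and reads it with Python's standard library. It is not the CliniProteus schema and none of the element or attribute names are taken from that system.

```python
# Illustrative only: a hypothetical eCRF definition in XML and code to read it.
import xml.etree.ElementTree as ET

ECRF_XML = """
<eform name="baseline_visit">
  <field id="patient_id" type="string" required="true"/>
  <field id="visit_date" type="date" required="true"/>
  <field id="systolic_bp" type="integer" unit="mmHg"/>
</eform>
"""

form = ET.fromstring(ECRF_XML)
print(f"Form: {form.get('name')}")
for field in form.findall("field"):
    required = field.get("required", "false") == "true"
    print(f"  {field.get('id'):>12}  type={field.get('type'):<8} required={required}")
```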
Incidental Learning of Melodic Structure of North Indian Music.
Rohrmeier, Martin; Widdess, Richard
2017-07-01
Musical knowledge is largely implicit. It is acquired without awareness of its complex rules, through interaction with a large number of samples during musical enculturation. Whereas several studies explored implicit learning of mostly abstract and less ecologically valid features of Western music, very little work has been done with respect to ecologically valid stimuli as well as non-Western music. The present study investigated implicit learning of modal melodic features in North Indian classical music in a realistic and ecologically valid way. It employed a cross-grammar design, using melodic materials from two modes (rāgas) that use the same scale. Findings indicated that Western participants unfamiliar with Indian music incidentally learned to identify distinctive features of each mode. Confidence ratings suggest that participants' performance was consistently correlated with confidence, indicating that they became aware of whether they were right in their responses; that is, they possessed explicit judgment knowledge. Altogether, our findings show incidental learning in a realistic, ecologically valid context during only a very short exposure, and they provide evidence that incidental learning constitutes a powerful mechanism that plays a fundamental role in musical acquisition. Copyright © 2016 Cognitive Science Society, Inc.
Meng, Bo; Zhu, Shujia; Li, Shijia; Zeng, Qingwen; Mei, Bing
2009-08-28
Previous research has shown that music can improve learning and memory in many species, including humans. Although some genes have been identified as contributing to the mechanisms, the effect of music is believed to be manifold and to involve a complex regulatory network. To further understand the mechanisms, we exposed mice to classical music for one month. The subsequent behavioral experiments showed improvement of spatial learning capability and elevation of fear-motivated memory in the music-exposed mice as compared to naïve mice. Meanwhile, we applied microarrays to compare the gene expression profiles of the hippocampus and cortex between the music-exposed and naïve mice. The results showed that approximately 454 genes in cortex (200 genes up-regulated and 254 genes down-regulated) and 437 genes in hippocampus (256 genes up-regulated and 181 genes down-regulated) were significantly affected in music-exposed mice; these genes were mainly involved in ion channel activity and/or synaptic transmission, the cytoskeleton, development, transcription, and hormone activity. Our work may provide some hints for better understanding the effects of music on learning and memory.
Risk ON/Risk OFF: Risk-Taking Varies with Subjectively Preferred and Disliked Music.
Halko, Marja-Liisa; Kaustia, Markku
2015-01-01
In this paper we conduct a within-subjects experiment in which teenagers go over 256 gambles with real money gains and losses. For each risky gamble they choose whether to participate in it, or pass. Prior to this main experiment subjects identify specific songs belonging to their favorite musical genre, as well as songs representing a style they dislike. In the main experiment we vary the music playing in the background, so that each subject hears some of their favorite music, and some disliked music, alternating in blocks of 16 gambles. We find that favorite music increases risk-taking ('risk on'), and disliked music suppresses risk-taking ('risk off'), compared to a baseline of no music. Literature in psychology proposes several mechanisms by which mood affects risk-taking, but none of them fully explain the results in our setting. The results are, however, consistent with the economics notion of preference complementarity, extended to the domain of risk preference. The preference structure implied by our results is more complex than previously thought, yet realistic, and consistent with recent theoretical models. More generally, this mechanism offers a potential explanation to why risk-taking is known to change over time and across contexts.
The complexity of classical music networks
NASA Astrophysics Data System (ADS)
Rolla, Vitor; Kestenberg, Juliano; Velho, Luiz
2018-02-01
Previous works suggest that musical networks often present the scale-free and the small-world properties. From a musician's perspective, the most important aspect missing in those studies was harmony. In addition to that, the previous works made use of outdated statistical methods. Traditionally, least-squares linear regression is utilised to fit a power law to a given data set. However, according to Clauset et al. such a traditional method can produce inaccurate estimates for the power law exponent. In this paper, we present an analysis of musical networks which considers the existence of chords (an essential element of harmony). Here we show that only 52.5% of music in our database presents the scale-free property, while 62.5% of those pieces present the small-world property. Previous works argue that music is highly scale-free; consequently, it sounds appealing and coherent. In contrast, our results show that not all pieces of music present the scale-free and the small-world properties. In summary, this research is focused on the relationship between musical notes (Do, Re, Mi, Fa, Sol, La, Si, and their sharps) and accompaniment in classical music compositions. More information about this research project is available at https://eden.dei.uc.pt/~vitorgr/MS.html.
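The statistical point raised above, that a least-squares fit on a log-log plot is a poor estimator of a power-law exponent, can be illustrated with the maximum-likelihood estimator recommended by Clauset et al. for continuous data. The sketch below uses synthetic samples; the exponent, cutoff, and sample size are arbitrary assumptions for demonstration.

```python
# Continuous power-law exponent via the Clauset et al. MLE, on synthetic data.
import numpy as np

def powerlaw_mle_alpha(x, x_min):
    """MLE for a continuous power law: alpha = 1 + n / sum(ln(x_i / x_min))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))

rng = np.random.default_rng(4)
# Draw samples from a power law with exponent 2.5 via inverse-transform sampling
alpha_true, x_min = 2.5, 1.0
samples = x_min * (1.0 - rng.random(10_000)) ** (-1.0 / (alpha_true - 1.0))
print(f"MLE estimate of alpha: {powerlaw_mle_alpha(samples, x_min):.2f}")  # close to 2.5
```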
Experience-induced Malleability in Neural Encoding of Pitch, Timbre, and Timing
Kraus, Nina; Skoe, Erika; Parbery-Clark, Alexandra; Ashley, Richard
2009-01-01
Speech and music are highly complex signals that have many shared acoustic features. Pitch, Timbre, and Timing can be used as overarching perceptual categories for describing these shared properties. The acoustic cues contributing to these percepts also have distinct subcortical representations which can be selectively enhanced or degraded in different populations. Musically trained subjects are found to have enhanced subcortical representations of pitch, timbre, and timing. The effects of musical experience on subcortical auditory processing are pervasive and extend beyond music to the domains of language and emotion. The sensory malleability of the neural encoding of pitch, timbre, and timing can be affected by lifelong experience and short-term training. This conceptual framework and supporting data can be applied to consider sensory learning of speech and music through a hearing aid or cochlear implant. PMID:19673837
Johnson, Julene K; Chow, Maggie L
2016-01-01
Music is a complex acoustic signal that relies on a number of different brain and cognitive processes to create the sensation of hearing. Changes in hearing function are generally not a major focus of concern for persons with a majority of neurodegenerative diseases associated with dementia, such as Alzheimer disease (AD). However, changes in the processing of sounds may be an early, and possibly preclinical, feature of AD and other neurodegenerative diseases. The aim of this chapter is to review the current state of knowledge concerning hearing and music perception in persons who have a dementia as a result of a neurodegenerative disease. The review focuses on both peripheral and central auditory processing in common neurodegenerative diseases, with a particular focus on the processing of music and other non-verbal sounds. The chapter also reviews music interventions used for persons with neurodegenerative diseases. PMID:25726296
Characterizing chaotic melodies in automatic music composition
NASA Astrophysics Data System (ADS)
Coca, Andrés E.; Tost, Gerard O.; Zhao, Liang
2010-09-01
In this paper, we initially present an algorithm for automatic composition of melodies using chaotic dynamical systems. Afterward, we characterize chaotic music in a comprehensive way as comprising three perspectives: musical discrimination, dynamical influence on musical features, and musical perception. With respect to the first perspective, the coherence between generated chaotic melodies (continuous as well as discrete chaotic melodies) and a set of classical reference melodies is characterized by statistical descriptors and melodic measures. The significant differences among the three types of melodies are determined by discriminant analysis. Regarding the second perspective, the influence of dynamical features of chaotic attractors, e.g., Lyapunov exponent, Hurst coefficient, and correlation dimension, on melodic features is determined by canonical correlation analysis. The last perspective is related to perception of originality, complexity, and degree of melodiousness (Euler's gradus suavitatis) of chaotic and classical melodies by nonparametric statistical tests.
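As a minimal, hypothetical illustration of composing a melody from a chaotic dynamical system, the sketch below iterates the logistic map in its chaotic regime and quantises each value onto a C-major scale degree. The paper's actual algorithm, melodic measures, and attractors are far more elaborate; only the basic mapping step is shown, and the map parameters and scale are arbitrary choices.

```python
# Toy example: melody notes from logistic-map iterates quantised onto C major.
import numpy as np

def logistic_map_melody(length=16, r=3.99, x0=0.4):
    scale = ["C", "D", "E", "F", "G", "A", "B"]   # one octave of C major
    x, melody = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)                     # chaotic iterate in (0, 1)
        melody.append(scale[int(x * len(scale)) % len(scale)])
    return melody

print(" ".join(logistic_map_melody()))
```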
ERIC Educational Resources Information Center
Hannon, Erin E.; Soley, Gaye; Ullal, Sangeeta
2012-01-01
Despite the ubiquity of dancing and synchronized movement to music, relatively few studies have examined cognitive representations of musical rhythm and meter among listeners from contrasting cultures. We aimed to disentangle the contributions of culture-general and culture-specific influences by examining American and Turkish listeners' detection…
Ethics or Choosing Complexity in Music Relations
ERIC Educational Resources Information Center
Schmidt, Patrick
2012-01-01
The hardship and pleasure of a life in ethics, as in music, springs not from a commitment to the veneration of stability, refinement and consistency, as some political and aesthetic discourses often suggest. Rather, the productive tensions of ethical living arise from a restless interaction between constant motion and adaptability; both marks of…
Values, Music and Education in China
ERIC Educational Resources Information Center
Ho, Wai-Chung; Law, Wing-Wah
2004-01-01
This article examines the complexity of the education of values in the People's Republic of China (PRC) since the beginning of the Cultural Revolution (1966-1976). It attempts to provide an insight into how the central state has managed the values of music education with respect to the dynamic changes to its political ideology across these four…
ERIC Educational Resources Information Center
Thompson, Douglas E.
2013-01-01
In today's complex music software packages, many features can remain unexplored and unused. Software plug-ins--available in most every music software package, yet easily overlooked in the software's basic operations--are one such feature. In this article, I introduce readers to plug-ins and offer tips for purchasing plug-ins I have…
ERIC Educational Resources Information Center
McPhail, Graham J.
2016-01-01
In 2002 Parlo Singh outlined Bernstein's theory of the pedagogic device, elaborating the potential in Bernstein's complex theoretical framework for empirical research. In particular, Singh suggests that Bernstein's concepts provide the means of making explicit the macro and micro structuring of knowledge into pedagogic communication. More…
Long-Term Musical Group Interaction Has a Positive Influence on Empathy in Children
ERIC Educational Resources Information Center
Rabinowitch, Tal-Chen; Cross, Ian; Burnard, Pamela
2013-01-01
Musical group interaction (MGI) is a complex social setting requiring certain cognitive skills that may also elicit shared psychological states. We argue that many MGI-specific features may also be important for emotional empathy, the ability to experience another person's emotional state. We thus hypothesized that long-term repeated participation…
Wavelets in music analysis and synthesis: timbre analysis and perspectives
NASA Astrophysics Data System (ADS)
Alves Faria, Regis R.; Ruschioni, Ruggero A.; Zuffo, Joao A.
1996-10-01
Music is a vital element in the process of comprehending the world in which we live and interact. Frequently it exerts a subtle but expressive influence over a society's evolution line. Analysis and synthesis of music and musical instruments have always been associated with the forefront technologies available at each period of human history, and there is no surprise in witnessing now the use of digital technologies and sophisticated mathematical tools supporting its development. Fourier techniques have been employed for years as a tool to analyze timbres' spectral characteristics, and re-synthesize them from these extracted parameters. Recently many modern implementations, based on spectral modeling techniques, have been leading to the development of new generations of music synthesizers, capable of reproducing natural sounds with high fidelity, and producing novel timbres as well. Wavelets are a promising tool for the development of new generations of music synthesizers, given their advantages over Fourier techniques in representing non-periodic and transient signals with complex fine textures, as found in music. In this paper we propose and introduce the use of wavelets, addressing their perspectives for musical applications. The central idea is to investigate the capacities of wavelets in analyzing, extracting features and altering fine timbre components in a multiresolution time-scale, so as to produce high-quality synthesized musical sounds.
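A hedged sketch of the multiresolution idea discussed above: a wavelet decomposition of a short synthetic tone with a transient attack, using the PyWavelets library. The signal, sampling rate, and wavelet choice ('db4') are illustrative assumptions, not the paper's analysis setup.

```python
# Multiresolution wavelet decomposition of a synthetic tone with a sharp onset.
import numpy as np
import pywt

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t) * np.exp(-4 * t)   # decaying 440 Hz tone
tone[:40] += np.hanning(40)                           # transient at the onset

coeffs = pywt.wavedec(tone, "db4", level=5)           # approximation + 5 detail bands
for i, c in enumerate(coeffs):
    label = "approx" if i == 0 else f"detail {i}"
    print(f"{label:>8}: {c.size:5d} coefficients, energy {np.sum(c**2):.2f}")
```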
Lin, Yuan-Pin; Yang, Yi-Hsuan; Jung, Tzyy-Ping
2014-01-01
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention nowadays due to its promise of potential applications such as musical affective brain-computer interface (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements. Using solely EEG signals to distinguish emotions remained challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in the emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74-76% using solely the EEG modality was statistically comparable to that using the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61-67% in valence classification and from around 58-67% in arousal classification. The musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to the arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of the brain activity and musical contents in emotion modeling.
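A schematic sketch of the multimodal comparison described above: classify valence from EEG features alone versus EEG plus acoustic features. All data below are random placeholders, the feature sets and classifier are generic assumptions, and the study's actual feature extraction is not reproduced.

```python
# Compare EEG-only vs EEG+audio feature sets for binary valence classification.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_trials = 200
eeg_features = rng.standard_normal((n_trials, 60))     # e.g. band power per channel
audio_features = rng.standard_normal((n_trials, 12))   # e.g. loudness, timbre descriptors
valence = rng.integers(0, 2, n_trials)                 # binary valence labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc_eeg = cross_val_score(clf, eeg_features, valence, cv=5).mean()
acc_multi = cross_val_score(clf, np.hstack([eeg_features, audio_features]), valence, cv=5).mean()
print(f"EEG only: {acc_eeg:.2f}   EEG + audio: {acc_multi:.2f}")
```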
NASA Astrophysics Data System (ADS)
Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming
2016-12-01
A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function with inherent spatial aliasing is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of the standard MUSIC algorithm.
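To make the quantity being accelerated concrete, the sketch below implements the standard MUSIC spectral search for a uniform linear array, which is the full-grid search that the proposed method compresses. The paper's reduced-search variant (Kronecker factorisation, mirror angles) is not implemented here, and the element spacing, SNR, and source angles are arbitrary choices.

```python
# Standard MUSIC pseudo-spectrum over a full angular grid for a ULA.
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """X: snapshots (n_elements, n_snapshots); d: element spacing in wavelengths."""
    n_elements = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)             # ascending eigenvalues
    En = eigvecs[:, : n_elements - n_sources]        # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n_elements) * np.sin(theta))
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)

rng = np.random.default_rng(6)
n_el, n_snap, true_doas = 8, 200, np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(n_el), np.sin(true_doas)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
X = A @ S + 0.1 * (rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap)))

grid = np.arange(-90, 90.5, 0.5)
P = music_spectrum(X, n_sources=2, angles_deg=grid)
# Pick the two largest, well-separated peaks of the pseudo-spectrum
order, peaks = np.argsort(P)[::-1], []
for idx in order:
    if all(abs(grid[idx] - p) > 5 for p in peaks):
        peaks.append(grid[idx])
    if len(peaks) == 2:
        break
print("estimated DOAs (deg):", sorted(peaks))
```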
Trends in musical theatre voice: an analysis of audition requirements for singers.
Green, Kathryn; Freeman, Warren; Edwards, Matthew; Meyer, David
2014-05-01
The American musical theatre industry is a multibillion dollar business in which the requirements for singers are varied and complex. This study identifies the musical genres and voice requirements that are currently most requested at professional auditions to help voice teachers, pedagogues, and physicians who work with musical theatre singers understand the demands of their clients' business. Frequency count. One thousand two hundred thirty-eight professional musical theatre audition listings were gathered over a 6-month period, and information from each listing was categorized and entered into a spreadsheet for analysis. The results indicate that four main genres of music were requested over a wide variety of styles, with more than half of auditions requesting genre categories that may not be served by traditional or classical voice technique alone. To adequately prepare young musical theatre performers for the current job market and to keep performers healthy while making the sounds required by the industry, new singing styles may need to be studied and integrated into voice training that only teaches classical styles. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Lejaren A. Hiller, Jr.: A Memorial Tribute to a Chemist-Composer
NASA Astrophysics Data System (ADS)
Wamser, Christian A.; Wamser, Carl C.
1996-07-01
Lejaren Hiller (1924-1994) was trained in chemistry but maintained a lifelong love of music. Like Alexander Borodin, the Russian chemist-composer, he pursued both chemistry and music, but he eventually dedicated his career solely to music. His early work on the chemistry of polymers with Fred Wall at the University of Illinois introduced him to the Illiac computer, with which he did Monte Carlo calculations of polymer conformations. He promptly collaborated with Leonard Isaacson, a graduate student also associated with the Wall group, to teach the Illiac to compose music. Using a modified Monte Carlo technique to select the notes and other aspects of the music, they applied increasingly complex rules to define what constituted acceptable music. The result was their String Quartet #4, produced in 1957, often called the Illiac Suite. It is generally acknowledged as the first piece of music composed by a computer. Hiller remained a pioneer in the field of computer composition during his distinguished career at the University of Illinois and the State University of New York at Buffalo. This paper traces Hiller's careers in chemistry and music and examines the connections between the two.
Grewe, Oliver; Nagel, Frederik; Kopiez, Reinhard; Altenmüller, Eckart
2007-11-01
Most people are able to identify basic emotions expressed in music and experience affective reactions to music. But does music generally induce emotion? Does it elicit subjective feelings, physiological arousal, and motor reactions reliably in different individuals? In this interdisciplinary study, measurements of skin conductance and facial muscle activity, together with self-monitoring, were synchronized with musical stimuli. A group of 38 participants listened to classical, rock, and pop music and reported their feelings in a two-dimensional emotion space during listening. The first entrance of a solo voice or choir and the beginning of new sections were found to elicit interindividual changes in subjective feelings and physiological arousal. Quincy Jones' "Bossa Nova" motivated movement and laughing in more than half of the participants. Bodily reactions such as "goose bumps" and "shivers" could be stimulated by the "Tuba Mirum" from Mozart's Requiem in 7 of 38 participants. In addition, the authors repeated the experiment seven times with one participant to examine intraindividual stability of effects. This exploratory combination of approaches throws a new light on the astonishing complexity of affective music listening.
Musical aptitude is associated with AVPR1A-haplotypes.
Ukkola, Liisa T; Onkamo, Päivi; Raijas, Pirre; Karma, Kai; Järvelä, Irma
2009-05-20
Artistic creativity forms the basis of music culture and music industry. Composing, improvising and arranging music are complex creative functions of the human brain, whose biological value remains unknown. We hypothesized that practicing music is social communication that needs musical aptitude and even creativity in music. In order to understand the neurobiological basis of music in human evolution and communication we analyzed polymorphisms of the arginine vasopressin receptor 1A (AVPR1A), serotonin transporter (SLC6A4), catechol-O-methyltransferase (COMT), dopamine receptor D2 (DRD2) and tyrosine hydroxylase 1 (TPH1), genes associated with social bonding and cognitive functions, in 19 Finnish families (n = 343 members) with professional musicians and/or active amateurs. All family members were tested for musical aptitude using the auditory structuring ability test (Karma Music test; KMT) and Carl Seashore's tests for pitch (SP) and for time (ST). Data on creativity in music (composing, improvising and/or arranging music) was surveyed using a web-based questionnaire. Here we show for the first time that creative functions in music have a strong genetic component (h(2) = .84; composing h(2) = .40; arranging h(2) = .46; improvising h(2) = .62) in Finnish multigenerational families. We also show that high music test scores are significantly associated with creative functions in music (p<.0001). We discovered an overall haplotype association with the AVPR1A gene (markers RS1 and RS3) and KMT (p = 0.0008; corrected p = 0.00002), SP (p = 0.0261; corrected p = 0.0072) and combined music test scores (COMB) (p = 0.0056; corrected p = 0.0006). The AVPR1A haplotype AVR+RS1 further suggested a positive association with ST (p = 0.0038; corrected p = 0.00184) and COMB (p = 0.0083; corrected p = 0.0040) using the haplotype-based association test HBAT. The results suggest that the neurobiology of music perception and production is likely to be related to the pathways affecting intrinsic attachment behavior.
Ukkola-Vuoti, Liisa; Kanduri, Chakravarthi; Oikkonen, Jaana; Buck, Gemma; Blancher, Christine; Raijas, Pirre; Karma, Kai; Lähdesmäki, Harri; Järvelä, Irma
2013-01-01
Music perception and practice represent complex cognitive functions of the human brain. Recently, evidence for the molecular genetic background of music related phenotypes has been obtained. In order to further elucidate the molecular background of musical phenotypes we analyzed genome wide copy number variations (CNVs) in five extended pedigrees and in 172 unrelated subjects characterized for musical aptitude and creative functions in music. Musical aptitude was defined by combination of the scores of three music tests (COMB scores): auditory structuring ability, Seashore's tests for pitch and for time. Data on creativity in music (herein composing, improvising and/or arranging music) was surveyed using a web-based questionnaire. Several CNVRs containing genes that affect neurodevelopment, learning and memory were detected. A deletion at 5q31.1 covering the protocadherin-α gene cluster (Pcdha 1-9) was found co-segregating with low music test scores (COMB) in both sample sets. Pcdha is involved in neural migration, differentiation and synaptogenesis. Creativity in music was found to co-segregate with a duplication covering the glucose mutarotase gene (GALM) at 2p22. GALM influences serotonin release and membrane trafficking of the human serotonin transporter. Interestingly, genes related to serotonergic systems have been shown to associate not only with psychiatric disorders but also with creativity and music perception. Both Pcdha and GALM are related to the serotonergic systems influencing cognitive and motor functions, important for music perception and practice. Finally, a 1.3 Mb duplication was identified in a subject with low COMB scores in the region previously linked with absolute pitch (AP) at 8q24. No differences in the CNV burden were detected among the high/low music test score or creative/non-creative groups. In summary, CNVs and genes found in this study are related to cognitive functions. Our results suggest new candidate genes for music perception related traits and support the previous results from the AP study.
Dynamic musical communication of core affect
Flaig, Nicole K.; Large, Edward W.
2013-01-01
Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified “scene” that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience. PMID:24672492
Music and its association with epileptic disorders.
Maguire, Melissa
2015-01-01
The association between music and epileptic seizures is complex and intriguing. Musical processing within the human brain recruits a network that involves many cortical areas that could activate as part of a temporal lobe seizure or become hyperexcitable on musical exposure, as in the case of musicogenic epilepsy. The dichotomous effect of music on seizures may be explained by modification of dopaminergic circuitry or counteractive cognitive and sensory input in ictogenesis. Research has explored the utility of music as a therapy in epilepsy and, while limited studies show some evidence of an effect on seizure activity, further work is required to ascertain its clinical potential. Sodium channel-blocking antiepileptic drugs, e.g., carbamazepine and oxcarbazepine, appear to affect pitch perception, particularly in native-born Japanese; this is a rare but important adverse effect, particularly for professional musicians. Temporal lobe surgery for right-lateralizing epilepsy has the capacity to affect all facets of musical processing, although the risk and its correlation to resection area need further research. There is a need for the development of investigative tools of musical processing that could be utilized along the surgical pathway. Similarly, work is also required in devising a musical paradigm as part of electroencephalography to improve surveillance of musicogenic seizures. These clinical applications could aid the management of epilepsy and preservation of musical ability. © 2015 Elsevier B.V. All rights reserved.
An Emerging Theoretical Model of Music Therapy Student Development.
Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E
2017-07-01
Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
On multi-site damage identification using single-site training data
NASA Astrophysics Data System (ADS)
Barthorpe, R. J.; Manson, G.; Worden, K.
2017-11-01
This paper proposes a methodology for developing multi-site damage location systems for engineering structures that can be trained using single-site damaged state data only. The methodology involves training a sequence of binary classifiers based upon single-site damage data and combining the developed classifiers into a robust multi-class damage locator. In this way, the multi-site damage identification problem may be decomposed into a sequence of binary decisions. In this paper Support Vector Classifiers are adopted as the means of making these binary decisions. The proposed methodology represents an advance over state-of-the-art approaches to multi-site damage identification, which require either (1) full damaged-state data from single- and multi-site damage cases or (2) the development of a physics-based model to make multi-site predictions. The potential benefit of the proposed methodology is that a significantly reduced number of recorded damage states may be required in order to train a multi-site damage locator without recourse to physics-based model predictions. In this paper it is first demonstrated that Support Vector Classification represents an appropriate approach to the multi-site damage location problem, with methods for combining binary classifiers discussed. Next, the proposed methodology is demonstrated and evaluated through application to a real engineering structure - a Piper Tomahawk trainer aircraft wing - with its performance compared to classifiers trained using the full damaged-state dataset.
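As an illustration of the general idea described in this abstract (one binary classifier per damage site, combined one-vs-rest into a multi-site locator), here is a minimal Python sketch; the feature arrays, site means and decision threshold are hypothetical and not taken from the paper.

```python
# Sketch: combine per-site binary SVMs into a multi-site damage locator.
# Assumes hypothetical arrays X_healthy and X_site[i] of feature vectors
# (e.g., transmissibility features) recorded for the undamaged structure
# and for damage at site i alone.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_feat = 12
X_healthy = rng.normal(0.0, 1.0, (200, n_feat))
X_site = [rng.normal(loc, 1.0, (200, n_feat)) for loc in (1.5, -1.5, 2.5)]

# One binary classifier per site: "does the response indicate damage at site i?"
classifiers = []
for X_dam in X_site:
    X = np.vstack([X_healthy, X_dam])
    y = np.r_[np.zeros(len(X_healthy)), np.ones(len(X_dam))]
    classifiers.append(SVC(kernel="rbf", probability=True).fit(X, y))

def locate(x):
    """Return the list of sites flagged as damaged for one feature vector x."""
    probs = [clf.predict_proba(x.reshape(1, -1))[0, 1] for clf in classifiers]
    return [i for i, p in enumerate(probs) if p > 0.5]

# A multi-site case is declared whenever more than one binary classifier fires,
# even though each was trained on single-site data only. The combined feature
# vector below is a crude proxy for a two-site damage response.
print(locate(X_site[0][0] + X_site[2][0] - X_healthy[0]))
```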
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
Accuracy of cochlear implant recipients in speech reception in the presence of background music.
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-12-01
This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.
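The adaptive test described above estimates a speech-in-music signal-to-noise ratio threshold. The sketch below shows one generic adaptive staircase of this kind; the simulated listener, step size and stopping rule are illustrative assumptions, not the study's actual protocol.

```python
# Sketch: a simple 1-up/1-down adaptive staircase of the kind used to estimate
# a speech-in-music signal-to-noise ratio (SNR) threshold. The listener is
# simulated by a logistic psychometric function; step size, reversal rule and
# the "true" threshold are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
true_threshold_db = 5.0                      # hypothetical listener threshold

def listener_correct(snr_db):
    p = 1.0 / (1.0 + np.exp(-(snr_db - true_threshold_db)))   # logistic listener
    return rng.random() < p

snr, step, reversals, last_dir = 20.0, 2.0, [], None
while len(reversals) < 10:
    direction = -1 if listener_correct(snr) else +1           # harder if correct
    if last_dir is not None and direction != last_dir:
        reversals.append(snr)
    last_dir = direction
    snr += direction * step

print(f"estimated SNR threshold ~ {np.mean(reversals[2:]):.1f} dB")
```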
Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S
2011-01-01
Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and mammals. In the present study, the effect of prenatal complex rhythmic music sound stimulation on postnatal spatial learning, memory and isolation stress was observed. Auditory stimulation with either music or species-specific sounds or no stimulation (control) was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks at age 24, 72 and 120 h were tested on a T-maze for spatial learning and the memory of the learnt task was assessed 24 h after training. In the posthatch chicks at all ages, the plasma corticosterone levels were estimated following 10 min of isolation. The chicks of all ages in the three groups took less (p < 0.001) time to navigate the maze over the three trials thereby showing an improvement with training. In both sound-stimulated groups, the total time taken to reach the target decreased significantly (p < 0.01) in comparison to the unstimulated control group, indicating the facilitation of spatial learning. However, this decline was more at 24 h than at later posthatch ages. When tested for memory after 24 h of training, only the music-stimulated chicks at posthatch age 24 h took a significantly longer (p < 0.001) time to traverse the maze, suggesting a temporary impairment in their retention of the learnt task. In both sound-stimulated groups at 24 h, the plasma corticosterone levels were significantly decreased (p < 0.001) and increased thereafter at 72 h (p < 0.001) and 120 h which may contribute to the differential response in spatial learning. Thus, prenatal auditory stimulation with either species-specific or complex rhythmic music sounds facilitates spatial learning, though the music stimulation transiently impairs postnatal memory. 2011 S. Karger AG, Basel.
Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach
NASA Astrophysics Data System (ADS)
Bugatti, Alessandro; Flammini, Alessandra; Migliorati, Pierangelo
2002-12-01
We focus attention on the problem of audio classification into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on zero-crossing rate and Bayesian classification. It is very simple from a computational point of view, and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains some speech superimposed on the music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on a neural network (specifically, a multi-layer perceptron). In this case we obtain better performance, at the expense of a limited growth in computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even if low-cost embedded systems are used.
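A rough sketch of the kind of pipeline this abstract describes, with zero-crossing-rate features fed to a simple Gaussian (Bayesian) classifier and to a small multi-layer perceptron, is shown below; the synthetic "speech-like" and "music-like" signals and all parameter choices are placeholders, not the paper's corpus or features.

```python
# Sketch: speech/music discrimination from zero-crossing-rate (ZCR) statistics,
# comparing a simple Gaussian (naive Bayes) classifier with a small MLP.
# The audio arrays and labels are synthetic placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

def zcr_features(signal, frame_len=512):
    """Mean and variance of the per-frame zero-crossing rate."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([zcr.mean(), zcr.var()])

rng = np.random.default_rng(1)
# Hypothetical corpus: "speech-like" noisy bursts vs. "music-like" tonal signals.
t = np.arange(16000) / 16000.0
speech_like = [rng.normal(size=16000) * (rng.random(16000) > 0.5) for _ in range(50)]
music_like = [np.sin(2 * np.pi * rng.uniform(100, 400) * t) for _ in range(50)]

X = np.array([zcr_features(s) for s in speech_like + music_like])
y = np.r_[np.zeros(50), np.ones(50)]

for model in (GaussianNB(), MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)):
    print(type(model).__name__, model.fit(X, y).score(X, y))
```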
Investigation of musicality in birdsong
Rothenberg, David; Roeske, Tina C.; Voss, Henning U.; Naguib, Marc; Tchernichovski, Ofer
2013-01-01
Songbirds spend much of their time learning, producing, and listening to complex vocal sequences we call songs. Songs are learned via cultural transmission, and singing, usually by males, has a strong impact on the behavioral state of the listeners, often promoting affiliation, pair bonding, or aggression. What is it in the acoustic structure of birdsong that makes it such a potent stimulus? We suggest that birdsong potency might be driven by principles similar to those that make music so effective in inducing emotional responses in humans: a combination of rhythms and pitches —and the transitions between acoustic states—affecting emotions through creating expectations, anticipations, tension, tension release, or surprise. Here we propose a framework for investigating how birdsong, like human music, employs the above “musical” features to affect the emotions of avian listeners. First we analyze songs of thrush nightingales (Luscinia luscinia) by examining their trajectories in terms of transitions in rhythm and pitch. These transitions show gradual escalations and graceful modifications, which are comparable to some aspects of human musicality. We then explore the feasibility of stripping such putative musical features from the songs and testing how this might affect patterns of auditory responses, focusing on fMRI data in songbirds that demonstrate the feasibility of such approaches. Finally, we explore ideas for investigating whether musical features of birdsong activate avian brains and affect avian behavior in manners comparable to music’s effects on humans. In conclusion, we suggest that birdsong research would benefit from current advances in music theory by attempting to identify structures that are designed to elicit listeners’ emotions and then testing for such effects experimentally. Birdsong research that takes into account the striking complexity of song structure in light of its more immediate function – to affect behavioral state in listeners – could provide a useful animal model for studying basic principles of music neuroscience in a system that is very accessible for investigation, and where developmental auditory and social experience can be tightly controlled. PMID:24036130
Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob J; Driscoll, Virginia; Olszewski, Carol; Knutson, John F; Turner, Christopher; Gantz, Bruce
2012-01-01
Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to the transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. The purpose of this study was to examine how accurately adults who use CIs (n = 87) and those with normal hearing (NH) (n = 17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly on items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Participants were tested on melody recognition of complex melodies (pop, country, and classical styles). Results were analyzed as a function of hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, and listening for enjoyment).
Individual Differences in Beat Perception Affect Gait Responses to Low- and High-Groove Music
Leow, Li-Ann; Parrott, Taylor; Grahn, Jessica A.
2014-01-01
Slowed gait in patients with Parkinson’s disease (PD) can be improved when patients synchronize footsteps to isochronous metronome cues, but limited retention of such improvements suggest that permanent cueing regimes are needed for long-term improvements. If so, music might make permanent cueing regimes more pleasant, improving adherence; however, music cueing requires patients to synchronize movements to the “beat,” which might be difficult for patients with PD who tend to show weak beat perception. One solution may be to use high-groove music, which has high beat salience that may facilitate synchronization, and affective properties, which may improve motivation to move. As a first step to understanding how beat perception affects gait in complex neurological disorders, we examined how beat perception ability affected gait in neurotypical adults. Synchronization performance and gait parameters were assessed as healthy young adults with strong or weak beat perception synchronized to low-groove music, high-groove music, and metronome cues. High-groove music was predicted to elicit better synchronization than low-groove music, due to its higher beat salience. Two musical tempi, or rates, were used: (1) preferred tempo: beat rate matched to preferred step rate and (2) faster tempo: beat rate adjusted to 22.5% faster than preferred step rate. For both strong and weak beat-perceivers, synchronization performance was best with metronome cues, followed by high-groove music, and worst with low-groove music. In addition, high-groove music elicited longer and faster steps than low-groove music, both at preferred tempo and at faster tempo. Low-groove music was particularly detrimental to gait in weak beat-perceivers, who showed slower and shorter steps compared to uncued walking. The findings show that individual differences in beat perception affect gait when synchronizing footsteps to music, and have implications for using music in gait rehabilitation. PMID:25374521
The razor's edge: Australian rock music impairs men's performance when pretending to be a surgeon.
Fancourt, Daisy; Burton, Thomas Mw; Williamon, Aaron
2016-12-12
Over the past few decades there has been interest in the role of music in the operating theatre. However, despite many reported benefits, a number of potentially harmful effects of music have been identified. This study aimed to explore the effects of rock and classical music on surgical speed, accuracy and perceived distraction when performing multiorgan resection in the board game Operation. Design: Single-blind, three-arm, randomised controlled trial. Setting: Imperial Festival, London, May 2016. Participants: Members of the public (n = 352) aged ≥ 16 years with no previous formal surgical training or hearing impairments. Interventions: Participants were randomised to listen through noise-cancelling headphones to either the sound of an operating theatre, rock music or classical music. Participants were then invited to remove three organs from the board game patient, Cavity Sam, using surgical tweezers. Main outcome measures: Time taken (seconds) to remove three organs from Cavity Sam; the number of mistakes made in performing the surgery; and perceived distraction, rated on a five-point Likert-type scale from 1 (not at all distracting) to 5 (very distracting). Results: Rock music impairs the performance of men but not women when undertaking complex surgical procedures in the board game Operation, increasing the time taken to operate and showing a trend towards more surgical mistakes. In addition, classical music was associated with lower perceived distraction during the game, but this effect was attenuated when factoring in how much people liked the music, with suggestions that only people who particularly liked the music of Mozart found it beneficial. Conclusions: Rock music (specifically Australian rock music) appears to have detrimental effects on surgical performance. Men are advised not to listen to rock music when either operating or playing board games.
Music and epilepsy: a critical review.
Maguire, Melissa Jane
2012-06-01
The effect of music on patients with epileptic seizures is complex and at present poorly understood. Clinical studies suggest that the processing of music within the human brain involves numerous cortical areas, extending beyond Heschl's gyrus and working within connected networks. These networks could be recruited during a seizure manifesting as musical phenomena. Similarly, if certain areas within the network are hyperexcitable, then there is a potential that particular sounds or certain music could act as epileptogenic triggers. This occurs in the case of musicogenic epilepsy, whereby seizures are triggered by music. Although it appears that this condition is rare, the exact prevalence is unknown, as often patients do not implicate music as an epileptogenic trigger and routine electroencephalography does not use sound in seizure provocation. Music therapy for refractory epilepsy remains controversial, and further research is needed to explore the potential anticonvulsant role of music. Dopaminergic system modulation and the ambivalent action of cognitive and sensory input in ictogenesis may provide possible theories for the dichotomous proconvulsant and anticonvulsant role of music in epilepsy. The effect of antiepileptic drugs and surgery on musicality should not be underestimated. Altered pitch perception in relation to carbamazepine is rare, but health care professionals should discuss this risk or consider alternative medication particularly if the patient is a professional musician or native-born Japanese. Studies observing the effect of epilepsy surgery on musicality suggest a risk with right temporal lobectomy, although the extent of this risk and correlation to size and area of resection need further delineation. This potential risk may bring into question whether tests on musical perception and memory should form part of the preoperative neuropsychological workup for patients embarking on surgery, particularly that of the right temporal lobe. Wiley Periodicals, Inc. © 2012 International League Against Epilepsy.
Inter-subject synchronization of brain responses during natural music listening.
Abrams, Daniel A; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J; Menon, Vinod
2013-05-01
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic 'real-world' music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Latin Holidays: Mexican Americans, Latin Music, and Cultural Identity in Postwar Los Angeles
ERIC Educational Resources Information Center
Macias, Anthony
2005-01-01
This essay recreates the exciting Latin music and dance scenes of post-World War II Southern California, showing how Mexican Americans produced and consumed a range of styles and, in the process, articulated their complex cultural sensibilities. By participating in a Spanish-language expressive culture that was sophisticated and cosmopolitan,…
ERIC Educational Resources Information Center
Gonzalez-Moreno, Patricia Adelaida
2012-01-01
Despite the increasing number of students in music education graduate programmes, attrition rates suggest a lack of success in retaining and assisting them to the completion of their degree. Based on the expectancy-value theory, the aim of this study was to examine students' motivations (values and competence beliefs) and their complex interaction…
Using Popular Music to Teach the Geography of the United States and Canada
ERIC Educational Resources Information Center
Smiley, Sarah L.; Post, Chris W.
2014-01-01
The introductory level course Geography of the U.S. and Canada requires students to grasp large amounts of complex material, oftentimes using a lecture-based pedagogical approach. This article outlines two ways that popular music can be successfully used in the geography classroom. First, songs are used to review key concepts and characteristics…
ERIC Educational Resources Information Center
Werner, Christian; Linke, Sandra K.
2013-01-01
The authors of this article believe intergenerational projects provide appropriate solutions to this complex phenomenon. With this in mind, we present the story behind a unique German program involving gifted young people that is designed to bring different age groups together in order to perform and share experiences through music. The…
Bringing Curriculum to Life. Enacting Project-Based Learning in Music Programs
ERIC Educational Resources Information Center
Tobias, Evan S.; Campbell, Mark Robin; Greco, Phillip
2015-01-01
At its core, project-based learning is based on the idea that real-life problems capture student interest, provoke critical thinking, and develop skills as they engage in and complete complex undertakings that typically result in a realistic product, event, or presentation to an audience. This article offers a starting point for music teachers who…
ERIC Educational Resources Information Center
Teague, Adele; Smith, Gareth Dylan
2015-01-01
Musicians are acknowledged to lead complex working lives, often characterised as portfolio careers. The higher music education research literature has tended to focus on preparing students for rich working lives and multiple identity realisations across potential roles. Extant literature does not address the area of work-life balance, which this…
Music in a Flat World: Thomas L. Friedman's Ideas and Your Program
ERIC Educational Resources Information Center
Beckmann-Collier, Aimee
2009-01-01
In his bestseller "The World is Flat," Pulitzer-winning author Thomas L. Friedman discusses the concept of globalization and its "flattening" effect on the world. Globalization is a hugely controversial and complex issue, and the effects of globalization and the new needs of a global society may be especially important to music educators. By…
"Sounds of Intent", Phase 2: Gauging the Music Development of Children with Complex Needs
ERIC Educational Resources Information Center
Ockelford, A.; Welch, G.; Jewell-Gore, L.; Cheng, E.; Vogiatzoglou, A.; Himonides, E.
2011-01-01
This article reports the latest phase of research in the "Sounds of intent" project, which is seeking, as a long-term goal, to map musical development in children and young people with severe, or profound and multiple learning difficulties (SLD or PMLD). Previous exploratory work had resulted in a framework of six putative…
Data sonification and sound visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.; Wiebel, E.
1999-07-01
Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.
A Generalized Mechanism for Perception of Pitch Patterns
Loui, Psyche; Wu, Elaine H.; Wessel, David L.; Knight, Robert T.
2009-01-01
Surviving in a complex and changeable environment relies upon the ability to extract probable recurring patterns. Here we report a neurophysiological mechanism for rapid probabilistic learning of a new system of music. Participants listened to different combinations of tones from a previously-unheard system of pitches based on the Bohlen-Pierce scale, with chord progressions that form 3:1 ratios in frequency, notably different from 2:1 frequency ratios in existing musical systems. Event-related brain potentials elicited by improbable sounds in the new music system showed emergence over a one-hour period of physiological signatures known to index sound expectation in standard Western music. These indices of expectation learning were eliminated when sound patterns were played equiprobably, and co-varied with individual behavioral differences in learning. These results demonstrate that humans utilize a generalized probability-based perceptual learning mechanism to process novel sound patterns in music. PMID:19144845
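For readers unfamiliar with the Bohlen-Pierce scale mentioned above, the sketch below generates its equal-tempered pitch set, in which a 3:1 "tritave" is divided into 13 equal steps; the 220 Hz reference frequency is an arbitrary assumption for illustration.

```python
# Sketch: frequencies of the equal-tempered Bohlen-Pierce scale, in which the
# 3:1 "tritave" (rather than the 2:1 octave) is divided into 13 equal steps.
# The reference frequency of 220 Hz is an arbitrary choice for illustration.
base = 220.0
bp_ratios = [3 ** (k / 13) for k in range(14)]   # 13 steps spanning one tritave
for k, r in enumerate(bp_ratios):
    print(f"step {k:2d}: ratio {r:.4f} -> {base * r:7.2f} Hz")
```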
The nature and perception of fluctuations in human musical rhythms
NASA Astrophysics Data System (ADS)
Hennig, Holger; Fleischmann, Ragnar; Fredebohm, Anneke; Hagmayer, York; Nagler, Jan; Witt, Annette; Theis, Fabian; Geisel, Theo
2012-02-01
Although human musical performances represent one of the most valuable achievements of mankind, the best musicians perform imperfectly. Musical rhythms are not entirely accurate and thus inevitably deviate from the ideal beat pattern. Nevertheless, computer generated perfect beat patterns are frequently devalued by listeners due to a perceived lack of human touch. Professional audio editing software therefore offers a humanizing feature which artificially generates rhythmic fluctuations. However, the built-in humanizing units are essentially random number generators producing only simple uncorrelated fluctuations. Here, for the first time, we establish long-range fluctuations as an inevitable natural companion of both simple and complex human rhythmic performances [1]. Moreover, we demonstrate that listeners strongly prefer long-range correlated fluctuations in musical rhythms. Thus, the favorable fluctuation type for humanizing interbeat intervals coincides with the one generically inherent in human musical performances. [1] HH et al., PLoS ONE,6,e26457 (2011)
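A minimal sketch of the contrast drawn above, uncorrelated versus long-range correlated "humanizing" deviations of interbeat intervals, follows; the spectral exponent, tempo and jitter magnitude are illustrative assumptions, not values from the study.

```python
# Sketch: two ways to "humanize" a perfectly regular beat. White noise gives
# uncorrelated deviations (as in typical built-in humanizers); spectrally
# shaped 1/f^beta noise gives long-range correlated deviations of the kind the
# study associates with human performance. Beta and jitter size are illustrative.
import numpy as np

def one_over_f_noise(n, beta=1.0, rng=None):
    """Generate n samples of noise with power spectrum ~ 1/f^beta."""
    rng = rng or np.random.default_rng(0)
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)           # shape the amplitude spectrum
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    return x / x.std()

n_beats = 1024
ideal_interval = 0.5                               # seconds per beat (120 bpm)
sigma = 0.01                                       # 10 ms timing jitter

white = np.random.default_rng(1).normal(0, sigma, n_beats)
correlated = sigma * one_over_f_noise(n_beats, beta=1.0)

interbeat_white = ideal_interval + white
interbeat_corr = ideal_interval + correlated
print("std (white):", interbeat_white.std(), " std (1/f):", interbeat_corr.std())
```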
Brain, music, and non-Poisson renewal processes
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo
2007-06-01
In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(-(γt)^α), with 0.5 < α < 1]. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse power law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ≤ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.
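The stretched-exponential survival function quoted above can be illustrated numerically. The sketch below draws waiting times with Ψ(t) = exp(-(γt)^α) by inverse-transform sampling and compares the empirical survival with the formula; the parameter values are arbitrary, and this does not reproduce the aging-experiment analysis itself.

```python
# Sketch: waiting times with a stretched-exponential (Weibull-type) survival
# function Psi(t) = exp(-(gamma*t)**alpha), checked against the empirical
# survival of a simulated sample. Parameter values are arbitrary.
import numpy as np

alpha, gamma = 0.7, 1.0
rng = np.random.default_rng(0)

# Inverse-transform sampling: if U ~ Uniform(0,1), then t = (-ln U)**(1/alpha)/gamma
u = rng.uniform(size=100_000)
t = (-np.log(u)) ** (1.0 / alpha) / gamma

ts = np.array([0.5, 1.0, 2.0, 5.0])
empirical = [(t > x).mean() for x in ts]
theoretical = np.exp(-(gamma * ts) ** alpha)
for x, e, th in zip(ts, empirical, theoretical):
    print(f"t={x:4.1f}  empirical {e:.4f}  exp(-(gamma t)^alpha) {th:.4f}")
```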
Music and emotion—a composer's perspective
Douek, Joel
2013-01-01
This article takes an experiential and anecdotal look at the daily lives and work of film composers as creators of music. It endeavors to work backwards from what practitioners of the art and craft of music do instinctively or unconsciously, and tries to shine a light on it as a conscious process. It examines the role of the film composer in his task to convey an often complex set of emotions, and to communicate with an immediacy and universality that often sit outside of common language. Through the experiences of the author, as well as interviews with composer colleagues, this article explores both concrete and abstract ways in which music can bring meaning and magic to words and images, and serve as an underscore to our daily lives. PMID:24348344
Ventegodt, Søren; Hermansen, Tyge Dahl; Kandel, Isack; Merrick, Joav
2008-07-13
The functioning brain behaves like one highly-structured, coherent, informational field. It can be popularly described as a "coherent ball of energy", making the idea of a local highly-structured quantum field that carries the consciousness very appealing. If that is so, the structure of the experience of music might be a quite unique window into a hidden quantum reality of the brain, and even of life itself. The structure of music is then a mirror of a much more complex, but similar, structure of the energetic field of the working brain. This paper discusses how the perception of music is organized in the human brain with respect to the known tone scales of major and minor. The patterns used by the brain seem to be similar to the overtones of vibrating matter, giving a positive experience of harmonies in major. However, we also like the minor scale, which can explain brain patterns as fractal-like, giving a symmetric "downward reflection" of the major scale into the minor scale. We analyze the implication of beautiful and ugly tones and harmonies for the model. We conclude that when it comes to simple perception of harmonies, the most simple is the most beautiful and the most complex is the most ugly, but in music, even the most disharmonic harmony can be beautiful, if experienced as a part of a dynamic release of musical tension. This can be taken as a general metaphor of painful, yet meaningful, and developing experiences in human life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Huaying, E-mail: zhaoh3@mail.nih.gov; Schuck, Peter, E-mail: zhaoh3@mail.nih.gov
2015-01-01
Global multi-method analysis for protein interactions (GMMA) can increase the precision and complexity of binding studies for the determination of the stoichiometry, affinity and cooperativity of multi-site interactions. The principles and recent developments of biophysical solution methods implemented for GMMA in the software SEDPHAT are reviewed, their complementarity in GMMA is described and a new GMMA simulation tool set in SEDPHAT is presented. Reversible macromolecular interactions are ubiquitous in signal transduction pathways, often forming dynamic multi-protein complexes with three or more components. Multivalent binding and cooperativity in these complexes are often key motifs of their biological mechanisms. Traditional solution biophysical techniques for characterizing the binding and cooperativity are very limited in the number of states that can be resolved. A global multi-method analysis (GMMA) approach has recently been introduced that can leverage the strengths and the different observables of different techniques to improve the accuracy of the resulting binding parameters and to facilitate the study of multi-component systems and multi-site interactions. Here, GMMA is described in the software SEDPHAT for the analysis of data from isothermal titration calorimetry, surface plasmon resonance or other biosensing, analytical ultracentrifugation, fluorescence anisotropy and various other spectroscopic and thermodynamic techniques. The basic principles of these techniques are reviewed and recent advances in view of their particular strengths in the context of GMMA are described. Furthermore, a new feature in SEDPHAT is introduced for the simulation of multi-method data. In combination with specific statistical tools for GMMA in SEDPHAT, simulations can be a valuable step in the experimental design.
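SEDPHAT's GMMA is standalone software, so the following is only a generic sketch of the underlying idea: fitting two simulated data sets simultaneously with shared two-site (Adair) binding constants. All concentrations, constants and noise levels are invented, and this does not reproduce SEDPHAT's implementation.

```python
# Sketch: the general idea of a global fit to multi-site binding data.
# Two simulated "experiments" (e.g., different observables of the same
# two-site receptor) are fitted simultaneously with shared macroscopic
# association constants K1, K2 (Adair formulation). All numbers are made up.
import numpy as np
from scipy.optimize import curve_fit

def fractional_saturation(L, K1, K2):
    """Average number of ligands bound per two-site macromolecule."""
    num = K1 * L + 2 * K1 * K2 * L**2
    den = 1 + K1 * L + K1 * K2 * L**2
    return num / den

rng = np.random.default_rng(0)
L = np.logspace(-8, -4, 25)                     # free ligand concentration (M)
K1_true, K2_true = 2e6, 5e5                     # per-molar association constants

# Two data sets that report the same saturation with different noise levels.
y1 = fractional_saturation(L, K1_true, K2_true) + rng.normal(0, 0.02, L.size)
y2 = fractional_saturation(L, K1_true, K2_true) + rng.normal(0, 0.05, L.size)

# Global fit: stack both experiments and share K1, K2 between them.
L_all = np.concatenate([L, L])
y_all = np.concatenate([y1, y2])
(K1_fit, K2_fit), _ = curve_fit(fractional_saturation, L_all, y_all,
                                p0=(1e6, 1e6))
print(f"K1 = {K1_fit:.3g} /M, K2 = {K2_fit:.3g} /M")
```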
Happy creativity: Listening to happy music facilitates divergent thinking.
Ritter, Simone M; Ferguson, Sam
2017-01-01
Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition (the ability to come up with creative ideas, problem solutions and products) is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valence and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings where creative thinking is needed.
Background music as a risk factor for distraction among young-novice drivers.
Brodsky, Warren; Slor, Zack
2013-10-01
There are countless beliefs about the power of music during driving. The last thing one would think about is: how safe is it to listen or sing to music? Unfortunately, collisions linked to music devices have been known for some time; adjusting the radio controls, swapping tape-cassettes and compact-discs, or searching through MP3 files are all forms of distraction that can result in a near-crash or crash. While the decrement of vehicular performance can also occur from capacity interference to central attention, whether or not music listening is a contributing factor to distraction is relatively unknown. The current study explored the effects of driver-preferred music on driver behavior. Eighty-five young-novice drivers completed six trips in an instrumented Learner's Vehicle. The study found that all participants committed at least three driver deficiencies; 27 needed a verbal warning/command and 17 required a steering or braking intervention to prevent an accident. While there were elevated positive moods and enjoyment for trips with driver-preferred music, this background also produced the most frequent severe driver miscalculations and inaccuracies, violations, and aggressive driving. However, trips with music structurally designed to generate moderate levels of perceptual complexity improved driver behavior and increased driver safety. The study is the first within-subjects on-road high-dose double-exposure clinical-trial investigation of musical stimuli on driver behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.
Coutinho, Eduardo; Cangelosi, Angelo
2011-08-01
We sustain that the structure of affect elicited by music is largely dependent on dynamic temporal patterns in low-level music structural parameters. In support of this claim, we have previously provided evidence that spatiotemporal dynamics in psychoacoustic features resonate with two psychological dimensions of affect underlying judgments of subjective feelings: arousal and valence. In this article we extend our previous investigations in two aspects. First, we focus on the emotions experienced rather than perceived while listening to music. Second, we evaluate the extent to which peripheral feedback in music can account for the predicted emotional responses, that is, the role of physiological arousal in determining the intensity and valence of musical emotions. Akin to our previous findings, we will show that a significant part of the listeners' reported emotions can be predicted from a set of six psychoacoustic features--loudness, pitch level, pitch contour, tempo, texture, and sharpness. Furthermore, the accuracy of those predictions is improved with the inclusion of physiological cues--skin conductance and heart rate. The interdisciplinary work presented here provides a new methodology to the field of music and emotion research based on the combination of computational and experimental work, which aid the analysis of the emotional responses to music, while offering a platform for the abstract representation of those complex relationships. Future developments may aid specific areas, such as, psychology and music therapy, by providing coherent descriptions of the emotional effects of specific music stimuli. 2011 APA, all rights reserved
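A hedged sketch of the comparison described above, predicting continuous arousal ratings from six psychoacoustic features alone versus with physiological cues added, is given below using plain linear regression on random placeholder data; it does not reproduce the authors' actual modelling.

```python
# Sketch: comparing how well continuous arousal ratings can be predicted from
# six psychoacoustic features alone versus the same features plus two
# physiological cues. All feature values and ratings are random placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
psychoacoustic = rng.normal(size=(n, 6))   # loudness, pitch level, pitch contour,
                                           # tempo, texture, sharpness
physiological = rng.normal(size=(n, 2))    # skin conductance, heart rate
arousal = (psychoacoustic @ rng.normal(size=6)
           + 0.5 * physiological[:, 0]
           + rng.normal(0, 0.5, n))

for name, X in [("psychoacoustic only", psychoacoustic),
                ("plus physiology", np.hstack([psychoacoustic, physiological]))]:
    r2 = cross_val_score(LinearRegression(), X, arousal, cv=5, scoring="r2")
    print(f"{name:20s} mean cross-validated R^2 = {r2.mean():.2f}")
```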
The role of the medial temporal limbic system in processing emotions in voice and music.
Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier
2014-12-01
Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Accuracy of Cochlear Implant Recipients on Speech Reception in Background Music
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-01-01
Objectives This study (a) examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of three contrasting types of background music, and (b) compared performance based upon listener groups: CI recipients using conventional long-electrode (LE) devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing (NH) adults. Methods We tested 154 LE CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 NH adults on closed-set recognition of spondees presented in three contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Outcomes Signal-to-noise thresholds for speech in music (SRTM) were examined in relation to measures of speech recognition in background noise and multi-talker babble, pitch perception, and music experience. Results SRTM thresholds varied as a function of category of background music, group membership (LE, Hybrid, NH), and age. Thresholds for speech in background music were significantly correlated with measures of pitch perception and speech in background noise thresholds; auditory status was an important predictor. Conclusions Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music. PMID:23342550
Meltzer, Benjamin; Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias
2015-01-01
The brain’s analyses of speech and music share a range of neural resources and mechanisms. Music displays a temporal structure of complexity similar to that of speech, unfolds over comparable timescales, and elicits cognitive demands in tasks involving comprehension and attention. During speech processing, synchronized neural activity of the cerebral cortex in the delta and theta frequency bands tracks the envelope of a speech signal, and this neural activity is modulated by high-level cortical functions such as speech comprehension and attention. It remains unclear, however, whether the cortex also responds to the natural rhythmic structure of music and how the response, if present, is influenced by higher cognitive processes. Here we employ electroencephalography to show that the cortex responds to the beat of music and that this steady-state response reflects musical comprehension and attention. We show that the cortical response to the beat is weaker when subjects listen to a familiar tune than when they listen to an unfamiliar, non-sensical musical piece. Furthermore, we show that in a task of intermodal attention there is a larger neural response at the beat frequency when subjects attend to a musical stimulus than when they ignore the auditory signal and instead focus on a visual one. Our findings may be applied in clinical assessments of auditory processing and music cognition as well as in the construction of auditory brain-machine interfaces. PMID:26300760
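One simple way to quantify a steady-state response at the beat frequency, in the spirit of the study above, is to compare spectral power at that frequency with neighbouring bins. The sketch below does this on a synthetic EEG-like signal; the beat rate, sampling rate and signal composition are assumptions.

```python
# Sketch: measuring the strength of a steady-state response at the beat
# frequency from one EEG-like channel. The "EEG" here is synthetic: Gaussian
# background noise plus a small sinusoid at the beat rate (2 Hz).
import numpy as np

fs, dur, f_beat = 250.0, 60.0, 2.0               # sampling rate, seconds, Hz
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1.0, t.size) + 0.3 * np.sin(2 * np.pi * f_beat * t)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

beat_bin = np.argmin(np.abs(freqs - f_beat))
neighbours = np.r_[beat_bin - 12:beat_bin - 2, beat_bin + 3:beat_bin + 13]
snr = spectrum[beat_bin] / spectrum[neighbours].mean()
print(f"power at {freqs[beat_bin]:.2f} Hz is {snr:.1f}x the neighbouring bins")
```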
Risk ON / Risk OFF: Risk-Taking Varies with Subjectively Preferred and Disliked Music
Halko, Marja-Liisa; Kaustia, Markku
2015-01-01
In this paper we conduct a within-subjects experiment in which teenagers go over 256 gambles with real money gains and losses. For each risky gamble they choose whether to participate in it, or pass. Prior to this main experiment subjects identify specific songs belonging to their favorite musical genre, as well as songs representing a style they dislike. In the main experiment we vary the music playing in the background, so that each subject hears some of their favorite music, and some disliked music, alternating in blocks of 16 gambles. We find that favorite music increases risk-taking (‘risk on’), and disliked music suppresses risk-taking (‘risk off’), compared to a baseline of no music. Literature in psychology proposes several mechanisms by which mood affects risk-taking, but none of them fully explain the results in our setting. The results are, however, consistent with the economics notion of preference complementarity, extended to the domain of risk preference. The preference structure implied by our results is more complex than previously thought, yet realistic, and consistent with recent theoretical models. More generally, this mechanism offers a potential explanation to why risk-taking is known to change over time and across contexts. PMID:26301776
Wilkins, R W; Hodges, D A; Laurienti, P J; Steen, M; Burdette, J H
2014-08-28
Most people choose to listen to music that they prefer or 'like' such as classical, country or rock. Previous research has focused on how different characteristics of music (i.e., classical versus country) affect the brain. Yet, when listening to preferred music--regardless of the type--people report they often experience personal thoughts and memories. To date, understanding how this occurs in the brain has remained elusive. Using network science methods, we evaluated differences in functional brain connectivity when individuals listened to complete songs. We show that a circuit important for internally-focused thoughts, known as the default mode network, was most connected when listening to preferred music. We also show that listening to a favorite song alters the connectivity between auditory brain areas and the hippocampus, a region responsible for memory and social emotion consolidation. Given that musical preferences are uniquely individualized phenomena and that music can vary in acoustic complexity and the presence or absence of lyrics, the consistency of our results was unexpected. These findings may explain why comparable emotional and mental states can be experienced by people listening to music that differs as widely as Beethoven and Eminem. The neurobiological and neurorehabilitation implications of these results are discussed.
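A minimal sketch of one building block of such network-science analyses, a functional connectivity matrix computed from region-of-interest time series and thresholded into a graph, is shown below; the time series are random placeholders, not the study's fMRI data.

```python
# Sketch: a functional connectivity matrix from region-of-interest (ROI) time
# series, thresholded into an adjacency matrix. Time series are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_rois, n_timepoints = 20, 300
roi_ts = rng.normal(size=(n_rois, n_timepoints))     # rows = ROIs

connectivity = np.corrcoef(roi_ts)                   # Pearson correlation matrix
np.fill_diagonal(connectivity, 0.0)

threshold = np.quantile(np.abs(connectivity), 0.9)   # keep strongest 10% of edges
adjacency = (np.abs(connectivity) >= threshold).astype(int)

degree = adjacency.sum(axis=1)                       # simple per-ROI degree
print("edges kept:", adjacency.sum() // 2, " max degree:", degree.max())
```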
Hutka, Stefanie; Bidelman, Gavin M.; Moreno, Sylvain
2013-01-01
There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain’s processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer. PMID:24454295
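As one concrete example of an entropy-based variability measure in the spirit of the proposed BSV analysis, the sketch below computes the sample entropy of a signal; this is a commonly used measure rather than necessarily the specific one the authors intend, and the parameter choices and test signals are illustrative.

```python
# Sketch: sample entropy, one common entropy-based measure of signal
# variability. The parameters m (embedding dimension) and r (tolerance, as a
# fraction of the signal's SD) follow conventional choices; the inputs here
# are synthetic signals rather than EEG.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    r *= x.std()
    n = len(x)

    def count_matches(length):
        # Embed the signal and count pairs of templates whose Chebyshev
        # distance is below r, excluding self-matches.
        emb = np.array([x[i:i + length] for i in range(n - length)])
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return (np.sum(dist <= r) - len(emb)) / 2.0

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
irregular = rng.normal(size=500)
print("SampEn sine :", sample_entropy(regular))
print("SampEn noise:", sample_entropy(irregular))
```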
Ledger, Alison; McCaffrey, Tríona
2015-01-01
Arts-based research (ABR) has emerged in music therapy in diverse ways, employing a range of interpretive paradigms and artistic media. It is notable that no consensus exists as to when and where the arts are included in the research process, or which music therapy topics are most suited to arts-based study. This diversity may pose challenges for music therapists who are developing, reading, and evaluating arts-based research. This paper provides an updated review of arts-based research literature in music therapy, along with four questions for researchers who are developing arts-based research. These questions are 1) When should the arts be introduced? 2) Which artistic medium is appropriate? 3) How should the art be understood? and 4) What is the role of the audience? We argue that these questions are key to understanding arts-based research, justifying methods, and evaluating claims arising from arts-based research. Rather than defining arts-based research in music therapy, we suggest that arts-based research should be understood as a flexible research strategy appropriate for exploring the complexities of music therapy practice. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
The Role of Involvement and Use in Multisite Evaluations
ERIC Educational Resources Information Center
Lawrenz, Frances; King, Jean A.; Ooms, Ann
2011-01-01
A cross-case analysis of four National Science Foundation (NSF) case studies identified both unique details and common themes related to promoting the use and influence of multisite evaluations. The analysis provided evidence of diverse evaluation use by stakeholders and suggested that people taking part in the multisite evaluations perceived…
Multi-Sited Global Ethnography and Travel: Gendered Journeys in Three Registers
ERIC Educational Resources Information Center
Epstein, Debbie; Fahey, Johannah; Kenway, Jane
2013-01-01
This paper joins a barely begun conversation about multi-sited and global ethnography in educational research; a conversation that is likely to intensify along with growing interest in the links between education, globalisation, internationalisation and transnationalism. Drawing on an ongoing multi-sited global ethnography of elite schools and…
Neupane, Subas; Nygård, Clas-Håkan; Oakman, Jodi
2016-06-16
Work-related musculoskeletal pain is a major occupational problem. Those with pain in multiple sites usually report worse health outcomes than those with pain in one site. This study explored the prevalence and associated predictors of multi-site pain in health care sector employees. A survey of 1348 health care sector employees across three organisations (37% response rate) collected data on job satisfaction, work-life balance, psychosocial and physical hazards, general health and work ability. Musculoskeletal discomfort was measured across 5 body regions, with pain in ≥ 2 sites defined as multi-site pain. Generalized linear models were used to identify relationships between work-related factors and multi-site pain. Over 52% of the employees reported pain in multiple body sites and 19% reported pain in one site. Poor work-life balance (PRR = 2.33, 95% CI = 1.06-5.14), physical (PRR = 7.58, 95% CI = 4.89-11.77) and psychosocial (PRR = 1.59, 95% CI = 1.00-2.57) hazard variables were related to multi-site pain (after controlling for age, gender, health and work ability). Older employees and females were more likely to report multi-site pain. Effective risk management of work-related multi-site pain must include identification and control of psychosocial and physical hazards.
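The abstract reports prevalence rate ratios (PRRs) from generalized linear models. One common way to obtain PRRs for a binary outcome is Poisson regression with robust standard errors; the sketch below assumes that model family and uses hypothetical column names, since the actual survey variables and model specification are not given in the abstract.

```python
import numpy as np
import statsmodels.api as sm

def prevalence_rate_ratios(df):
    """df: pandas DataFrame with a binary outcome 'multisite_pain' (pain in >= 2 of 5 regions)
    and illustrative predictor columns (names are assumptions, not the survey's)."""
    X = sm.add_constant(df[["poor_work_life_balance", "physical_hazards",
                            "psychosocial_hazards", "age", "gender",
                            "general_health", "work_ability"]])
    model = sm.GLM(df["multisite_pain"], X, family=sm.families.Poisson())
    result = model.fit(cov_type="HC0")   # robust SEs, since the outcome is binary, not a count
    return np.exp(result.params), np.exp(result.conf_int())   # PRRs and 95% CIs
```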
Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob; Driscoll, Virginia; Olszewski, Carol; Knutson, John F.; Turner, Christopher; Gantz, Bruce
2011-01-01
Background: Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to transmitting key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. Objective: The purpose of this study was to examine how accurately adults who use CIs (n=87) and those with normal hearing (NH) (n=17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. Methods: Participants were tested on recognition of complex real-world melodies (pop, country, classical styles). Results were analyzed as a function of hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and self-report on listening acuity and enjoyment. Results: CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly on items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Age at time of testing was negatively correlated with recognition performance. Conclusions: These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, listening for enjoyment). PMID:22803258
NASA Astrophysics Data System (ADS)
Johnston, Dennis Alan
The purpose of this study was to investigate the ability of trained musicians and musically untrained college students to discriminate music instrument timbre as a function of duration. Specific factors investigated were the thresholds for timbre discrimination as a function of duration, musical ensemble participation as training, and the relative discrimination abilities of vocalists and instrumentalists. The subjects (N = 126) were volunteer college students from intact classes in various disciplines, separated into musically untrained students (N = 43), who had not participated in musical ensembles, and trained musicians (N = 83), who had. The musicians were further divided into instrumentalists (N = 51) and vocalists (N = 32). Data were collected using the Method of Constant Stimuli with a same-different response procedure: 120 randomized, counterbalanced timbre pairs of trumpet, clarinet, or violin were presented in two blocks, at durations of 20 to 100 milliseconds, in a sequence of pitches. Complete, complex musical timbres were recorded digitally and presented in a sequence of changing pitches to more closely approximate an actual music listening experience. Under the conditions of this study, it can be concluded that the threshold for timbre discrimination as a function of duration is at or below 20 ms. Although trained musicians tended to discriminate timbre better than musically untrained college students, they did not discriminate timbre significantly better than subjects who had not participated in musical ensembles. Additionally, instrumentalists tended to discriminate timbre better than vocalists, but the difference was not significant. Recommendations for further research include suggestions for a timbre discrimination measurement tool that takes into consideration the multidimensionality of timbre and the relationship of timbre discrimination to timbre source, duration, pitch, and loudness.
Defining the biological bases of individual differences in musicality
Gingras, Bruno; Honing, Henkjan; Peretz, Isabelle; Trainor, Laurel J.; Fisher, Simon E.
2015-01-01
Advances in molecular technologies make it possible to pinpoint genomic factors associated with complex human traits. For cognition and behaviour, identification of underlying genes provides new entry points for deciphering the key neurobiological pathways. In the past decade, the search for genetic correlates of musicality has gained traction. Reports have documented familial clustering for different extremes of ability, including amusia and absolute pitch (AP), with twin studies demonstrating high heritability for some music-related skills, such as pitch perception. Certain chromosomal regions have been linked to AP and musical aptitude, while individual candidate genes have been investigated in relation to aptitude and creativity. Most recently, researchers in this field started performing genome-wide association scans. Thus far, studies have been hampered by relatively small sample sizes and limitations in defining components of musicality, including an emphasis on skills that can only be assessed in trained musicians. With opportunities to administer standardized aptitude tests online, systematic large-scale assessment of musical abilities is now feasible, an important step towards high-powered genome-wide screens. Here, we offer a synthesis of existing literatures and outline concrete suggestions for the development of comprehensive operational tools for the analysis of musical phenotypes. PMID:25646515
Hannon, Erin E; Schachner, Adena; Nave-Blodgett, Jessica E
2017-07-01
Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory information and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching stimuli from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of infancy. Copyright © 2017 Elsevier Inc. All rights reserved.
The effect of music therapy on mood states in neurological patients: a pilot study.
Magee, Wendy L; Davidson, Jane W
2002-01-01
Music therapy as a clinical intervention has been demonstrated to improve mood states in a variety of populations; however, this has not yet been shown empirically with participants with neurological impairments. This report presents the results of a pilot study examining the effect of music therapy on mood states in patients with acquired and complex neuro-disabilities. Using a single subject design, pre- and post-session mood states were measured using the Profile of Mood States (Bipolar form). Analyses examined the main effects of pre/post measures as well as interactions between the specific musical therapeutic intervention, mood state, and diagnosis. Results showed that, in terms of composed-anxious, energetic-tired, and agreeable-hostile mood states, there was a significant difference between pre- and post-music therapy intervention in a positive direction. Although the study indicated that the benefits of music therapy in treating mood states in this patient group are limited, some of the results were affected by the difficulty of the POMS-BI questionnaire for the subject group. The results are discussed considering methodological improvements and arguing for the inclusion of music therapy as an effective intervention to address negative mood states in neuro-rehabilitation populations.
Furnham, Adrian; Strbac, Lisa
2002-02-20
Previous research has found that introverts' performance on complex cognitive tasks is more negatively affected by distracters, e.g. music and background television, than extraverts' performance. This study extended previous research by examining whether background noise would be as distracting as music. In the presence of silence, background garage music and office noise, 38 introverts and 38 extraverts carried out a reading comprehension task, a prose recall task and a mental arithmetic task. It was predicted that there would be an interaction between personality and background sound on all three tasks: introverts would do less well on all of the tasks than extraverts in the presence of music and noise but in silence performance would be the same. A significant interaction was found on the reading comprehension task only, although a trend for this effect was clearly present on the other two tasks. It was also predicted that there would be a main effect for background sound: performance would be worse in the presence of music and noise than silence. Results confirmed this prediction. These findings support the Eysenckian hypothesis of the difference in optimum cortical arousal in introverts and extraverts.
Musical Aptitude Is Associated with AVPR1A-Haplotypes
Ukkola, Liisa T.; Onkamo, Päivi; Raijas, Pirre; Karma, Kai; Järvelä, Irma
2009-01-01
Artistic creativity forms the basis of music culture and the music industry. Composing, improvising and arranging music are complex creative functions of the human brain, whose biological value remains unknown. We hypothesized that practicing music is social communication that needs musical aptitude and even creativity in music. In order to understand the neurobiological basis of music in human evolution and communication we analyzed polymorphisms of the arginine vasopressin receptor 1A (AVPR1A), serotonin transporter (SLC6A4), catechol-O-methyltransferase (COMT), dopamine receptor D2 (DRD2) and tryptophan hydroxylase 1 (TPH1), genes associated with social bonding and cognitive functions, in 19 Finnish families (n = 343 members) with professional musicians and/or active amateurs. All family members were tested for musical aptitude using the auditory structuring ability test (Karma Music test; KMT) and Carl Seashore's tests for pitch (SP) and for time (ST). Data on creativity in music (composing, improvising and/or arranging music) were surveyed using a web-based questionnaire. Here we show for the first time that creative functions in music have a strong genetic component (h2 = .84; composing h2 = .40; arranging h2 = .46; improvising h2 = .62) in Finnish multigenerational families. We also show that high music test scores are significantly associated with creative functions in music (p<.0001). We discovered an overall haplotype association with the AVPR1A gene (markers RS1 and RS3) and KMT (p = 0.0008; corrected p = 0.00002), SP (p = 0.0261; corrected p = 0.0072) and combined music test scores (COMB) (p = 0.0056; corrected p = 0.0006). The AVPR1A haplotype AVR+RS1 further suggested a positive association with ST (p = 0.0038; corrected p = 0.00184) and COMB (p = 0.0083; corrected p = 0.0040) using the haplotype-based association test HBAT. The results suggest that the neurobiology of music perception and production is likely to be related to the pathways affecting intrinsic attachment behavior. PMID:19461995
Human neuromagnetic steady-state responses to amplitude-modulated tones, speech, and music.
Lamminmäki, Satu; Parkkonen, Lauri; Hari, Riitta
2014-01-01
Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears' inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs. MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales. The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli. SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth. The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas SSFs to music showed no lateralization. In addition, the right lateralization of SSFs to the speech stimuli decreased with decreasing modulation depth. The results showed that SSFs can be reliably measured to amplitude-modulated natural sounds, with slightly different hemispheric lateralization for different carrier sounds. With speech stimuli, modulation at 100% depth is required, whereas for music the 75% or even 50% modulation depths provide a reasonable compromise between the signal-to-noise ratio of SSFs and sound quality or perceptual requirements. SSF recordings thus seem feasible for assessing the early cortical processing of natural sounds.
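A minimal sketch of the phase-locked averaging used to extract a steady-state response: epochs spanning a whole number of modulation cycles are averaged, and the spectral amplitude at the 41.1 Hz modulation frequency is read out. The single-channel treatment and the epoch length are simplifying assumptions; the study used multichannel MEG and current-dipole modeling, which are not shown here.

```python
import numpy as np

def ssf_amplitude(signal, sr, f_mod=41.1, n_cycles_per_epoch=41):
    """Average a channel in phase with the modulation and return the amplitude at f_mod."""
    samples_per_epoch = int(round(sr * n_cycles_per_epoch / f_mod))   # ~integer number of cycles
    n_epochs = len(signal) // samples_per_epoch
    epochs = signal[:n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)
    avg = epochs.mean(axis=0)                                  # phase-locked average
    spectrum = 2.0 * np.abs(np.fft.rfft(avg)) / len(avg)       # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(avg), 1.0 / sr)
    return spectrum[np.argmin(np.abs(freqs - f_mod))]
```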
Changing images of violence in Rap music lyrics: 1979-1997.
Herd, Denise
2009-12-01
Rap music has been at the center of concern about the potential harmful effects of violent media on youth social behavior. This article explores the role of changing images of violence in rap music lyrics from the 1970s to the 1990s. The results indicate that there has been a dramatic and sustained increase in the level of violence in rap music. The percentage of songs mentioning violence increased from 27 per cent during 1979-1984 to 60 per cent during 1994-1997. In addition, portrayals of violence in later songs are viewed in a more positive light as shown by their increased association with glamor, wealth, masculinity, and personal prowess. Additional analyses revealed that genre, specifically gangster rap, is the most powerful predictor of the increased number of violent references in songs. The discussion suggests that violence in rap music has increased in response to the complex interplay of changing social conditions such as the elevated levels of youth violence in the 1980s and changing commercial practices within the music industry.
Musical melody and speech intonation: singing a different tune.
Zatorre, Robert J; Baum, Shari R
2012-01-01
Music and speech are often cited as characteristically human forms of communication. Both share the features of hierarchical structure, complex sound systems, and sensorimotor sequencing demands, and both are used to convey and influence emotions, among other functions [1]. Both music and speech also prominently use acoustical frequency modulations, perceived as variations in pitch, as part of their communicative repertoire. Given these similarities, and the fact that pitch perception and production involve the same peripheral transduction system (cochlea) and the same production mechanism (vocal tract), it might be natural to assume that pitch processing in speech and music would also depend on the same underlying cognitive and neural mechanisms. In this essay we argue that the processing of pitch information differs significantly for speech and music; specifically, we suggest that there are two pitch-related processing systems, one for more coarse-grained, approximate analysis and one for more fine-grained accurate representation, and that the latter is unique to music. More broadly, this dissociation offers clues about the interface between sensory and motor systems, and highlights the idea that multiple processing streams are a ubiquitous feature of neuro-cognitive architectures.
Robust Real-Time Music Transcription with a Compositional Hierarchical Model.
Pesek, Matevž; Leonardis, Aleš; Marolt, Matija
2017-01-01
The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data, transparency, which enables insights into the learned representation, as well as robustness and speed, which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers, with statistical co-occurrences as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance on multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks.
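The model itself is learned, unsupervised and compositional; purely as a toy illustration of what its lowest layers represent (partials composed into tone hypotheses), the sketch below picks spectral peaks and groups those that fit a harmonic series. All thresholds are assumptions, and this is not the authors' algorithm.

```python
import numpy as np

def detect_partials(frame, sr, n_fft=4096, threshold_db=-40.0):
    """Toy first layer: prominent spectral peaks ("tone partials") in one audio frame."""
    window = np.hanning(n_fft)
    spectrum = np.abs(np.fft.rfft(frame[:n_fft] * window))
    spectrum_db = 20.0 * np.log10(spectrum / (spectrum.max() + 1e-12) + 1e-12)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    return [(freqs[i], spectrum_db[i])
            for i in range(1, len(spectrum_db) - 1)
            if spectrum_db[i] > threshold_db
            and spectrum_db[i] >= spectrum_db[i - 1]
            and spectrum_db[i] > spectrum_db[i + 1]]

def group_into_tones(partials, max_harmonics=8, tol=0.03):
    """Toy second layer: group partials that fit a harmonic series into tone candidates."""
    freqs = [f for f, _ in partials]
    tones = []
    for f0, _ in partials:
        support = {f for f in freqs
                   for k in range(1, max_harmonics + 1)
                   if abs(f - k * f0) < tol * k * f0}
        if len(support) >= 3:          # require at least 3 supporting partials
            tones.append((f0, sorted(support)))
    return tones
```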
A Mixed Methods Sampling Methodology for a Multisite Case Study
ERIC Educational Resources Information Center
Sharp, Julia L.; Mobley, Catherine; Hammond, Cathy; Withington, Cairen; Drew, Sam; Stringfield, Sam; Stipanovic, Natalie
2012-01-01
The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…
Cross-modal associations between materic painting and classical Spanish music.
Albertazzi, Liliana; Canal, Luisa; Micciolo, Rocco
2015-01-01
The study analyses the existence of cross-modal associations in the general population between a series of paintings and a series of clips of classical (guitar) music. Because of the complexity of the stimuli, the study differs from previous analyses conducted on the association between visual and auditory stimuli, which predominantly analyzed single tones and colors by means of psychophysical methods and forced choice responses. More recently, the relation between music and shape has been analyzed in terms of music visualization, or relative to the role played by emotion in the association, and free response paradigms have also been accepted. In our study, in order to investigate what attributes may be responsible for the phenomenon of the association between visual and auditory stimuli, the clip/painting association was tested in two experiments: the first used the semantic differential on a unidimensional rating scale of adjectives; the second employed a specific methodology based on subjective perceptual judgments in a first-person account. Because of the complexity of the stimuli, it was decided to have the maximum possible uniformity of style, composition and musical color. The results show that multisensory features expressed by adjectives such as "quick," "agitated," and "strong," and their antonyms "slow," "calm," and "weak" characterized both the visual and auditory stimuli, and that they may have had a role in the associations. The results also suggest that the main perceptual features responsible for the clip/painting associations were hue, lightness, timbre, and musical tempo. Contrary to what was expected, the musical mode usually related to feelings of happiness (major mode) or to feelings of sadness (minor mode), and spatial orientation (vertical and horizontal) did not play a significant role in the association. The consistency of the associations was shown when evaluated on the whole sample, and after considering the different backgrounds and expertise of the subjects. No substantial difference was found between expert and non-expert subjects. The methods used in the experiment (semantic differential and subjective judgements in a first-person account) corroborated the interpretation of the results as associations due to patterns of qualitative similarity present in stimuli of different sensory modalities and experienced as such by the subjects. The main result of the study consists in showing the existence of cross-modal associations between highly complex stimuli; furthermore, the second experiment employed a specific methodology based on subjective perceptual judgments.
Nonlinear analysis of EEGs of patients with major depression during different emotional states.
Akdemir Akar, Saime; Kara, Sadık; Agambayev, Sümeyra; Bilgiç, Vedat
2015-12-01
Although patients with major depressive disorder (MDD) have dysfunctions in cognitive behaviors and the regulation of emotions, the underlying brain dynamics of the pathophysiology are unclear. Therefore, nonlinear techniques can be used to understand the dynamic behavior of the EEG signals of MDD patients. To investigate and clarify the dynamics of MDD patients' brains during different emotional states, EEG recordings were analyzed using nonlinear techniques. The purpose of the present study was to assess whether there are different EEG complexities that discriminate between MDD patients and healthy controls during emotional processing. Therefore, nonlinear parameters, such as Katz fractal dimension (KFD), Higuchi fractal dimension (HFD), Shannon entropy (ShEn), Lempel-Ziv complexity (LZC) and Kolmogorov complexity (KC), were computed from the EEG signals of two groups under different experimental states: noise (negative emotional content) and music (positive emotional content) periods. First, higher complexity values were generated by MDD patients relative to controls. Significant differences were obtained in the frontal and parietal scalp locations using KFD (p<0.001), HFD (p<0.05), and LZC (p=0.05). Second, lower complexities were observed only in the controls when they were subjected to music compared to the resting baseline state in the frontal (p<0.05) and parietal (p=0.005) regions. In contrast, the LZC and KFD values of patients increased in the music period compared to the resting state in the frontal region (p<0.05). Third, the patients' brains had higher complexities when they were exposed to the noise stimulus than did the controls' brains. Moreover, MDD patients' negative emotional bias was demonstrated by their higher brain complexities during the noise period than during the music stimulus. Additionally, we found that the KFD, HFD and LZC values were more sensitive in discriminating between patients and controls than the ShEn and KC measures, according to the results of ANOVA and ROC calculations. It can be concluded that nonlinear analysis may be a useful and discriminative tool in investigating the neuro-dynamic properties of the brain in patients with MDD during emotional stimulation. Copyright © 2015 Elsevier Ltd. All rights reserved.
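Two of the complexity measures used in this study, the Higuchi fractal dimension and Shannon entropy, can be computed from an EEG epoch roughly as follows; k_max and the bin count are assumed analysis choices, not values taken from the paper.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_k, log_length = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Curve length for the subsampled series, with Higuchi's normalization factor
            curve = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (k * (len(idx) - 1))
            lengths.append(curve / k)
        log_inv_k.append(np.log(1.0 / k))
        log_length.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_length, 1)   # slope of log L(k) vs log(1/k)
    return slope

def shannon_entropy(x, n_bins=64):
    """Shannon entropy (bits) of the amplitude distribution of an EEG epoch."""
    hist, _ = np.histogram(x, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```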
Ventegodt, Søren; Hermansen, Tyge Dahl; Kandel, Isack; Merrick, Joav
2008-01-01
The functioning brain behaves like one highly-structured, coherent, informational field. It can be popularly described as a “coherent ball of energy”, making the idea of a local highly-structured quantum field that carries the consciousness very appealing. If that is so, the structure of the experience of music might be a quite unique window into a hidden quantum reality of the brain, and even of life itself. The structure of music is then a mirror of a much more complex, but similar, structure of the energetic field of the working brain. This paper discusses how the perception of music is organized in the human brain with respect to the known tone scales of major and minor. The patterns used by the brain seem to be similar to the overtones of vibrating matter, giving a positive experience of harmonies in major. However, we also like the minor scale, which can explain brain patterns as fractal-like, giving a symmetric “downward reflection” of the major scale into the minor scale. We analyze the implication of beautiful and ugly tones and harmonies for the model. We conclude that when it comes to simple perception of harmonies, the most simple is the most beautiful and the most complex is the most ugly, but in music, even the most disharmonic harmony can be beautiful, if experienced as a part of a dynamic release of musical tension. This can be taken as a general metaphor of painful, yet meaningful, and developing experiences in human life. PMID:18661052
Cortical entrainment to music and its modulation by expertise
Doelling, Keith B.; Poeppel, David
2015-01-01
Recent studies establish that cortical oscillations track naturalistic speech in a remarkably faithful way. Here, we test whether such neural activity, particularly low-frequency (<8 Hz; delta–theta) oscillations, similarly entrain to music and whether experience modifies such a cortical phenomenon. Music of varying tempi was used to test entrainment at different rates. In three magnetoencephalography experiments, we recorded from nonmusicians, as well as musicians with varying years of experience. Recordings from nonmusicians demonstrate cortical entrainment that tracks musical stimuli over a typical range of tempi, but not at tempi below 1 note per second. Importantly, the observed entrainment correlates with performance on a concurrent pitch-related behavioral task. In contrast, the data from musicians show that entrainment is enhanced by years of musical training, at all presented tempi. This suggests a bidirectional relationship between behavior and cortical entrainment, a phenomenon that has not previously been reported. Additional analyses focus on responses in the beta range (∼15–30 Hz)—often linked to delta activity in the context of temporal predictions. Our findings provide evidence that the role of beta in temporal predictions scales to the complex hierarchical rhythms in natural music and enhances processing of musical content. This study builds on important findings on brainstem plasticity and represents a compelling demonstration that cortical neural entrainment is tightly coupled to both musical training and task performance, further supporting a role for cortical oscillatory activity in music perception and cognition. PMID:26504238
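Entrainment of low-frequency oscillations to music is often quantified as phase locking between the neural signal and the stimulus envelope in the delta-theta band. The sketch below illustrates that generic computation only and is not the authors' specific analysis; the band edges and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, sr, lo=0.5, hi=8.0, order=3):
    """Zero-phase Butterworth band-pass in the delta-theta range."""
    b, a = butter(order, [lo / (sr / 2), hi / (sr / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_locking_value(neural, stimulus_envelope, sr):
    """Phase locking between a neural channel and the music amplitude envelope."""
    phase_neural = np.angle(hilbert(bandpass(neural, sr)))
    phase_stim = np.angle(hilbert(bandpass(stimulus_envelope, sr)))
    return np.abs(np.mean(np.exp(1j * (phase_neural - phase_stim))))
```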
Sad and happy emotion discrimination in music by children with cochlear implants.
Hopyan, Talar; Manno, Francis A M; Papsin, Blake C; Gordon, Karen A
2016-01-01
Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized that children using CIs would have impaired perception of mode cues relative to their normal hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were children using CIs (13 right, 3 left; M = 12.7, SD = 2.6 years) and 16 normal hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Music was randomly presented with alterations of transposed mode, tempo, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below those with normal hearing (98%). The CI group primarily used tempo cues, whereas normal hearing children relied more on mode cues. Transposing both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy versus sad music, reflecting a very different hearing strategy from that of their normal hearing peers. Slower reaction times by children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies to process emotion in music than their normal hearing peers.
ERIC Educational Resources Information Center
Richards, Gretchen M.
2012-01-01
This multisite case study examined how institutional and university counselor policies effectively respond to cyber violent acts. Stake's (2006) multisite case study methodology was used to identify seven themes from current literature. Two sites with four participants were selected. The participants included two counseling directors and the…
Network Theory of Human Decision Making
2012-06-14
…must share the same complexity as the brain. This explains why music can be interpreted as the mirror of the brain [3]. According to [11] a… origin and that phenomenon of neural entrainment has the same origin, with a close connection with the phenomenon of cooperation-induced… renewal critical events, Central European Journal of Physics, 7, 421-431 (2009). [3] D. Adams, P. Grigolini, "Music, New Aesthetic and…
Psychophysical basis for consonant musical intervals
NASA Astrophysics Data System (ADS)
Resnick, L.
1981-06-01
A suggestion is made to explain the acceptance of certain musical intervals as consonant and others as dissonant. The proposed explanation involves the relation between the time required to perceive a definite pitch and the period of a complex tone. If the former time is greater than the latter, the tone is consonant; otherwise it is dissonant. A quantitative examination leads to agreement with empirical data.
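A worked example of the proposed criterion, under the assumption that the pitch-perception time is on the order of 30 ms (a hypothetical value, not taken from the paper): for two tones in a frequency ratio p:q in lowest terms, the resulting complex tone repeats with a period of q periods of the lower tone, which can then be compared with the perception time.

```python
from math import gcd

def complex_tone_period_ms(f1_hz, p, q):
    """Period (ms) of the complex tone formed by f1 and f2 = (p/q) * f1."""
    common = gcd(p, q)
    return 1000.0 * (q // common) / f1_hz   # fundamental of the pair is f1 / q

PITCH_PERCEPTION_TIME_MS = 30.0   # assumed order of magnitude, not a value from the paper

for name, (p, q) in {"perfect fifth": (3, 2),
                     "major third": (5, 4),
                     "minor second": (16, 15)}.items():
    period = complex_tone_period_ms(440.0, p, q)
    verdict = "consonant" if PITCH_PERCEPTION_TIME_MS > period else "dissonant"
    print(f"{name}: period {period:.1f} ms -> {verdict}")
```

On this reading, the fifth (about 4.5 ms) and major third (about 9 ms) fall well inside the assumed perception time and count as consonant, while the minor second (about 34 ms) exceeds it and counts as dissonant, consistent with the criterion stated in the abstract.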
ERIC Educational Resources Information Center
Gromova, Chulpan R.; Saitova, Lira R.
2016-01-01
The relevance of the research problem stems from the need for music teachers with a high level of professional competence and from the need to determine the content and principles of an interdisciplinary approach to its formation. The aim of the article lies in the development and testing of a complex of pedagogical conditions for the formation of professional…
JPRS Report, Soviet Union, Kommunist.
1988-01-22
…despite all trials. Today our literature, graphic arts, music, motion pictures and theater have their own Soviet classics which embody the best… of the most sensitive and emotional aspects of social life: increased impressionability and responsiveness and a sharpened moral feeling "function"… musical life is becoming richer and filled with more individuality. It is as though culture and art are being restored in their truly complex…
Happy creativity: Listening to happy music facilitates divergent thinking
Ferguson, Sam
2017-01-01
Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition—the ability to come up with creative ideas, problem solutions and products—is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying in valence and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to ‘happy music’ (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed. PMID:28877176
Martens, Marilee A; Jungers, Melissa K; Steele, Anita L
2011-09-01
Williams syndrome (WS) is a neurogenetic developmental disorder characterized by an increased affinity for music, deficits in verbal memory, and atypical brain development. Music has been shown to improve verbal memory in typical individuals as well as those with learning difficulties, but no studies have examined this relationship in WS. The aim of our two studies was to examine whether music can enhance verbal memory in individuals with WS. In Study 1, we presented a memory task of eight spoken or sung sentences that described an animal and identified its group name to 38 individuals with WS. Study 2, involving another group of individuals with WS (n=38), included six spoken or sung sentences that identified an animal group name. In both studies, those who had participated in formal music lessons scored significantly better on the verbal memory task when the sentences were sung than when they were spoken. Those who had not taken formal lessons showed no such benefit. We also found that increased enjoyment of music and heightened emotional reactions to music did not impact performance on the memory task. These compelling findings provide the first evidence that musical experience may enhance verbal memory in individuals with WS and shed more light on the complex relationship between aspects of cognition and altered neurodevelopment in this unique disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Barrett, Frederick S; Janata, Petr
2016-10-01
Nostalgia is an emotion that is most commonly associated with personally and socially relevant memories. It is primarily positive in valence and is readily evoked by music. It is also an idiosyncratic experience that varies between individuals based on affective traits. We identified frontal, limbic, paralimbic, and midbrain brain regions in which the strength of the relationship between ratings of nostalgia evoked by music and blood-oxygen-level-dependent (BOLD) signal was predicted by affective personality measures (nostalgia proneness and the sadness scale of the Affective Neuroscience Personality Scales) that are known to modulate the strength of nostalgic experiences. We also identified brain areas including the inferior frontal gyrus, substantia nigra, cerebellum, and insula in which time-varying BOLD activity correlated more strongly with the time-varying tonal structure of nostalgia-evoking music than with music that evoked no or little nostalgia. These findings illustrate one way in which the reward and emotion regulation networks of the brain are recruited during the experiencing of complex emotional experiences triggered by music. These findings also highlight the importance of considering individual differences when examining the neural responses to strong and idiosyncratic emotional experiences. Finally, these findings provide a further demonstration of the use of time-varying stimulus-specific information in the investigation of music-evoked experiences. Copyright © 2016 Elsevier Ltd. All rights reserved.
Generaal, Ellen; Vogelzangs, Nicole; Macfarlane, Gary J; Geenen, Rinie; Smit, Johannes H; de Geus, Eco J C N; Penninx, Brenda W J H; Dekker, Joost
2016-05-01
Dysregulated biological stress systems and adverse life events, independently and in interaction, have been hypothesised to initiate chronic pain. We examine whether (1) function of biological stress systems, (2) adverse life events, and (3) their combination predict the onset of chronic multisite musculoskeletal pain. Subjects (n=2039) of the Netherlands Study of Depression and Anxiety, free from chronic multisite musculoskeletal pain at baseline, were identified using the Chronic Pain Grade Questionnaire and followed up for the onset of chronic multisite musculoskeletal pain over 6 years. Baseline assessment of biological stress systems comprised function of the hypothalamic-pituitary-adrenal axis (1-h cortisol awakening response, evening levels, postdexamethasone levels), the immune system (basal and lipopolysaccharide-stimulated inflammation) and the autonomic nervous system (heart rate, pre-ejection period, SD of the normal-to-normal interval, respiratory sinus arrhythmia). The number of recent adverse life events was assessed at baseline using the List of Threatening Events Questionnaire. Hypothalamic-pituitary-adrenal axis, immune system and autonomic nervous system functioning was not associated with onset of chronic multisite musculoskeletal pain, either by itself or in interaction with adverse life events. Adverse life events did predict onset of chronic multisite musculoskeletal pain (HR per event=1.14, 95% CI 1.04 to 1.24, p=0.005). This longitudinal study could not confirm that dysregulated biological stress systems increase the risk of developing chronic multisite musculoskeletal pain. Adverse life events were a risk factor for the onset of chronic multisite musculoskeletal pain, suggesting that psychosocial factors play a role in triggering the development of this condition. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Oikkonen, J.; Huang, Y.; Onkamo, P.; Ukkola-Vuoti, L.; Raijas, P.; Karma, K.; Vieland, V. J.; Järvelä, I.
2014-01-01
Humans have developed the perception, production and processing of sounds into the art of music. A genetic contribution to these skills of musical aptitude has long been suggested. We performed a genome-wide scan in 76 pedigrees (767 individuals) characterized for the ability to discriminate pitch (SP), duration (ST) and sound patterns (KMT), which are primary capacities for music perception. Using the Bayesian linkage and association approach implemented in program package KELVIN, especially designed for complex pedigrees, several SNPs near genes affecting the functions of the auditory pathway and neurocognitive processes were identified. The strongest association was found at 3q21.3 (rs9854612) with combined SP, ST and KMT test scores (COMB). This region is located a few dozen kilobases upstream of the GATA binding protein 2 (GATA2) gene. GATA2 regulates the development of cochlear hair cells and the inferior colliculus (IC), which are important in tonotopic mapping. The highest probability of linkage was obtained for phenotype SP at 4p14, located next to the region harboring the protocadherin 7 gene, PCDH7. Two SNPs rs13146789 and rs13109270 of PCDH7 showed strong association. PCDH7 has been suggested to play a role in cochlear and amygdaloid complexes. Functional class analysis showed that inner ear and schizophrenia related genes were enriched inside the linked regions. This study is the first to show the importance of auditory pathway genes in musical aptitude. PMID:24614497
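The published analysis used KELVIN's Bayesian linkage and association framework on full pedigrees. As a much simpler illustration of testing a SNP against a quantitative music score under an additive model (and ignoring family structure, which the actual study could not do), one could regress the score on the minor-allele count.

```python
from scipy.stats import linregress

def additive_association(genotypes, scores):
    """Additive-model association: regress a music test score on minor-allele count (0/1/2).

    genotypes: sequence of 0/1/2 allele counts, e.g. at rs9854612 (hypothetical input);
    scores: the corresponding COMB (or SP/ST/KMT) scores for unrelated individuals.
    """
    result = linregress(genotypes, scores)
    return result.slope, result.pvalue
```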
Kaganovich, Natalya; Kim, Jihyun; Herring, Caryn; Schumaker, Jennifer; MacPherson, Megan; Weber-Fox, Christine
2012-01-01
Using electrophysiology, we have examined two questions in relation to musical training – namely, whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in sounds’ timbre. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French Horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials, while voice sounds on 20% of trials. In other blocks, the reverse was true. Participants heard naturally recorded sounds in half of experimental blocks and their spectrally-rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 ERP component not only to vocal sounds but also to their never before heard spectrally-rotated versions. We, therefore, conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians’ accuracy tended to suffer less from the change in sounds’ timbre, especially when deviants were musical notes. This behavioral finding was accompanied by a marginally larger re-orienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension. PMID:23301775
Exploiting Multisite Gateway and pENFRUIT plasmid collection for fruit genetic engineering.
Estornell, Leandro H; Granell, Antonio; Orzaez, Diego
2012-01-01
MultiSite Gateway cloning techniques based on homologous recombination facilitate the combinatorial assembly of basic genetic pieces (i.e., promoters, CDS, and terminators) into gene expression or gene silencing cassettes. pENFRUIT is a collection of MultiSite Triple Gateway Entry vectors dedicated to genetic engineering in fruits. It comprises a number of fruit-operating promoters as well as C-terminal tags adapted to the Gateway standard. In this way, flanking regulatory/labeling sequences can be easily Gateway-assembled with a given gene of interest for its ectopic expression or silencing in fruits. The resulting gene constructs can be analyzed in stable transgenic plants or in transient expression assays, the latter allowing fast testing of the increasing number of combinations arising from MultiSite methodology. A detailed description of the use of MultiSite cloning methodology for the assembly of pENFRUIT elements is presented.
Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation
Limb, Charles J.; Braun, Allen R.
2008-01-01
To investigate the neural substrates that underlie spontaneous musical performance, we examined improvisation in professional jazz pianists using functional MRI. By employing two paradigms that differed widely in musical complexity, we found that improvisation (compared to production of over-learned musical sequences) was consistently characterized by a dissociated pattern of activity in the prefrontal cortex: extensive deactivation of dorsolateral prefrontal and lateral orbital regions with focal activation of the medial prefrontal (frontal polar) cortex. Such a pattern may reflect a combination of psychological processes required for spontaneous improvisation, in which internally motivated, stimulus-independent behaviors unfold in the absence of central processes that typically mediate self-monitoring and conscious volitional control of ongoing performance. Changes in prefrontal activity during improvisation were accompanied by widespread activation of neocortical sensorimotor areas (that mediate the organization and execution of musical performance) as well as deactivation of limbic structures (that regulate motivation and emotional tone). This distributed neural pattern may provide a cognitive context that enables the emergence of spontaneous creative activity. PMID:18301756
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang,Z.; Fenter, P.; Cheng, L.
2006-01-01
The X-ray standing wave technique was used to probe the sensitivity of Zn²⁺ and Sr²⁺ ion adsorption to changes in both the adsorbed ion coverage and the background electrolyte species and concentrations at the rutile (α-TiO₂) (110)-aqueous interface. Measurements were made with various background electrolytes (NaCl, NaTr, RbCl, NaBr) at concentrations as high as 1 m. The results demonstrate that Zn²⁺ and Sr²⁺ reside primarily in the condensed layer and that the ion heights above the Ti-O surface plane are insensitive to ionic strength and the choice of background electrolyte (with <0.1 Å changes over the full compositional range). The lack of any specific anion coadsorption upon probing with Br⁻, coupled with the insensitivity of Zn²⁺ and Sr²⁺ cation heights to changes in the background electrolyte, implies that anions do not play a significant role in the adsorption of these divalent metal ions to the rutile (110) surface. Absolute ion coverage measurements for Zn²⁺ and Sr²⁺ show a maximum Stern-layer coverage of ~0.5 monolayer, with no significant variation in height as a function of Stern-layer coverage. These observations are discussed in the context of Gouy-Chapman-Stern models of the electrical double layer developed from macroscopic sorption and pH-titration studies of rutile powder suspensions. Direct comparison between these experimental observations and the MUltiSIte Complexation (MUSIC) model predictions of cation surface coverage as a function of ionic strength revealed good agreement between measured and predicted surface coverages with no adjustable parameters.
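For context, the diffuse-layer part of the Gouy-Chapman-Stern picture referred to above relates diffuse-layer charge to potential and ionic strength; a minimal sketch for a symmetric z:z electrolyte at 25 °C is shown below. The full MUSIC/CD-MUSIC treatment additionally requires surface-site speciation and Stern-layer capacitances, which are not included here.

```python
import numpy as np

F = 96485.0       # Faraday constant, C/mol
R = 8.314         # gas constant, J/(mol K)
T = 298.15        # temperature, K (25 °C)
EPS0 = 8.854e-12  # vacuum permittivity, F/m
EPS_W = 78.5      # relative permittivity of water at 25 °C

def diffuse_layer_charge(psi_d_volts, ionic_strength_mol_per_L, z=1):
    """Gouy-Chapman diffuse-layer charge density (C/m^2) for a symmetric z:z electrolyte."""
    c = ionic_strength_mol_per_L * 1000.0   # convert mol/L to mol/m^3
    prefactor = np.sqrt(8.0 * EPS_W * EPS0 * R * T * c)
    return -prefactor * np.sinh(z * F * psi_d_volts / (2.0 * R * T))

# Example: diffuse-layer charge balancing a -50 mV diffuse-layer potential in 0.1 M electrolyte
# print(diffuse_layer_charge(-0.050, 0.1))
```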
The Role of the Baldwin Effect in the Evolution of Human Musicality.
Podlipniak, Piotr
2017-01-01
From the biological perspective, human musicality is the term used to refer to a set of abilities which enable the recognition and production of music. Since music is a complex phenomenon which consists of features that represent different stages of the evolution of human auditory abilities, the question concerning the evolutionary origin of music must focus mainly on music-specific properties and their possible biological function or functions. What usually differentiates music from other forms of human sound expression is a syntactically organized structure based on pitch classes and rhythmic units measured in reference to musical pulse. This structure is an auditory (not acoustical) phenomenon, meaning that it is a human-specific interpretation of sounds achieved thanks to certain characteristics of the nervous system. There is historical and cross-cultural diversity of this structure, which indicates that learning is an important part of the development of human musicality. However, the fact that there is no culture without music, the syntax of which is implicitly learned and easily recognizable, suggests that human musicality may be an adaptive phenomenon. If the use of a syntactically organized structure as a communicative phenomenon were adaptive, it would be only in circumstances in which this structure is recognizable by more than one individual. It is therefore difficult to explain the adaptive value of an ability to recognize a syntactically organized structure that appeared accidentally, as the result of mutation or recombination, in an environment without such a structure. A possible solution is offered by the Baldwin effect, in which a culturally invented trait is transformed into an instinctive trait by means of natural selection. It is proposed that in the beginning musical structure was invented and learned thanks to neural plasticity. Because structurally organized music appeared adaptive (phenotypic adaptation), e.g., as a tool of social consolidation, our predecessors started to spend a lot of time and energy on music. In such circumstances, an individual could accidentally be born with genetically controlled development of new neural circuitry which allowed him or her to learn music faster and with less energy use.
Armony, Jorge L; Aubé, William; Angulo-Perkins, Arafat; Peretz, Isabelle; Concha, Luis
2015-04-23
Several studies have identified, using functional magnetic resonance imaging (fMRI), a region within the superior temporal gyrus that preferentially responds to musical stimuli. However, in most cases, significant responses to other complex stimuli, particularly human voice, were also observed. Thus, it remains unknown if the same neurons respond to both stimulus types, albeit with different strengths, or whether the responses observed with fMRI are generated by distinct, overlapping neural populations. To address this question, we conducted an fMRI experiment in which short music excerpts and human vocalizations were presented in a pseudo-random order. Critically, we performed an adaptation-based analysis in which responses to the stimuli were analyzed taking into account the category of the preceding stimulus. Our results confirm the presence of a region in the anterior STG that responds more strongly to music than voice. Moreover, we found a music-specific adaptation effect in this area, consistent with the existence of music-preferred neurons. Lack of differences between musicians and non-musicians argues against an expertise effect. These findings provide further support for neural separability between music and speech within the temporal lobe. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Omar, Rohani; Henley, Susie M.D.; Bartlett, Jonathan W.; Hailstone, Julia C.; Gordon, Elizabeth; Sauter, Disa A.; Frost, Chris; Scott, Sophie K.; Warren, Jason D.
2011-01-01
Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions. PMID:21385617
Defining the biological bases of individual differences in musicality.
Gingras, Bruno; Honing, Henkjan; Peretz, Isabelle; Trainor, Laurel J; Fisher, Simon E
2015-03-19
Advances in molecular technologies make it possible to pinpoint genomic factors associated with complex human traits. For cognition and behaviour, identification of underlying genes provides new entry points for deciphering the key neurobiological pathways. In the past decade, the search for genetic correlates of musicality has gained traction. Reports have documented familial clustering for different extremes of ability, including amusia and absolute pitch (AP), with twin studies demonstrating high heritability for some music-related skills, such as pitch perception. Certain chromosomal regions have been linked to AP and musical aptitude, while individual candidate genes have been investigated in relation to aptitude and creativity. Most recently, researchers in this field started performing genome-wide association scans. Thus far, studies have been hampered by relatively small sample sizes and limitations in defining components of musicality, including an emphasis on skills that can only be assessed in trained musicians. With opportunities to administer standardized aptitude tests online, systematic large-scale assessment of musical abilities is now feasible, an important step towards high-powered genome-wide screens. Here, we offer a synthesis of existing literatures and outline concrete suggestions for the development of comprehensive operational tools for the analysis of musical phenotypes. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
What does music express? Basic emotions and beyond.
Juslin, Patrik N
2013-01-01
Numerous studies have investigated whether music can reliably convey emotions to listeners and, if so, what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of "multiple layers" of musical expression of emotions. The "core" layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this "core" layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions, though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.
The "silent" imprint of musical training.
Klein, Carina; Liem, Franziskus; Hänggi, Jürgen; Elmer, Stefan; Jäncke, Lutz
2016-02-01
Playing a musical instrument at a professional level is a complex multimodal task requiring information integration between different brain regions supporting auditory, somatosensory, motor, and cognitive functions. These kinds of task-specific activations are known to have a profound influence on both the functional and structural architecture of the human brain. However, it remains largely unknown whether this specific imprint of musical practice can still be detected during rest, when no musical instrument is being played. Therefore, we applied high-density electroencephalography and evaluated whole-brain functional connectivity as well as small-world topologies (i.e., node degree) during resting state in a sample of 15 professional musicians and 15 nonmusicians. As expected, musicians demonstrate increased intra- and interhemispheric functional connectivity between those brain regions that are typically involved in music perception and production, such as the auditory, the sensorimotor, and prefrontal cortex as well as Broca's area. In addition, mean connectivity within this specific network was positively related to musical skill and the total number of training hours. Thus, we conclude that musical training distinctively shapes intrinsic functional network characteristics in such a manner that its signature can still be detected during a task-free condition. Hum Brain Mapp 37:536-546, 2016. © 2015 Wiley Periodicals, Inc.
Wilkins, R. W.; Hodges, D. A.; Laurienti, P. J.; Steen, M.; Burdette, J. H.
2014-01-01
Most people choose to listen to music that they prefer or ‘like’ such as classical, country or rock. Previous research has focused on how different characteristics of music (i.e., classical versus country) affect the brain. Yet, when listening to preferred music—regardless of the type—people report they often experience personal thoughts and memories. To date, understanding how this occurs in the brain has remained elusive. Using network science methods, we evaluated differences in functional brain connectivity when individuals listened to complete songs. We show that a circuit important for internally-focused thoughts, known as the default mode network, was most connected when listening to preferred music. We also show that listening to a favorite song alters the connectivity between auditory brain areas and the hippocampus, a region responsible for memory and social emotion consolidation. Given that musical preferences are uniquely individualized phenomena and that music can vary in acoustic complexity and the presence or absence of lyrics, the consistency of our results was unexpected. These findings may explain why comparable emotional and mental states can be experienced by people listening to music that differs as widely as Beethoven and Eminem. The neurobiological and neurorehabilitation implications of these results are discussed. PMID:25167363
Shared worlds: multi-sited ethnography and nursing research.
Molloy, Luke; Walker, Kim; Lakeman, Richard
2017-03-22
Background Ethnography, originally developed for the study of supposedly small-scale societies, is now faced with an increasingly mobile, changing and globalised world. Cultural identities can exist without reference to a specific location and extend beyond regional and national boundaries. It is therefore no longer imperative that the sole object of the ethnographer's practice should be a geographically bounded site. Aim To present a critical methodological review of multi-sited ethnography. Discussion Understanding that it can no longer be taken with any certainty that location alone determines culture, multi-sited ethnography provides a method of contextualising multi-sited social phenomena. The method enables researchers to examine social phenomena that are simultaneously produced in different locations. It has been used to undertake cultural analysis of diverse areas such as organ trafficking, global organisations, technologies and anorexia. Conclusion The authors contend that multi-sited ethnography is particularly suited to nursing research as it provides researchers with an ethnographic method that is more relevant to the interconnected world of health and healthcare services. Implications for practice Multi-sited ethnography provides nurse researchers with an approach to cultural analysis in areas such as the social determinants of health, healthcare services and the effects of health policies across multiple locations.
The complex network of the Brazilian Popular Music
NASA Astrophysics Data System (ADS)
de Lima e Silva, D.; Medeiros Soares, M.; Henriques, M. V. C.; Schivani Alves, M. T.; de Aguiar, S. G.; de Carvalho, T. P.; Corso, G.; Lucena, L. S.
2004-02-01
We study Brazilian Popular Music from a network perspective. We define the Brazilian Popular Music Network (BPMN) as the graph whose vertices are songwriters and whose links are determined by the existence of at least one common singer. The degree distribution of this graph shows power-law and exponential regions. The exponent of the power law is compatible with the values obtained by the evolving-network algorithms found in the literature. The average path length of the BPMN is similar to that of the corresponding random graph; its clustering coefficient, however, is significantly larger. These results indicate that the BPMN forms a small-world network.
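The small-world diagnostic used above (clustering much larger than a size-matched random graph, path length comparable to it) is straightforward to reproduce. The sketch below, using networkx, is a hedged illustration on a stand-in Watts-Strogatz graph, since the BPMN data set itself is not part of this record; the function name and graph parameters are assumptions.

    import networkx as nx

    def small_world_summary(G):
        """Compare clustering and average path length of G with a random graph
        having the same number of nodes and edges (the small-world diagnostic
        described in the abstract)."""
        n, m = G.number_of_nodes(), G.number_of_edges()
        R = nx.gnm_random_graph(n, m, seed=0)
        # Restrict path-length calculations to the largest connected component.
        giant_G = G.subgraph(max(nx.connected_components(G), key=len))
        giant_R = R.subgraph(max(nx.connected_components(R), key=len))
        return {
            "clustering": nx.average_clustering(G),
            "clustering_random": nx.average_clustering(R),
            "avg_path_length": nx.average_shortest_path_length(giant_G),
            "avg_path_length_random": nx.average_shortest_path_length(giant_R),
            "degree_histogram": nx.degree_histogram(G),
        }

    if __name__ == "__main__":
        # Stand-in graph used only to exercise the metrics.
        G = nx.watts_strogatz_graph(n=500, k=6, p=0.05, seed=1)
        summary = small_world_summary(G)
        for key, value in summary.items():
            if key != "degree_histogram":
                print(f"{key}: {value:.3f}")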
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.
1998-01-05
An interdisciplinary project encompassing sound synthesis, music composition, sonification, and visualization of music is facilitated by the high-performance computing capabilities and the virtual-reality environments available at Argonne National Laboratory. The paper describes the main features of the project's centerpiece, DIASS (Digital Instrument for Additive Sound Synthesis); "A.N.L.-folds", an equivalence class of compositions produced with DIASS; and the application of DIASS in two experiments in the sonification of complex scientific data. Some of the larger issues connected with this project, such as the changing ways in which both scientists and composers perform their tasks, are briefly discussed.
Niolon, Phyllis Holditch; Taylor, Bruce G; Latzman, Natasha E; Vivolo-Kantor, Alana M; Valle, Linda Anne; Tharp, Andra T
2016-03-01
This paper describes the multisite, longitudinal cluster randomized controlled trial (RCT) design of the evaluation of the Dating Matters: Strategies to Promote Healthy Relationships initiative, and discusses challenges faced in conducting this evaluation. Health departments in 4 communities are partnering with middle schools in high-risk, urban communities to implement 2 models of teen dating violence (TDV) prevention over 4 years. Schools were randomized to receive either the Dating Matters comprehensive strategy or the "standard of care" strategy (an existing, evidence-based TDV prevention curriculum). Our design permits comparison of the relative effectiveness of the comprehensive and standard of care strategies. Multiple cohorts of students from 46 middle schools are surveyed in middle school and high school, and parents and educators from participating schools are also surveyed. Challenges discussed in conducting a multisite RCT include site variability, separation of implementation and evaluation responsibilities, school retention, parent engagement in research activities, and working within the context of high-risk urban schools and communities. We discuss the strengths and weaknesses of our approaches to these challenges in the hopes of informing future research. Despite multiple challenges, the design of the Dating Matters evaluation remains strong. We hope this paper provides researchers who are conducting complex evaluations of behavioral interventions with thoughtful discussion of the challenges we have faced and potential solutions to such challenges.
Streiner, David L; Adair, Carol; Aubry, Tim; Barker, Jayne; Distasio, Jino; Hwang, Stephen W; Komaroff, Janina; Latimer, Eric; Somers, Julian; Zabkiewicz, Denise M
2011-01-01
Introduction Housing First is a complex housing and support intervention for homeless individuals with mental health problems. It has a sufficient knowledge base and interest to warrant a test of wide-scale implementation in various settings. This protocol describes the quantitative design of a Canadian five city, $110 million demonstration project and provides the rationale for key scientific decisions. Methods A pragmatic, mixed methods, multi-site field trial of the effectiveness of Housing First in Vancouver, Winnipeg, Toronto, Montreal and Moncton, is randomising approximately 2500 participants, stratified by high and moderate need levels, into intervention and treatment as usual groups. Quantitative outcome measures are being collected over a 2-year period and a qualitative process evaluation is being completed. Primary outcomes are housing stability, social functioning and, for the economic analyses, quality of life. Hierarchical linear modelling is the primary data analytic strategy. Ethics and dissemination Research ethics board approval has been obtained from 11 institutions and a safety and adverse events committee is in place. The results of the multi-site analyses of outcomes at 12 months and 2 years will be reported in a series of core scientific journal papers. Extensive knowledge exchange activities with non-academic audiences will occur throughout the duration of the project. Trial registration number This study has been registered with the International Standard Randomised Control Trial Number Register and assigned ISRCTN42520374. PMID:22102645
NASA Astrophysics Data System (ADS)
Rabinowitch, Tal-Chen
2015-12-01
Clarke, DeNora and Vuoskoski have carried out a formidable task of preparing a profound and encompassing review [3] that brings together two highly complex and multifaceted concepts, empathy and music, as well as a specific aspect of empathy that is highly relevant to society, cultural understanding. They have done an extraordinary service in synthesizing the growing, but still highly fragmented body of work in this area. At the heart of this review lies an intricate model that the authors develop, which accounts for a variety of mechanisms and cognitive processes underlying musical empathic engagement. In what follows I would like to first point out what I think is unique about this model. Then, I will briefly describe the need for including in any such model a developmental angle.
Toward a dynamical theory of body movement in musical performance
Demos, Alexander P.; Chaffin, Roger; Kant, Vivek
2014-01-01
Musicians sway expressively as they play in ways that seem clearly related to the music, but quantifying the relationship has been difficult. We suggest that a complex systems framework and its accompanying tools for analyzing non-linear dynamical systems can help identify the motor synergies involved. Synergies are temporary assemblies of parts that come together to accomplish specific goals. We assume that the goal of the performer is to convey musical structure and expression to the audience and to other performers. We provide examples of how dynamical systems tools, such as recurrence quantification analysis (RQA), can be used to examine performers' movements and relate them to the musical structure and to the musician's expressive intentions. We show how detrended fluctuation analysis (DFA) can be used to identify synergies and discover how they are affected by the performer's expressive intentions. PMID:24904490
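Of the two dynamical-systems tools named above, detrended fluctuation analysis (DFA) is compact enough to sketch. The implementation below is a generic DFA, not the authors' analysis pipeline; the window sizes and the white-noise test input are assumptions, and a movement time series from a performer would replace the synthetic signal.

    import numpy as np

    def dfa(x, scales=None, order=1):
        """Detrended fluctuation analysis of a 1-D series x.
        Returns the scaling exponent alpha plus the (scale, fluctuation) pairs.
        alpha near 0.5 indicates uncorrelated noise; alpha near 1.0 indicates
        1/f-like long-range correlation."""
        x = np.asarray(x, dtype=float)
        y = np.cumsum(x - x.mean())                      # integrated profile
        if scales is None:
            scales = np.unique(np.logspace(np.log10(16), np.log10(len(x) // 4), 20).astype(int))
        flucts = []
        for s in scales:
            n_windows = len(y) // s
            f2 = []
            for i in range(n_windows):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                coeffs = np.polyfit(t, seg, order)       # local polynomial trend
                f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
            flucts.append(np.sqrt(np.mean(f2)))
        alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
        return alpha, scales, np.array(flucts)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        white = rng.standard_normal(4096)                # expect alpha near 0.5
        print("white-noise alpha:", round(dfa(white)[0], 2))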
Preissler, Pia; Kordovan, Sarah; Ullrich, Anneke; Bokemeyer, Carsten; Oechsle, Karin
2016-05-12
Research has shown positive effects of music therapy on the physical and mental well-being of terminally ill patients. This study aimed to identify favored subjects and psychosocial needs of terminally ill cancer patients during music therapy and associated factors. Forty-one patients receiving specialized inpatient palliative care prospectively performed a music therapy intervention consisting of at least two sessions (total number of sessions: 166; per patient average: 4, range, 2-10). Applied music therapy methods and content were not pre-determined. Therapeutic subjects and psychosocial needs addressed in music therapy sessions were identified from prospective semi-structured "field notes" using qualitative content analysis. Patient- and treatment-related characteristics as well as factors related to music and music therapy were assessed by questionnaire or retrieved from medical records. Seven main categories of subjects were identified: "condition, treatment, further care", "coping with palliative situation", "emotions and feelings", "music and music therapy", "biography", "social environment", and "death, dying, and spiritual topics". Patients addressed an average of 4.7 different subjects (range, 1-7). Some subjects were associated with gender (p = .022) and prior impact of music in patients' life (p = .012). The number of subjects per session was lower when receptive music therapy methods were used (p = .040). Psychosocial needs were categorized into nine main dimensions: "relaxing and finding comfort", "communication and dialogue", "coping and activation of internal resources", "activity and vitality", "finding expression", "sense of self and reflection", "finding emotional response", "defocusing and diversion", and "structure and hold". Patients expressed an average of 4.9 psychosocial needs (range, 1-8). Needs were associated with age, parallel art therapy (p = .010), role of music in patient's life (p = .021), and the applied music therapy method (p = .012). Seven main categories of therapeutically relevant subjects and nine dimensions of psychosocial needs could be identified when music therapy was delivered to terminally ill cancer patients. Results showed that patients with complex psychosocial situations addressed an average number of five subjects and needs, respectively. Some socio-demographic factors, the role of music in patient's lives and the applied music therapy methods may be related with the kind and number of expressed subjects and needs.
Kang, Robert; Nimmons, Grace Liu; Drennan, Ward; Longnion, Jeff; Ruffin, Chad; Nie, Kaibao; Won, Jong Ho; Worman, Tina; Yueh, Bevan; Rubinstein, Jay
2009-08-01
Assessment of cochlear implant outcomes centers around speech discrimination. Despite dramatic improvements in speech perception, music perception remains a challenge for most cochlear implant users. No standardized test exists to quantify music perception in a clinically practical manner. This study presents the University of Washington Clinical Assessment of Music Perception (CAMP) test as a reliable and valid music perception test for English-speaking, adult cochlear implant users. Forty-two cochlear implant subjects were recruited from the University of Washington Medical Center cochlear implant program and referred by two implant manufacturers. Ten normal-hearing volunteers were drawn from the University of Washington Medical Center and associated campuses. A computer-driven, self-administered test was developed to examine three specific aspects of music perception: pitch direction discrimination, melody recognition, and timbre recognition. The pitch subtest used an adaptive procedure to determine just-noticeable differences for complex tone pitch direction discrimination within the range of 1 to 12 semitones. The melody and timbre subtests assessed recognition of 12 commonly known melodies played with complex tones in an isochronous manner and eight musical instruments playing an identical five-note sequence, respectively. Testing was repeated for cochlear implant subjects to evaluate test-retest reliability. Normal-hearing volunteers were also tested to demonstrate differences in performance in the two populations. For cochlear implant subjects, pitch direction discrimination just-noticeable differences ranged from 1 to 8.0 semitones (Mean = 3.0, SD = 2.3). Melody and timbre recognition ranged from 0 to 94.4% correct (mean = 25.1, SD = 22.2) and 20.8 to 87.5% (mean = 45.3, SD = 16.2), respectively. Each subtest significantly correlated at least moderately with both Consonant-Nucleus-Consonant (CNC) word recognition scores and spondee recognition thresholds in steady state noise and two-talker babble. Intraclass coefficients demonstrating test-retest correlations for pitch, melody, and timbre were 0.85, 0.92, and 0.69, respectively. Normal-hearing volunteers had a mean pitch direction discrimination threshold of 1.0 semitone, the smallest interval tested, and mean melody and timbre recognition scores of 87.5 and 94.2%, respectively. The CAMP test discriminates a wide range of music perceptual ability in cochlear implant users. Moderate correlations were seen between music test results and both Consonant-Nucleus-Consonant word recognition scores and spondee recognition thresholds in background noise. Test-retest reliability was moderate to strong. The CAMP test provides a reliable and valid metric for a clinically practical, standardized evaluation of music perception in adult cochlear implant users.
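The abstract states that the CAMP pitch subtest estimates a just-noticeable difference (JND) adaptively within 1-12 semitones but does not specify the rule. The sketch below therefore uses a common 2-down/1-up staircase purely as an assumption; the function names, step size, and simulated listener are illustrative, not the CAMP procedure itself.

    import random

    def staircase_jnd(respond, start=12.0, floor=1.0, ceiling=12.0,
                      step=1.0, reversals_needed=8):
        """Estimate a pitch-direction JND (in semitones) with a 2-down/1-up
        adaptive staircase. `respond(interval)` must return True when the
        listener answers correctly."""
        interval, correct_streak, direction = start, 0, 0
        reversal_values = []
        while len(reversal_values) < reversals_needed:
            if respond(interval):
                correct_streak += 1
                if correct_streak == 2:              # two correct in a row -> harder
                    correct_streak = 0
                    new_direction = -1
                    interval = max(floor, interval - step)
                else:
                    continue                          # no step yet
            else:                                     # one error -> easier
                correct_streak = 0
                new_direction = +1
                interval = min(ceiling, interval + step)
            if direction and new_direction != direction:
                reversal_values.append(interval)      # record the reversal point
            direction = new_direction
        last = reversal_values[-6:]
        return sum(last) / len(last)                  # mean of last reversals

    if __name__ == "__main__":
        # Simulated listener whose true threshold is around 3 semitones.
        def simulated_listener(interval, true_jnd=3.0):
            p_correct = 0.5 + 0.5 * min(1.0, max(0.0, (interval - true_jnd + 1) / 2))
            return random.random() < p_correct
        print("estimated JND (semitones):", round(staircase_jnd(simulated_listener), 2))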
McCaskie, Andrew W; Kenny, Dianna T; Deshmukh, Sandeep
2011-05-02
Trainee surgeons must acquire expert status in the context of reduced hours, reduced operating room time and the need to learn complex skills involving screen-mediated techniques, computers and robotics. Ever more sophisticated surgical simulation strategies have been helpful in providing surgeons with the opportunity to practise, but not all of these strategies are widely available. Similarities in the motor skills required in skilled musical performance and surgery suggest that models of music learning, and particularly skilled motor development, may be applicable in training surgeons. More attention should be paid to factors associated with optimal arousal and optimal performance in surgical training - lessons learned from helping anxious musicians optimise performance and manage anxiety may also be transferable to trainee surgeons. The ways in which the trainee surgeon moves from novice to expert need to be better understood so that this process can be expedited using current knowledge in other disciplines requiring the performance of complex fine motor tasks with high cognitive load under pressure.
Mukhopadhyay, Himadri; de Wet, Ben; Clemens, Lara; Maini, Philip K; Allard, Jun; van der Merwe, P Anton; Dushek, Omer
2016-04-26
Multisite phosphorylation is ubiquitous in cellular signaling and is thought to provide signaling proteins with additional regulatory mechanisms. Indeed, mathematical models have revealed a large number of mechanisms by which multisite phosphorylation can produce switchlike responses. The T cell antigen receptor (TCR) is a multisubunit receptor on the surface of T cells that is a prototypical multisite substrate as it contains 20 sites that are distributed on 10 conserved immunoreceptor tyrosine-based activation motifs (ITAMs). The TCR ζ-chain is a homodimer subunit that contains six ITAMs (12 sites) and exhibits a number of properties that are predicted to be sufficient for a switchlike response. We have used cellular reconstitution to systematically study multisite phosphorylation of the TCR ζ-chain. We find that multisite phosphorylation proceeds by a nonsequential random mechanism, and find no evidence that multiple ITAMs modulate a switchlike response but do find that they alter receptor potency and maximum phosphorylation. Modulation of receptor potency can be explained by a reduction in molecular entropy of the disordered ζ-chain upon phosphorylation. We further find that the tyrosine kinase ZAP-70 increases receptor potency but does not modulate the switchlike response. In contrast to other multisite proteins, where phosphorylations act in strong concert to modulate protein function, we suggest that the multiple ITAMs on the TCR function mainly to amplify subsequent signaling. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Introduction to the special issue on college student mental health.
Castillo, Linda G; Schwartz, Seth J
2013-04-01
This article provides an introduction to the special issue on college student mental health. It gives an overview of the establishment of the Multi-Site University Study of Identity and Culture (MUSIC) collaborative by a group of national experts on culture and identity. Information about the procedures used to collect a nationally representative sample of college students is provided. Data were collected from 30 university sites across the United States. The sample comprised 10,573 undergraduate college students, of which 73% were women, 63% White, 9% African American/Black, 14% Latino/Hispanic, 13% Asian American, and 1% Other. The special issue comprises a compilation of 8 studies that used the dataset specifically created to examine the issues of emerging adults, culture, and identity. Student mental health problems are a growing concern on college campuses. Studies covered in this special issue have implications for policy development regarding college alcohol use and traumatic victimization, include attention to underrepresented minority and immigrant groups on college campuses, and focus on positive as well as pathological aspects of the college experience. © 2013 Wiley Periodicals, Inc.
Evaluation of a Stereo Music Preprocessing Scheme for Cochlear Implant Users.
Buyens, Wim; van Dijk, Bas; Moonen, Marc; Wouters, Jan
2018-01-01
Although for most cochlear implant (CI) users good speech understanding is reached (at least in quiet environments), the perception and the appraisal of music are generally unsatisfactory. The improvement in music appraisal was evaluated in CI participants by using a stereo music preprocessing scheme implemented on a take-home device, in a comfortable listening environment. The preprocessing allowed adjusting the balance among vocals/bass/drums and other instruments, and was evaluated for different genres of music. The correlation between the preferred settings and the participants' speech and pitch detection performance was investigated. During the initial visit preceding the take-home test, the participants' speech-in-noise perception and pitch detection performance were measured, and a questionnaire about their music involvement was completed. The take-home device was provided, including the stereo music preprocessing scheme and seven playlists with six songs each. The participants were asked to adjust the balance by means of a turning wheel to make the music sound most enjoyable, and to repeat this three times for all songs. Twelve postlingually deafened CI users participated in the study. The data were collected by means of a take-home device, which preserved all the preferred settings for the different songs. Statistical analysis was done with a Friedman test (with post hoc Wilcoxon signed-rank test) to check the effect of "Genre." The correlations were investigated with Pearson's and Spearman's correlation coefficients. All participants preferred a balance significantly different from the original balance. Differences across participants were observed which could not be explained by perceptual abilities. An effect of "Genre" was found, showing significantly smaller preferred deviation from the original balance for Golden Oldies compared to the other genres. The stereo music preprocessing scheme showed an improvement in music appraisal with complex music and hence might be a good tool for music listening, training, or rehabilitation for CI users. American Academy of Audiology
"Brown Paper Packages"? A Sociocultural Perspective on Young Children's Ideas in Science
ERIC Educational Resources Information Center
Robbins, Jill
2005-01-01
How do we see young children's thinking in science? Is it, as much previous research has led us to believe, that their ideas can be neatly boxed like "brown paper packages tied up with strings"--as the song from "The Sound of Music" goes? Or are their ideas like "wild geese that fly with the moon on their wings" ("Sound of Music"): fluid, complex,…
Dynamic Data Driven Operator Error Early Warning System
2015-08-13
Modulation of EEG Theta Band Signal Complexity by Music Therapy
NASA Astrophysics Data System (ADS)
Bhattacharya, Joydeep; Lee, Eun-Jeong
The primary goal of this study was to investigate the impact of monochord (MC) sounds, a type of archaic sound used in music therapy, on the neural complexity of EEG signals obtained from patients undergoing chemotherapy. The secondary goal was to compare the EEG signal complexity values for monochords with those for progressive muscle relaxation (PMR), an alternative therapy for relaxation. Forty cancer patients were randomly allocated to one of the two relaxation groups, MC and PMR, over a period of six months; continuous EEG signals were recorded during the first and last sessions. EEG signals were analyzed by applying signal mode complexity, a measure of complexity of neuronal oscillations. Across sessions, both groups showed a modulation of complexity of the beta-2 band (20-29 Hz) at midfrontal regions, but only the MC group showed a modulation of complexity of the theta band (3.5-7.5 Hz) at posterior regions. Thus, the two types of intervention produced different changes in frequency-band-specific EEG complexity. Moreover, these different neural responses to monochord listening and PMR emerged after regular relaxation interventions over a short time span.
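The band-specific complexity analysis described above can be approximated in a few lines. The sketch below band-pass filters a trace to the theta band and then computes a generic Lempel-Ziv complexity as a stand-in; this is explicitly a substitute measure, not the study's "signal mode complexity", and the sampling rate and synthetic signal are assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def theta_band(x, fs, lo=3.5, hi=7.5, order=4):
        """Zero-phase band-pass filter an EEG trace to the theta band (3.5-7.5 Hz)."""
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    def lempel_ziv_complexity(x):
        """Normalized Lempel-Ziv complexity of the signal binarized at its median.
        NOTE: a generic complexity measure, not the 'signal mode complexity'
        used in the study."""
        med = np.median(x)
        s = "".join("1" if v > med else "0" for v in x)
        words, w = set(), ""
        for ch in s:                      # incremental phrase parsing
            w += ch
            if w not in words:
                words.add(w)
                w = ""
        c = len(words) + (1 if w else 0)
        n = len(s)
        return c * np.log2(n) / n         # standard normalization for binary sequences

    if __name__ == "__main__":
        fs = 250.0                                        # assumed sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(0)
        eeg = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
        theta = theta_band(eeg, fs)
        print("theta-band LZ complexity:", round(lempel_ziv_complexity(theta), 3))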
Correlated microtiming deviations in jazz and rock music.
Sogorski, Mathias; Geisel, Theo; Priesemann, Viola
2018-01-01
Musical rhythms performed by humans typically show temporal fluctuations. While these have been characterized in simple rhythmic tasks, the nature of temporal fluctuations when several musicians perform music jointly, in all its natural complexity, remains an open question. To study such fluctuations in over 100 original jazz and rock/pop recordings, played with and without metronome, we developed a semi-automated workflow allowing the extraction of cymbal beat onsets with millisecond precision. Analyzing the inter-beat interval (IBI) time series revealed evidence for two long-range correlated processes characterized by power laws in the IBI power spectral densities. One process dominates on short timescales (t < 8 beats) and reflects microtiming variability in the generation of single beats. The other dominates on longer timescales and reflects slow tempo variations. Whereas the latter did not show differences between musical genres (jazz vs. rock/pop), the process on short timescales showed higher variability for jazz recordings, indicating that jazz makes stronger use of microtiming fluctuations within a measure than rock/pop. Our results elucidate principles of rhythmic performance and can inspire algorithms for artificial music generation. By studying microtiming fluctuations in original music recordings, we bridge the gap between minimalistic tapping paradigms and expressive rhythmic performances.
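Given a list of beat onset times (the output of an onset-extraction workflow, which is not reproduced here), the IBI series and the power-law exponent of its spectrum can be estimated as below. The synthetic "drummer" and Welch parameters are assumptions used only to make the sketch self-contained.

    import numpy as np
    from scipy.signal import welch

    def ibi_spectral_exponent(onset_times):
        """Given beat onset times (seconds), return the inter-beat-interval (IBI)
        series and the exponent beta of a power-law fit S(f) ~ 1/f^beta to its
        power spectral density (frequency measured in cycles per beat)."""
        ibi = np.diff(np.asarray(onset_times, dtype=float))
        ibi = ibi - ibi.mean()
        f, psd = welch(ibi, fs=1.0, nperseg=min(256, len(ibi)))
        keep = f > 0                                   # drop the DC bin
        slope = np.polyfit(np.log10(f[keep]), np.log10(psd[keep]), 1)[0]
        return ibi, -slope                             # beta > 0 for correlated series

    if __name__ == "__main__":
        # Synthetic drummer: 120 bpm with slow correlated tempo drift plus jitter.
        rng = np.random.default_rng(0)
        n_beats = 1024
        drift = np.cumsum(rng.standard_normal(n_beats)) * 1e-4   # slow tempo wander
        jitter = rng.standard_normal(n_beats) * 5e-3             # per-beat microtiming
        onsets = np.cumsum(0.5 + drift + jitter)
        _, beta = ibi_spectral_exponent(onsets)
        print(f"estimated spectral exponent beta ~ {beta:.2f}")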
Dance choreography is coordinated with song repertoire in a complex avian display.
Dalziell, Anastasia H; Peters, Richard A; Cockburn, Andrew; Dorland, Alexandra D; Maisey, Alex C; Magrath, Robert D
2013-06-17
All human cultures have music and dance, and the two activities are so closely integrated that many languages use just one word to describe both. Recent research points to a deep cognitive connection between music and dance-like movements in humans, fueling speculation that music and dance have coevolved and prompting the need for studies of audiovisual displays in other animals. However, little is known about how nonhuman animals integrate acoustic and movement display components. One striking property of human displays is that performers coordinate dance with music by matching types of dance movements with types of music, as when dancers waltz to waltz music. Here, we show that a bird also temporally coordinates a repertoire of song types with a repertoire of dance-like movements. During displays, male superb lyrebirds (Menura novaehollandiae) sing four different song types, matching each with a unique set of movements and delivering song and dance types in a predictable sequence. Crucially, display movements are both unnecessary for the production of sound and voluntary, because males sometimes sing without dancing. Thus, the coordination of independently produced repertoires of acoustic and movement signals is not a uniquely human trait. Copyright © 2013 Elsevier Ltd. All rights reserved.
A young infant with musicogenic epilepsy.
Lin, Kuang-Lin; Wang, Huei-Shyong; Kao, Pan-Fu
2003-05-01
Musicogenic epilepsy is a relatively rare form of epilepsy. In its pure form, it is characterized by epileptic seizures that are provoked exclusively by listening to music. The usual type of seizure is partial complex or generalized tonic-clonic. Precipitating factors are quite specific, such as listening to only one composition or the actual playing of music on an instrument. However, simple sound also can be a trigger. We report a 6-month-old infant with musicogenic epilepsy. She manifested right-sided focal seizures with occasional generalization. The seizures were frequently triggered by loud music, especially that by the Beatles. The interictal electroencephalography results were normal. Ictal spikes were present throughout the left temporal area during continuous electroencephalographic monitoring. Brain magnetic resonance imaging results were normal, whereas single-photon emission computed tomography of the brain revealed hypoperfusion of the left temporal area. The young age and epileptogenic left temporal lobe lesion in this patient with musicogenic epilepsy were unusual characteristics. Theoretically, three levels of integration are involved in music processing in the brain. The involved integration of this infant's brain may be the sensory level rather than the emotional level. Nevertheless, the personal musicality and musical style of the Beatles might play an important role in this patient's epilepsy.
Hierarchical processing in music, language, and action: Lashley revisited.
Fitch, W Tecumseh; Martins, Mauricio D
2014-05-01
Sixty years ago, Karl Lashley suggested that complex action sequences, from simple motor acts to language and music, are a fundamental but neglected aspect of neural function. Lashley demonstrated the inadequacy of then-standard models of associative chaining, positing a more flexible and generalized "syntax of action" necessary to encompass key aspects of language and music. He suggested that hierarchy in language and music builds upon a more basic sequential action system, and provided several concrete hypotheses about the nature of this system. Here, we review a diverse set of modern data concerning musical, linguistic, and other action processing, finding them largely consistent with an updated neuroanatomical version of Lashley's hypotheses. In particular, the lateral premotor cortex, including Broca's area, plays important roles in hierarchical processing in language, music, and at least some action sequences. Although the precise computational function of the lateral prefrontal regions in action syntax remains debated, Lashley's notion, that this cortical region implements a working-memory buffer or stack scannable by posterior and subcortical brain regions, is consistent with considerable experimental data. © 2014 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals Inc. on behalf of The New York Academy of Sciences.
Functional MRI of music emotion processing in frontotemporal dementia.
Agustus, Jennifer L; Mahoney, Colin J; Downey, Laura E; Omar, Rohani; Cohen, Miriam; White, Mark J; Scott, Sophie K; Mancini, Laura; Warren, Jason D
2015-03-01
Frontotemporal dementia is an important neurodegenerative disorder of younger life led by profound emotional and social dysfunction. Here we used fMRI to assess brain mechanisms of music emotion processing in a cohort of patients with frontotemporal dementia (n = 15) in relation to healthy age-matched individuals (n = 11). In a passive-listening paradigm, we manipulated levels of emotion processing in simple arpeggio chords (mode versus dissonance) and emotion modality (music versus human emotional vocalizations). A complex profile of disease-associated functional alterations was identified with separable signatures of musical mode, emotion level, and emotion modality within a common, distributed brain network, including posterior and anterior superior temporal and inferior frontal cortices and dorsal brainstem effector nuclei. Separable functional signatures were identified post-hoc in patients with and without abnormal craving for music (musicophilia): a model for specific abnormal emotional behaviors in frontotemporal dementia. Our findings indicate the potential of music to delineate neural mechanisms of altered emotion processing in dementias, with implications for future disease tracking and therapeutic strategies. © 2014 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals Inc. on behalf of The New York Academy of Sciences.
Sculpting 3D worlds with music: advanced texturing techniques
NASA Astrophysics Data System (ADS)
Greuel, Christian; Bolas, Mark T.; Bolas, Niko; McDowall, Ian E.
1996-04-01
Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer- generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D `music videos' and interactive landscapes for live performance.
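The Soundsculpt Toolkit described above is not publicly available, but the underlying idea, extracting rhythm and frequency descriptors from audio and mapping them onto object behaviors, can be sketched with modern tools. The example below uses librosa; the mapping ranges, the output parameter names, and the file path are all illustrative assumptions, not the original system.

    import numpy as np
    import librosa

    def music_to_scene_params(path):
        """Extract simple rhythm and frequency descriptors from an audio file and
        map them to hypothetical virtual-object parameters (rotation speed, hue).
        A rough modern analogue of the mapping idea, not the Soundsculpt Toolkit."""
        y, sr = librosa.load(path, sr=None, mono=True)
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
        # Map tempo (roughly 40-200 bpm) to a rotation speed in deg/s,
        # and spectral centroid (roughly 0-8 kHz) to a hue in [0, 1].
        rotation_speed = np.interp(tempo, [40, 200], [10, 360])
        hue = float(np.clip(centroid / 8000.0, 0.0, 1.0))
        return {"tempo_bpm": float(tempo),
                "rotation_deg_per_s": float(rotation_speed),
                "hue": hue}

    if __name__ == "__main__":
        print(music_to_scene_params("example.wav"))   # path is a placeholder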
Four principles of bio-musicology.
Fitch, W Tecumseh
2015-03-19
As a species-typical trait of Homo sapiens, musicality represents a cognitively complex and biologically grounded capacity worthy of intensive empirical investigation. Four principles are suggested here as prerequisites for a successful future discipline of bio-musicology. These involve adopting: (i) a multicomponent approach which recognizes that musicality is built upon a suite of interconnected capacities, of which none is primary; (ii) a pluralistic Tinbergian perspective that addresses and places equal weight on questions of mechanism, ontogeny, phylogeny and function; (iii) a comparative approach, which seeks and investigates animal homologues or analogues of specific components of musicality, wherever they can be found; and (iv) an ecologically motivated perspective, which recognizes the need to study widespread musical behaviours across a range of human cultures (and not focus solely on Western art music or skilled musicians). Given their pervasiveness, dance and music created for dancing should be considered central subcomponents of music, as should folk tunes, work songs, lullabies and children's songs. Although the precise breakdown of capacities required by the multicomponent approach remains open to debate, and different breakdowns may be appropriate to different purposes, I highlight four core components of human musicality (song, drumming, social synchronization and dance) as widespread and pervasive human abilities spanning across cultures, ages and levels of expertise. Each of these has interesting parallels in the animal kingdom (often analogies but in some cases apparent homologies also). Finally, I suggest that the search for universal capacities underlying human musicality, neglected for many years, should be renewed. The broad framework presented here illustrates the potential for a future discipline of bio-musicology as a rich field for interdisciplinary and comparative research.
Naturally Biased Associations Between Music and Poetry.
Albertazzi, Liliana; Canal, Luisa; Micciolo, Rocco; Ferrari, Fulvio; Sitta, Sebastiano; Hachen, Iacopo
2017-02-01
The study analyzes the existence of naturally biased associations in the general population between a series of musical selections and a series of quatrains. Differently from other studies in the field, the association is tested between complex stimuli involving literary texts, which increases the load of the semantic factors. The stimuli were eight quatrains taken from the same poem and eight musical clips taken from a classical musical version of the poem. The experiment was conducted in two phases. First, the participants were asked to rate 10 couples of opposite adjectives on a continuous bipolar scale when reading a quatrain or when listening to a musical clip; then they were asked to associate a given clip directly with the quatrains in decreasing order. The results showed the existence of significant associations between the semantics of the quatrains and the musical selections. They also confirmed the correspondences experienced by the composer when writing the musical version of the poem. Connotative dimensions such as rough or smooth, distressing or serene, turbid or clear, and gloomy or bright, characterizing both the semantic and the auditory stimuli, may have played a role in the associations. The results also shed light on the relative performance of the two different methodologies adopted in the two phases of the test. Finally, specific musical components and their combinations are likely to have played an important part in the associations, an aspect that shall be addressed in further studies.
Enhanced timing abilities in percussionists generalize to rhythms without a musical beat.
Cameron, Daniel J; Grahn, Jessica A
2014-01-01
The ability to entrain movements to music is arguably universal, but it is unclear how specialized training may influence this. Previous research suggests that percussionists have superior temporal precision in perception and production tasks. Such superiority may be limited to temporal sequences that resemble real music or, alternatively, may generalize to musically implausible sequences. To test this, percussionists and nonpercussionists completed two tasks that used rhythmic sequences varying in musical plausibility. In the beat tapping task, participants tapped with the beat of a rhythmic sequence over 3 stages: finding the beat (as an initial sequence played), continuation of the beat (as a second sequence was introduced and played simultaneously), and switching to a second beat (the initial sequence finished, leaving only the second). The meters of the two sequences were either congruent or incongruent, as were their tempi (minimum inter-onset intervals). In the rhythm reproduction task, participants reproduced rhythms of four types, ranging from high to low musical plausibility: Metric simple rhythms induced a strong sense of the beat, metric complex rhythms induced a weaker sense of the beat, nonmetric rhythms had no beat, and jittered nonmetric rhythms also had no beat as well as low temporal predictability. For both tasks, percussionists performed more accurately than nonpercussionists. In addition, both groups were better with musically plausible than implausible conditions. Overall, the percussionists' superior abilities to entrain to, and reproduce, rhythms generalized to musically implausible sequences.
Borelli, Paolo; Vedovello, Marcella; Braga, Massimiliano; Pederzoli, Massimo; Beretta, Sandro
2016-12-01
Musical hallucination is a disorder of complex sound processing of instrumental music, songs, choirs, chants, etc. The underlying pathologies include moderate to severe acquired hearing loss (the auditory equivalent of Charles Bonnet syndrome), psychiatric illnesses (depression, schizophrenia), drug intoxication (benzodiazepines, salicylate, pentoxifylline, propranolol), traumatic lesions along the acoustic pathways, and epilepsy. The hallucinations are most likely to begin late in life; 70% of patients are women. Musical hallucination has no known specific therapy. Treating the underlying cause is the most effective approach; neuroleptic and antidepressant medications have only rarely succeeded. Musical hallucination in epilepsy typically presents as simple partial seizures originating in the lateral temporal cortex. To our knowledge, no formal report of musical hallucination in the interictal state has been published before. In contrast, other interictal psychotic features are a relatively common complication, especially in patients with long-standing drug-resistant epilepsy. We describe a 62-year-old woman with a long history of mesial temporal lobe epilepsy whose musical hallucination was solely interictal. We speculate on the possible link between temporal epilepsy and her hallucination. We hypothesize that, as a result of her epileptic activity-induced damage, an imbalance developed between the excitatory and inhibitory projections connecting the mesial temporal cortex to the other auditory structures. These structures may have generated hyperactivity in the lateral temporal cortex through a "release" mechanism that eventually resulted in musical hallucination.
Sembajwe, Grace; Tveito, Torill Helene; Hopcia, Karen; Kenwood, Christopher; O'Day, Elizabeth Tucker; Stoddard, Anne M; Dennerlein, Jack T; Hashimoto, Dean; Sorensen, Glorian
2013-03-01
The aim of this study was to assess the relationship between psychosocial factors at work and multi-site musculoskeletal pain among patient care workers. In a survey of 1,572 workers from two hospitals, occupational psychosocial factors and health outcomes of workers with single and multi-site pain were evaluated using items from the Job Content Questionnaire that was designed to measure psychological demands, decision latitude, and social support. An adapted Nordic Questionnaire provided data on the musculoskeletal pain outcome. Covariates included body mass index, age, gender, and occupation. The analyses revealed statistically significant associations between psychosocial demands and multi-site musculoskeletal pain among patient care associates, nurses, and administrative personnel, both men and women. Supervisor support played a significant role for nurses and women. These results remained statistically significant after adjusting for covariates. These results highlight the associations between workplace psychosocial strain and multi-site musculoskeletal pain, setting the stage for future longitudinal explorations. Copyright 2013, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu
2017-09-01
An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply a multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. These results are mainly due to significant differences in the topographic factors over the large-scale area, human activities and climate variability. The multi-site method involves the division of the large watershed into smaller watersheds, and applying the calibrated parameters of the multi-site calibration to the entire watershed. It was anticipated that this case study could provide experience of multi-site calibration in a large-scale basin, and provide a good foundation for the simulation of other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.
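The essence of multi-site calibration is that one joint parameter set must fit every gauge, not just the basin outlet. SWAT itself cannot be embedded here, so the sketch below uses a deliberately toy two-parameter runoff model as a stand-in and minimizes the mean of (1 - Nash-Sutcliffe efficiency) across three hypothetical gauges; all model structure, gauge scalings, and bounds are assumptions made only to illustrate the objective.

    import numpy as np
    from scipy.optimize import differential_evolution

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency; 1.0 is a perfect fit."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def toy_model(params, rain):
        """Stand-in for SWAT: a two-parameter linear-reservoir runoff model."""
        k, loss = params
        q, store = [], 0.0
        for r in rain:
            store += max(r - loss, 0.0)
            out = k * store
            store -= out
            q.append(out)
        return np.array(q)

    def multi_site_objective(params, rain, observed_by_site):
        """Mean of (1 - NSE) across all monitoring sites, so one joint parameter
        set must fit every gauge rather than a single outlet."""
        sim = toy_model(params, rain)
        # The scale mimics differing drainage areas and is treated as known.
        return np.mean([1.0 - nse(obs, sim * scale) for scale, obs in observed_by_site])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        rain = rng.gamma(2.0, 2.0, size=365)
        truth = toy_model([0.3, 1.0], rain)
        # Three hypothetical gauges: scaled, noisy versions of the true series.
        sites = [(s, truth * s + rng.normal(0, 0.2, truth.size)) for s in (0.5, 1.0, 1.5)]
        result = differential_evolution(multi_site_objective,
                                        bounds=[(0.01, 1.0), (0.0, 3.0)],
                                        args=(rain, sites), seed=1, maxiter=50)
        print("calibrated parameters:", np.round(result.x, 3))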
Nelson, Geoffrey; Macnaughton, Eric; Goering, Paula; Dudley, Michael; O'Campo, Patricia; Patterson, Michelle; Piat, Myra; Prévost, Natasha; Strehlau, Verena; Vallée, Catherine
2013-06-01
This research focused on the relationships between a national team and five project sites across Canada in planning a complex, community intervention for homeless people with mental illness called At Home/Chez Soi, which is based on the Housing First model. The research addressed two questions: (a) what are the challenges in planning? and (b) what factors helped or hindered moving project planning forward? Using qualitative methods, 149 national, provincial, and local stakeholders participated in key informant or focus group interviews. We found that planning entails not only intervention and research tasks, but also relational processes that occur within an ecology of time, local context, and values. More specifically, the relationships between the national team and the project sites can be conceptualized as a collaborative process in which national and local partners bring different agendas to the planning process and must therefore listen to, negotiate, discuss, and compromise with one another. A collaborative process that involves power-sharing and having project coordinators at each site helped to bridge the differences between these two stakeholder groups, to find common ground, and to accomplish planning tasks within a compressed time frame. While local context and culture pushed towards unique adaptations of Housing First, the principles of the Housing First model provided a foundation for a common approach across sites and interventions. The implications of the findings for future planning and research of multi-site, complex, community interventions are noted.
Kaganovich, Natalya; Kim, Jihyun; Herring, Caryn; Schumaker, Jennifer; Macpherson, Megan; Weber-Fox, Christine
2013-04-01
Using electrophysiology, we have examined two questions in relation to musical training: namely, whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in timbre of the sounds. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials, while voice sounds occurred on 20% of trials. In other blocks, the reverse was true. Participants heard naturally recorded sounds in half of experimental blocks and their spectrally-rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 event-related potential component not only to vocal sounds but also to their never-before-heard spectrally-rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians' accuracy tended to suffer less from the change in timbre of the sounds, especially when deviants were musical notes. This behavioral finding was accompanied by a marginally larger re-orienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension. © 2013 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
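The kind of ERP measurement described (a larger N1 to vocal and spectrally rotated sounds) reduces to averaging baseline-corrected epochs per condition and summarizing the waveform in an early latency window. The sketch below is not the authors' pipeline; the channel, latency window, and simulated data are hypothetical.

```python
import numpy as np

def n1_mean_amplitude(epochs, times, window=(0.08, 0.12)):
    """Mean amplitude of the trial-averaged ERP within an N1 latency window.

    epochs: (n_trials, n_samples) baseline-corrected EEG from one channel, in microvolts.
    times:  (n_samples,) time in seconds relative to sound onset.
    """
    erp = epochs.mean(axis=0)                            # average across trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()                              # negative-going for N1

# Simulated data: 200 trials, 500 Hz sampling, epochs from -0.1 to 0.5 s.
fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
rng = np.random.default_rng(1)
epochs = rng.normal(0.0, 2.0, (200, times.size))
epochs += -3.0 * np.exp(-((times - 0.1) ** 2) / (2 * 0.015 ** 2))   # injected N1-like deflection
print(n1_mean_amplitude(epochs, times))
```

Comparing this summary value between conditions (e.g., natural vs. spectrally rotated voices) per participant is the quantity a group-level test would then operate on.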
Cross-modal associations between materic painting and classical Spanish music
Albertazzi, Liliana; Canal, Luisa; Micciolo, Rocco
2015-01-01
The study analyses the existence of cross-modal associations in the general population between a series of paintings and a series of clips of classical (guitar) music. Because of the complexity of the stimuli, the study differs from previous analyses conducted on the association between visual and auditory stimuli, which predominantly analyzed single tones and colors by means of psychophysical methods and forced choice responses. More recently, the relation between music and shape has been analyzed in terms of music visualization, or in relation to the role played by emotion in the association, and free response paradigms have also been accepted. In our study, in order to investigate what attributes may be responsible for the phenomenon of the association between visual and auditory stimuli, the clip/painting association was tested in two experiments: the first used the semantic differential on a unidimensional rating scale of adjectives; the second employed a specific methodology based on subjective perceptual judgments in first person account. Because of the complexity of the stimuli, it was decided to have the maximum possible uniformity of style, composition and musical color. The results show that multisensory features expressed by adjectives such as “quick,” “agitated,” and “strong,” and their antonyms “slow,” “calm,” and “weak” characterized both the visual and auditory stimuli, and that they may have had a role in the associations. The results also suggest that the main perceptual features responsible for the clip/painting associations were hue, lightness, timbre, and musical tempo. Contrary to what was expected, the musical mode usually related to feelings of happiness (major mode), or to feelings of sadness (minor mode), and spatial orientation (vertical and horizontal) did not play a significant role in the association. The consistency of the associations was shown when evaluated on the whole sample, and after considering the different backgrounds and expertise of the subjects. No substantial difference was found between expert and non-expert subjects. The methods used in the experiment (semantic differential and subjective judgments in first person account) corroborated the interpretation of the results as associations due to patterns of qualitative similarity present in stimuli of different sensory modalities and experienced as such by the subjects. The main result of the study consists in showing the existence of cross-modal associations between highly complex stimuli; furthermore, the second experiment employed a specific methodology based on subjective perceptual judgments. PMID:25954217
O'Hara, Ruth; Cassidy-Eagle, Erin L; Beaudreau, Sherry A; Eyler, Lisa T; Gray, Heather L; Giese-Davis, Janine; Hubbard, Jeffrey; Yesavage, Jerome A
2010-01-01
This report highlights the use of multisite training for psychiatry and psychology postdoctoral fellows developing careers in academic clinical research in the field of mental health. The objective is to describe a model of training for young investigators to establish independent academic clinical research careers, including (1) program structure and eligibility, (2) program goals and development of a multisite curriculum, (3) use of technology for implementing the program across multiple sites, and (4) advantages and challenges of this multisite approach. In 2000, in collaboration with the Veterans Affairs (VA) Mental Illness Research, Education and Clinical Centers (MIRECCs), the VA Office of Academic Affiliations launched the Special Fellowship Program in Advanced Psychiatry and Psychology. Each of the 10 currently participating VA sites across the United States is affiliated with a MIRECC and an academic medical institution. In the first five years of this fellowship program, 83 fellows (34 psychiatrists and 49 psychologists) have participated. The success of this multisite approach is evidenced by the 58 fellows who have already graduated from the program: 70% have entered academic clinical research positions, and over 25 have obtained independent extramural grant support from the VA or the National Institutes of Health. Multisite training results in a greater transfer of knowledge and capitalizes on the nationwide availability of experts, creating unique networking and learning opportunities for trainees. The VA's multisite fellowship program plays a valuable role in preparing substantial numbers of psychiatry and psychology trainees for a range of academic clinical research and leadership positions in the field of mental health.
Generaal, Ellen; Vogelzangs, Nicole; Macfarlane, Gary J; Geenen, Rinie; Smit, Johannes H; Penninx, Brenda W J H; Dekker, Joost
2014-07-09
Studies on hypothalamic-pituitary-adrenal axis (HPA-axis) function amongst patients with chronic pain show equivocal results and well-controlled cohort studies are rare in this field. The goal of our study was to examine whether HPA-axis dysfunction is associated with the presence and the severity of chronic multi-site musculoskeletal pain. Data are from the Netherlands Study of Depression and Anxiety including 1125 subjects with and without lifetime depressive and anxiety disorders. The Chronic Pain Grade questionnaire was used to determine the presence and severity of chronic multi-site musculoskeletal pain. Subjects were categorized into a chronic multi-site musculoskeletal pain group (n = 471) and a control group (n = 654). Salivary cortisol samples were collected to assess HPA-axis function (awakening level, 1-h awakening response, evening level, diurnal slope and post-dexamethasone level). In comparison with the control group, subjects with chronic multi-site musculoskeletal pain showed significantly lower cortisol level at awakening, lower evening level and a blunted diurnal slope. Lower cortisol level at awakening and a blunted diurnal slope appeared to be restricted to those without depressive and/or anxiety disorders, who also showed a lower 1-h awakening response. Our results suggest hypocortisolemia in chronic multi-site musculoskeletal pain. However, if chronic pain is accompanied by a depressive or anxiety disorder, typically related to hypercortisolemia, the association between cortisol levels and chronic multi-site musculoskeletal pain appears to be partly masked. Future studies should take psychopathology into account when examining HPA-axis function in chronic pain.
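The HPA-axis indices named here (awakening level, 1-h awakening response, evening level, diurnal slope) are simple arithmetic summaries of the salivary samples. The sketch below shows one common way such indices are computed; the sample values, units, and the assumed 14-hour awakening-to-evening interval are hypothetical, and the post-dexamethasone level is omitted.

```python
def hpa_indices(cort_awake, cort_awake_1h, cort_evening, hours_awake_to_evening=14.0):
    """Summary indices for salivary cortisol samples (e.g., nmol/L).

    Assumed inputs: cortisol at awakening, 1 h after awakening, and in the evening.
    """
    awakening_response = cort_awake_1h - cort_awake                       # 1-h awakening response
    diurnal_slope = (cort_evening - cort_awake) / hours_awake_to_evening  # change per hour
    return {
        "awakening_level": cort_awake,
        "awakening_response_1h": awakening_response,
        "evening_level": cort_evening,
        "diurnal_slope": diurnal_slope,   # more negative = steeper daytime decline
    }

print(hpa_indices(16.0, 22.0, 4.0))
```

A blunted diurnal slope, as reported for the chronic pain group, corresponds to a slope value closer to zero than in controls.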
Rahbar, Mohammad H.; Wyatt, Gwen; Sikorskii, Alla; Victorson, David; Ardjomand-Hessabi, Manouchehr
2011-01-01
Background Multisite randomized clinical trials allow for increased research collaboration among investigators and expedite data collection efforts. As a result, government funding agencies typically look favorably upon this approach. As the field of complementary and alternative medicine (CAM) continues to evolve, so do increased calls for the use of more rigorous study design and trial methodologies, which can present challenges for investigators. Purpose To describe the processes involved in the coordination and management of a multisite randomized clinical trial of a CAM intervention. Methods Key aspects related to the coordination and management of a multisite CAM randomized clinical trial are presented, including organizational and site selection considerations, recruitment concerns and issues related to data collection and randomization to treatment groups. Management and monitoring of data, as well as quality assurance procedures are described. Finally, a real world perspective is shared from a recently conducted multisite randomized clinical trial of reflexology for women diagnosed with advanced breast cancer. Results The use of multiple sites in the conduct of CAM-based randomized clinical trials can provide an efficient, collaborative and robust approach to study coordination and data collection that maximizes efficiency and ensures the quality of results. Conclusions Multisite randomized clinical trial designs can offer the field of CAM research a more standardized and efficient approach to examine the effectiveness of novel therapies and treatments. Special attention must be given to intervention fidelity, consistent data collection and ensuring data quality. Assessment and reporting of quantitative indicators of data quality should be required. PMID:21664296
Different amplitude and time distribution of the sound of light and classical music
NASA Astrophysics Data System (ADS)
Diodati, P.; Piazza, S.
2000-08-01
Several pieces of different musical genres were studied by measuring A(t), the output amplitude of a peak detector driven by the electric signal arriving at the loudspeaker. Having fixed a suitable threshold Ā, we considered N(A), the number of times that A(t) > Ā (each such occurrence we named an event), and N(t), the distribution of times t between two consecutive events. Some N(A) and N(t) distributions are displayed in the reported logarithmic plots, showing that jazz, pop, rock and other popular rhythms have noise-like distributions, while classical pieces of music are characterized by more complex statistics. We pointed out the extraordinary case of the aria "La calunnia è un venticello", where the words describe an avalanche or seismic process (calumny), and Rossini's music shows N(A) and N(t) distributions typical of earthquakes.
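A minimal sketch of the event statistics described above: threshold an amplitude envelope, treat each upward crossing as an event, and collect the event amplitudes and the waiting times between consecutive events for log-log histogramming. The envelope below is synthetic smoothed noise, not an actual recording.

```python
import numpy as np

def event_statistics(amplitude, threshold, fs):
    """Event amplitudes and inter-event waiting times for A(t) > threshold."""
    above = amplitude > threshold
    prev = np.concatenate(([False], above[:-1]))
    onsets = np.flatnonzero(above & ~prev)        # samples where an event begins
    event_amps = amplitude[onsets]
    waiting_times = np.diff(onsets) / fs          # seconds between consecutive events
    return event_amps, waiting_times

# Synthetic envelope: rectified, smoothed noise standing in for the loudspeaker signal.
fs = 1000
rng = np.random.default_rng(2)
envelope = np.abs(np.convolve(rng.normal(size=60 * fs), np.ones(50) / 50, mode="same"))
amps, waits = event_statistics(envelope, threshold=np.percentile(envelope, 90), fs=fs)

# Log-log histograms corresponding to N(A) and N(t).
n_A, edges_A = np.histogram(amps, bins=np.geomspace(amps.min(), amps.max(), 30))
n_t, edges_t = np.histogram(waits, bins=np.geomspace(waits.min(), waits.max(), 30))
print(len(amps), n_A[:5], n_t[:5])
```

Plotting n_A against the bin centers of edges_A (and likewise for n_t) on log-log axes is what distinguishes noise-like from power-law-like event statistics.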
Organists and organ music composers.
Foerch, Christian; Hennerici, Michael G
2015-01-01
Clinical case reports of patients with exceptional musical talent and education provide clues as to how the brain processes musical ability and aptitude. In this chapter, selected examples from famous and unknown organ players/composers are presented to demonstrate the complexity of modified musical performances as well as the capacities of the brain to preserve artistic abilities: both authors are active organists and academic neurologists with strong clinical experience, practice, and knowledge about the challenges to play such an outstanding instrument and share their interest to explore potentially instrument-related phenomena of brain modulation in specific transient or permanent impairments. We concentrate on the sites of lesions, suggested pathophysiology, separate positive (e.g., seizures, visual or auditory hallucinations, or synesthesia [an involuntary perception produced by stimulation of another sense]) and negative phenomena (e.g., amusia, aphasia, neglect, or sensory-motor deficits) and particularly address aspects of recent concepts of temporary and permanent network disorders. © 2015 Elsevier B.V. All rights reserved.
Pitch Perception in the First Year of Life, a Comparison of Lexical Tones and Musical Pitch.
Chen, Ao; Stevens, Catherine J; Kager, René
2017-01-01
Pitch variation is pervasive in speech, regardless of the language to which infants are exposed. Lexical tone is influenced by general sensitivity to pitch. We examined whether lexical tone perception may develop in parallel with perception of pitch in other cognitive domains, namely music. Using a visual fixation paradigm, 101 4- and 12-month-old Dutch infants were tested on their discrimination of Chinese rising and dipping lexical tones as well as comparable three-note musical pitch contours. The 4-month-old infants failed to show a discrimination effect in either condition, whereas the 12-month-old infants succeeded in both conditions. These results suggest that lexical tone perception may reflect and relate to general pitch perception abilities, which may serve as a basis for developing more complex language and musical skills.
Bashwiner, David M.; Wertz, Christopher J.; Flores, Ranee A.; Jung, Rex E.
2016-01-01
Creative behaviors are among the most complex that humans engage in, involving not only highly intricate, domain-specific knowledge and skill, but also domain-general processing styles and the affective drive to create. This study presents structural imaging data indicating that musically creative people (as indicated by self-report) have greater cortical surface area or volume in a) regions associated with domain-specific higher-cognitive motor activity and sound processing (dorsal premotor cortex, supplementary and pre-supplementary motor areas, and planum temporale), b) domain-general creative-ideation regions associated with the default mode network (dorsomedial prefrontal cortex, middle temporal gyrus, and temporal pole), and c) emotion-related regions (orbitofrontal cortex, temporal pole, and amygdala). These findings suggest that domain-specific musical expertise, default-mode cognitive processing style, and intensity of emotional experience might all coordinate to motivate and facilitate the drive to create music. PMID:26888383
Experimenting with musical intervals
NASA Astrophysics Data System (ADS)
Lo Presto, Michael C.
2003-07-01
When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
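The repetition-frequency calculation described here can be worked through numerically: for forks whose frequencies form a simple ratio, the fundamental of the common harmonic series is their greatest common divisor, so 330 Hz and 440 Hz (a 3:4 perfect fourth) repeat at 110 Hz. The sketch below uses synthetic sine waves; the sampling rate is chosen so that one repetition period is a whole number of samples.

```python
import numpy as np
from math import gcd

f1, f2 = 330, 440                 # Hz; a just perfect fourth (frequency ratio 3:4)
f0 = gcd(f1, f2)                  # repetition frequency of the combined wave: 110 Hz

fs = 44000                        # chosen so that fs / f0 is an integer
t = np.arange(0, 0.05, 1 / fs)
wave = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The superposed wave repeats every 1/f0 seconds; verify by comparing period-shifted copies.
period = fs // f0                 # 400 samples
print(f0, np.allclose(wave[:-period], wave[period:]))
```

The captured waveform from real forks would be compared against this predicted 110 Hz repetition rate, as the experiment suggests.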
Multi-site precipitation downscaling using a stochastic weather generator
NASA Astrophysics Data System (ADS)
Chen, Jie; Chen, Hua; Guo, Shenglian
2018-03-01
Statistical downscaling is an efficient way to solve the spatiotemporal mismatch between climate model outputs and the data requirements of hydrological models. However, the most commonly-used downscaling method only produces climate change scenarios for a specific site or watershed average, which is unable to drive distributed hydrological models to study the spatial variability of climate change impacts. By coupling a single-site downscaling method and a multi-site weather generator, this study proposes a multi-site downscaling approach for hydrological climate change impact studies. Multi-site downscaling is done in two stages. The first stage involves spatially downscaling climate model-simulated monthly precipitation from grid scale to a specific site using a quantile mapping method, and the second stage involves temporally disaggregating monthly precipitation to daily values by adjusting the parameters of a multi-site weather generator. The inter-station correlation is specifically considered using a distribution-free approach along with an iterative algorithm. The performance of the downscaling approach is illustrated using a 10-station watershed as an example. The precipitation time series derived from the National Centers for Environmental Prediction (NCEP) reanalysis dataset is used as the climate model simulation. The precipitation time series of each station is divided into 30 odd-numbered years for calibration and 29 even-numbered years for validation. Several metrics, including the frequencies of wet and dry spells and statistics of the daily, monthly and annual precipitation are used as criteria to evaluate the multi-site downscaling approach. The results show that the frequencies of wet and dry spells are well reproduced for all stations. In addition, the multi-site downscaling approach performs well with respect to reproducing precipitation statistics, especially at monthly and annual timescales. The remaining biases mainly result from the non-stationarity of NCEP precipitation. Overall, the proposed approach is efficient for generating multi-site climate change scenarios that can be used to investigate the spatial variability of climate change impacts on hydrology.
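A minimal sketch of the first stage only, empirical quantile mapping of model-simulated monthly precipitation onto a station's observed distribution; the weather-generator disaggregation and the inter-station correlation adjustment are not reproduced here, and all series are synthetic.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: transfer model values onto the observed distribution."""
    # empirical CDF position of each value within the historical model record
    q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    # map those quantiles onto the station's observed distribution
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(3)
model_hist = rng.gamma(2.0, 30.0, 360)     # 30 years x 12 months of grid-cell values (mm)
obs_hist = rng.gamma(2.0, 45.0, 360)       # wetter station climatology, same period (mm)
model_future = rng.gamma(2.0, 33.0, 120)   # model values to be downscaled to the station
print(quantile_map(model_hist, obs_hist, model_future)[:5])
```

Repeating this mapping separately for each station and month gives the monthly inputs that the multi-site weather generator would then disaggregate to daily values.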
2013-01-01
Background Daily pain and multi-site pain are both associated with reduction in work ability and health-related quality of life (HRQoL) among adults. However, no population-based studies have yet investigated the prevalence of daily and multi-site pain among adolescents and how these are associated with respondent characteristics. The purpose of this study was to investigate the prevalence of self-reported daily and multi-site pain among adolescents aged 12–19 years and associations of almost daily pain and multi-site pain with respondent characteristics (sex, age, body mass index, HRQoL and sports participation). Methods A population-based cross-sectional study was conducted among 4,007 adolescents aged 12–19 years in Denmark. Adolescents answered an online questionnaire during physical education lessons. The questionnaire contained a mannequin divided into 12 regions on which the respondents indicated their current pain sites and pain frequency (rarely, monthly, weekly, more than once per week, almost daily pain), characteristics, sports participation and HRQoL measured by the EuroQoL 5D. Multivariate regression was used to calculate the odds ratio for the association between almost daily pain, multi-site pain and respondent characteristics. Results The response rate was 73.7%. A total of 2,953 adolescents (62% females) answered the questionnaire. 33.3% reported multi-site pain (pain in >1 region) while 19.8% reported almost daily pain. 61% reported current pain in at least one region with knee and back pain being the most common sites. Female sex (OR: 1.35-1.44) and a high level of sports participation (OR: 1.51-2.09) were associated with increased odds of having almost daily pain and multi-site pain. Better EQ-5D score was associated with decreased odds of having almost daily pain or multi-site pain (OR: 0.92-0.94). Conclusion In this population-based cohort of school-attending Danish adolescents, nearly two out of three reported current pain and, on average, one out of three reported pain in more than one body region. Female sex and a high level of sports participation were associated with increased odds of having almost daily pain and multi-site pain. The study highlights an important health issue that calls for investigations to improve our understanding of adolescent pain and our capacity to prevent and treat this condition. PMID:24252440
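The odds ratios reported above come from multivariate (logistic) regression of the pain outcomes on respondent characteristics. The sketch below illustrates that kind of model on simulated respondent-level data; the variable names, coding, and effect sizes are hypothetical, and the statsmodels package is assumed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical respondent-level data standing in for the survey variables.
rng = np.random.default_rng(4)
n = 2953
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.integers(12, 20, n),
    "bmi": rng.normal(21.0, 3.0, n),
    "eq5d": rng.uniform(0.5, 1.0, n),            # HRQoL index
    "high_sport": rng.integers(0, 2, n),
})
# Simulate the binary outcome with some built-in associations (illustrative only).
logit = -1.0 + 0.35 * df.female + 0.5 * df.high_sport - 1.5 * (df.eq5d - 0.8)
df["multisite_pain"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["female", "age", "bmi", "eq5d", "high_sport"]])
fit = sm.Logit(df["multisite_pain"], X).fit(disp=False)
print(np.exp(fit.params))        # exponentiated coefficients = adjusted odds ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals on the OR scale
```

Exponentiating the fitted coefficients is what turns the regression output into adjusted odds ratios of the form quoted in the abstract.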
[Music therapy as a part of complex healing].
Sliwka, Agnieszka; Jarosz, Anna; Nowobilski, Roman
2006-10-01
Music therapy is a method that takes advantage of the therapeutic influence of music on the psychological and somatic spheres of the human body. Its therapeutic properties are increasingly being used. Current scientific research has demonstrated its modifying influence on the vegetative, circulatory, respiratory and endocrine systems. Work devoted to the effects of music on patients' psychological sphere has also confirmed that it reduces psychopathological symptoms (anxiety and depression), improves self-rating, influences sleep quality and sleep disorders, reduces pain, and improves psychological resilience as well as patients' openness, readiness, and co-operation in the treatment process. Music therapy is treated as a method that complements conventional treatment and forms part of an integral whole together with physiotherapy, kinesitherapy and recuperation.
What does music express? Basic emotions and beyond
Juslin, Patrik N.
2013-01-01
Numerous studies have investigated whether music can reliably convey emotions to listeners, and—if so—what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of “multiple layers” of musical expression of emotions. The “core” layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this “core” layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions—though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions. PMID:24046758
Autism, emotion recognition and the mirror neuron system: the case of music.
Molnar-Szakacs, Istvan; Wang, Martha J; Laugeson, Elizabeth A; Overy, Katie; Wu, Wai-Ling; Piggot, Judith
2009-11-16
Understanding emotions is fundamental to our ability to navigate and thrive in a complex world of human social interaction. Individuals with Autism Spectrum Disorders (ASD) are known to experience difficulties with the communication and understanding of emotion, such as the nonverbal expression of emotion and the interpretation of emotions of others from facial expressions and body language. These deficits often lead to loneliness and isolation from peers, and social withdrawal from the environment in general. In the case of music however, there is evidence to suggest that individuals with ASD do not have difficulties recognizing simple emotions. In addition, individuals with ASD have been found to show normal and even superior abilities with specific aspects of music processing, and often show strong preferences towards music. It is possible these varying abilities with different types of expressive communication may be related to a neural system referred to as the mirror neuron system (MNS), which has been proposed as deficient in individuals with autism. Music's power to stimulate emotions and intensify our social experiences might activate the MNS in individuals with ASD, and thus provide a neural foundation for music as an effective therapeutic tool. In this review, we present literature on the ontogeny of emotion processing in typical development and in individuals with ASD, with a focus on the case of music.
Data-driven analysis of functional brain interactions during free listening to music and speech.
Fang, Jun; Hu, Xintao; Han, Junwei; Jiang, Xi; Zhu, Dajiang; Guo, Lei; Liu, Tianming
2015-06-01
Natural stimulus functional magnetic resonance imaging (N-fMRI) such as fMRI acquired when participants were watching video streams or listening to audio streams has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic natural stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses using N-fMRI by measuring the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions when multiple subjects were listening to music and speech in multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems including attention, memory, auditory/language, emotion, and action networks are among the most relevant brain systems involved in classic music, pop music and speech differentiation. Our study provides an alternative approach to investigating the human brain's mechanism in comprehension of complex natural music and speech.
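Not the authors' network-construction pipeline: the sketch below only illustrates the general recipe of turning per-subject regional time series into functional-interaction features (pairwise correlations) and training a classifier to separate the music and speech listening conditions. The subject count, region count, and signals are simulated, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def connectivity_features(timeseries):
    """Upper triangle of the region-by-region correlation matrix as a feature vector.

    timeseries: array (n_timepoints, n_regions) for one subject and one condition.
    """
    corr = np.corrcoef(timeseries.T)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Hypothetical dataset: 20 subjects x 2 conditions (0 = music, 1 = speech), 50 regions.
rng = np.random.default_rng(5)
X, y = [], []
for subj in range(20):
    for label in (0, 1):
        ts = rng.normal(size=(200, 50))
        ts[:, :10] += label * 0.5 * rng.normal(size=(200, 1))   # condition-dependent coupling
        X.append(connectivity_features(ts))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = LinearSVC(dual=False)
print(cross_val_score(clf, X, y, cv=5).mean())   # chance level is 0.5 for balanced labels
```

Inspecting which connectivity features carry the largest classifier weights is one simple way to flag the interactions that are both consistent and discriminative across subjects.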
Musical rhythm and reading development: does beat processing matter?
Ozernov-Palchik, Ola; Patel, Aniruddh D
2018-05-20
There is mounting evidence for links between musical rhythm processing and reading-related cognitive skills, such as phonological awareness. This may be because music and speech are rhythmic: both involve processing complex sound sequences with systematic patterns of timing, accent, and grouping. Yet, there is a salient difference between musical and speech rhythm: musical rhythm is often beat-based (based on an underlying grid of equal time intervals), while speech rhythm is not. Thus, the role of beat-based processing in the reading-rhythm relationship is not clear. Is there a distinct relation between beat-based processing mechanisms and reading-related language skills, or is the rhythm-reading link entirely due to shared mechanisms for processing nonbeat-based aspects of temporal structure? We discuss recent evidence for a distinct link between beat-based processing and early reading abilities in young children, and suggest experimental designs that would allow one to further methodically investigate this relationship. We propose that beat-based processing taps into a listener's ability to use rich contextual regularities to form predictions, a skill important for reading development. © 2018 New York Academy of Sciences.
The Year in Cognitive Neuroscience
Fitch, W Tecumseh; Martins, Mauricio D
2014-01-01
Sixty years ago, Karl Lashley suggested that complex action sequences, from simple motor acts to language and music, are a fundamental but neglected aspect of neural function. Lashley demonstrated the inadequacy of then-standard models of associative chaining, positing a more flexible and generalized “syntax of action” necessary to encompass key aspects of language and music. He suggested that hierarchy in language and music builds upon a more basic sequential action system, and provided several concrete hypotheses about the nature of this system. Here, we review a diverse set of modern data concerning musical, linguistic, and other action processing, finding them largely consistent with an updated neuroanatomical version of Lashley's hypotheses. In particular, the lateral premotor cortex, including Broca's area, plays important roles in hierarchical processing in language, music, and at least some action sequences. Although the precise computational function of the lateral prefrontal regions in action syntax remains debated, Lashley's notion—that this cortical region implements a working-memory buffer or stack scannable by posterior and subcortical brain regions—is consistent with considerable experimental data. PMID:24697242
The use of music therapy to address the suffering in advanced cancer pain.
Magill, L
2001-01-01
Pain associated with advanced cancer is multifaceted and complex, and is influenced by physiological, psychological, social, and spiritual phenomena. Suffering may be identified in patients when pain is associated with impending loss, increased dependency, and an altered understanding of one's existential purpose. Comprehensive pain management aims to address problematic symptoms in order to improve comfort, peace of mind, and quality of life. Music therapy is a treatment modality of great diversity that can offer a range of benefits to patients with advanced cancer pain and symptoms of suffering. Music therapists perform comprehensive assessments that include reviews of social, cultural, and medical history; current medical status; and the ways in which emotions are affecting the pain. A variety of music therapy techniques may be used, including vocal techniques, listening, and instrumental techniques. These techniques provide opportunities for exploration of the feelings and issues compounding the pain experience. Case examples are presented to demonstrate the "lifting", "transporting", and "bringing of peace" qualities of music that offer patients moments of release, reflection, and renewal.
Improvisation and the self-organization of multiple musical bodies.
Walton, Ashley E; Richardson, Michael J; Langland-Hassan, Peter; Chemero, Anthony
2015-01-01
Understanding everyday behavior relies heavily upon understanding our ability to improvise, how we are able to continuously anticipate and adapt in order to coordinate with our environment and others. Here we consider the ability of musicians to improvise, where they must spontaneously coordinate their actions with co-performers in order to produce novel musical expressions. Investigations of this behavior have traditionally focused on describing the organization of cognitive structures. The focus, here, however, is on the ability of the time-evolving patterns of inter-musician movement coordination as revealed by the mathematical tools of complex dynamical systems to provide a new understanding of what potentiates the novelty of spontaneous musical action. We demonstrate this approach through the application of cross wavelet spectral analysis, which isolates the strength and patterning of the behavioral coordination that occurs between improvising musicians across a range of nested time-scales. Revealing the sophistication of the previously unexplored dynamics of movement coordination between improvising musicians is an important step toward understanding how creative musical expressions emerge from the spontaneous coordination of multiple musical bodies.
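A minimal sketch of a cross-wavelet computation of the kind referred to above: take a complex Morlet continuous wavelet transform of each musician's movement series and multiply one by the complex conjugate of the other, so that magnitude indexes coordination strength and phase indexes relative timing across nested time scales. The movement signals, sampling rate, and frequency grid below are hypothetical, and this is not the authors' exact analysis.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w=6.0):
    """Complex Morlet continuous wavelet transform of a 1-D signal."""
    x = np.asarray(x, float) - np.mean(x)
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        s = w * fs / (2 * np.pi * f)                       # scale in samples
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(2j * np.pi * f * t / fs) * np.exp(-t ** 2 / (2 * s ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
        out[i] = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
    return out

def cross_wavelet(x1, x2, fs, freqs):
    """Cross-wavelet spectrum: |.| gives shared power, angle gives relative phase."""
    return morlet_cwt(x1, fs, freqs) * np.conj(morlet_cwt(x2, fs, freqs))

# Hypothetical movement data: two musicians loosely coordinated around 2 Hz.
fs = 100
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(6)
m1 = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.normal(size=t.size)
m2 = np.sin(2 * np.pi * 2.0 * t + 0.3) + 0.5 * rng.normal(size=t.size)

freqs = np.geomspace(0.25, 8.0, 40)
W = cross_wavelet(m1, m2, fs, freqs)
print(freqs[np.abs(W).mean(axis=1).argmax()])   # should sit near the shared 2 Hz rhythm
```

Time-averaging |W| per frequency summarizes coordination strength at each time scale, while np.angle(W) tracks which performer is leading at that scale.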
Jazz drummers recruit language-specific areas for the processing of rhythmic structure.
Herdener, Marcus; Humbel, Thierry; Esposito, Fabrizio; Habermeyer, Benedikt; Cattapan-Ludewig, Katja; Seifritz, Erich
2014-03-01
Rhythm is a central characteristic of music and speech, the most important domains of human communication using acoustic signals. Here, we investigated how rhythmical patterns in music are processed in the human brain, and, in addition, evaluated the impact of musical training on rhythm processing. Using fMRI, we found that deviations from a rule-based regular rhythmic structure activated the left planum temporale together with Broca's area and its right-hemispheric homolog across subjects, that is, a network also crucially involved in the processing of harmonic structure in music and the syntactic analysis of language. Comparing the BOLD responses to rhythmic variations between professional jazz drummers and musical laypersons, we found that only highly trained rhythmic experts show additional activity in left-hemispheric supramarginal gyrus, a higher-order region involved in processing of linguistic syntax. This suggests an additional functional recruitment of brain areas usually dedicated to complex linguistic syntax processing for the analysis of rhythmical patterns only in professional jazz drummers, who are especially trained to use rhythmical cues for communication.
Ethics Review for a Multi-Site Project Involving Tribal Nations in the Northern Plains.
Angal, Jyoti; Petersen, Julie M; Tobacco, Deborah; Elliott, Amy J
2016-04-01
Increasingly, Tribal Nations are forming ethics review panels, which function separately from institutional review boards (IRBs). The emergence of strong community representation coincides with a widespread effort supported by the U.S. Department of Health & Human Services and other federal agencies to establish a single IRB for all multi-site research. This article underscores the value of a tribal ethics review board and describes the tribal oversight for the Safe Passage Study, a multi-site, community-based project in the Northern Plains. Our experience demonstrates the benefits of tribal ethics review and makes a strong argument for including tribal oversight in future regulatory guidance for multi-site, community-based research. © The Author(s) 2016.
Hunting for the beat in the body: on period and phase locking in music-induced movement.
Burger, Birgitta; Thompson, Marc R; Luck, Geoff; Saarikallio, Suvi H; Toiviainen, Petri
2014-01-01
Music has the capacity to induce movement in humans. Such responses during music listening are usually spontaneous and range from tapping to full-body dancing. However, it is still unclear how humans embody musical structures to facilitate entrainment. This paper describes two experiments, one dealing with period locking to different metrical levels in full-body movement and its relationships to beat- and rhythm-related musical characteristics, and the other dealing with phase locking in the more constrained condition of sideways swaying motions. In Experiment 1, it was expected that music with clear and strong beat structures would facilitate more period-locked movement; in Experiment 2, participants' swaying movements were expected to show a common phase relationship with the musical beat. In both experiments optical motion capture was used to record participants' movements. In Experiment 1 a window-based period-locking probability index related to four metrical levels was established, based on acceleration data in three dimensions. Subsequent correlations between this index and musical characteristics of the stimuli revealed pulse clarity to be related to periodic movement at the tactus level, and low frequency flux to mediolateral and anteroposterior movement at both tactus and bar levels. At faster tempi higher metrical levels became more apparent in participants' movement. Experiment 2 showed that about half of the participants showed a stable phase relationship between movement and beat, with superior-inferior movement most often being synchronized to the tactus level, whereas mediolateral movement was rather synchronized to the bar level. However, the relationship between movement phase and beat locations was not consistent between participants, as the beat locations occurred at different phase angles of their movements. The results imply that entrainment to music is a complex phenomenon, involving the whole body and occurring at different metrical levels.
Dynamic Reconfiguration of the Supplementary Motor Area Network during Imagined Music Performance
Tanaka, Shoji; Kirino, Eiji
2017-01-01
The supplementary motor area (SMA) has been shown to be the center for motor planning and is active during music listening and performance. However, limited data exist on the role of the SMA in music. Music performance requires complex information processing in auditory, visual, spatial, emotional, and motor domains, and this information is integrated for the performance. We hypothesized that the SMA is engaged in multimodal integration of information, distributed across several regions of the brain to prepare for ongoing music performance. To test this hypothesis, functional networks involving the SMA were extracted from functional magnetic resonance imaging (fMRI) data that were acquired from musicians during imagined music performance and during the resting state. Compared with the resting condition, imagined music performance increased connectivity of the SMA with widespread regions in the brain including the sensorimotor cortices, parietal cortex, posterior temporal cortex, occipital cortex, and inferior and dorsolateral prefrontal cortex. Increased connectivity of the SMA with the dorsolateral prefrontal cortex suggests that the SMA is under cognitive control, while increased connectivity with the inferior prefrontal cortex suggests the involvement of syntax processing. Increased connectivity with the parietal cortex, posterior temporal cortex, and occipital cortex likely serves the integration of spatial, emotional, and visual information. Finally, increased connectivity with the sensorimotor cortices was potentially involved in the translation of thought planning into motor programs. Therefore, the reconfiguration of the SMA network observed in this study is considered to reflect the multimodal integration required for imagined and actual music performance. We propose that the SMA network constructs “the internal representation of music performance” by integrating multimodal information required for the performance. PMID:29311870
Does Music Training Enhance Literacy Skills? A Meta-Analysis
Gordon, Reyna L.; Fehd, Hilda M.; McCandliss, Bruce D.
2015-01-01
Children's engagement in music practice is associated with enhancements in literacy-related language skills, as demonstrated by multiple reports of correlation across these two domains. Training studies have tested whether engaging in music training directly transfers benefit to children's literacy skill development. Results of such studies, however, are mixed. Interpretation of these mixed results is made more complex by the fact that a wide range of literacy-related outcome measures are used across these studies. Here, we address these challenges via a meta-analytic approach. A comprehensive literature review of peer-reviewed music training studies was built around key criteria needed to test the direct transfer hypothesis, including: (a) inclusion of music training vs. control groups; (b) inclusion of pre- vs. post-comparison measures, and (c) indication that reading instruction was held constant across groups. Thirteen studies were identified (n = 901). Two classes of outcome measures emerged with sufficient overlap to support meta-analysis: phonological awareness and reading fluency. Hours of training, age, and type of control intervention were examined as potential moderators. Results supported the hypothesis that music training leads to gains in phonological awareness skills. The effect isolated by contrasting gains in music training vs. gains in control was small relative to the large variance in these skills (d = 0.2). Interestingly, analyses revealed that transfer effects for rhyming skills tended to grow stronger with increased hours of training. In contrast, no significant aggregate transfer effect emerged for reading fluency measures, despite some studies reporting large training effects. The potential influence of other study design factors was considered, including intervention design, IQ, and SES. Results are discussed in the context of emerging findings that music training may enhance literacy development via changes in brain mechanisms that support both music and language cognition. PMID:26648880
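A pooled estimate such as d = 0.2 is the kind of number produced by inverse-variance aggregation of per-study standardized mean differences in gains. The sketch below shows a fixed-effect version of that computation; the per-study effects and standard errors are made up for illustration and are not the data from this meta-analysis.

```python
import numpy as np

def pooled_effect(d, se):
    """Fixed-effect inverse-variance pooling of standardized mean differences."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1.0 / se ** 2                        # inverse-variance weights
    d_pooled = np.sum(w * d) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)
    return d_pooled, ci

# Hypothetical per-study effects: gain in the music-training group minus gain in control.
d = [0.35, 0.10, 0.25, 0.05, 0.30]
se = [0.18, 0.15, 0.20, 0.12, 0.22]
print(pooled_effect(d, se))
```

A random-effects variant would additionally estimate between-study heterogeneity and inflate the weights' denominators accordingly, which is a common design choice when study populations differ.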
Tuning Features of Chinese Folk Song Singing: A Case Study of Hua'er Music.
Yang, Yang; Welch, Graham; Sundberg, Johan; Himonides, Evangelos
2015-07-01
The learning and teaching of different singing styles, such as operatic and Chinese folk singing, was often found to be very challenging in professional music education because of the complexity of varied musical properties and vocalizations. By studying the acoustical and musical parameters of the singing voice, this study identified distinctive tuning characteristics of a particular folk music in China (Hua'er music) to inform ineffective folk singing practices, which have been hampered by the neglect of inherent tuning issues in the music. Thirteen unaccompanied folk song examples from four folk singers were digitally audio recorded in a sound studio. Using an analyzing toolkit consisting of Praat, PeakFit, and MS Excel, the fundamental frequencies (F0) of these song examples were extracted into sets of the most frequently used "anchor pitches," which were further divided into 253 F0 clusters. The interval structures of anchor pitches within each song were analyzed and then compared across 13 examples providing parameters that indicate the tuning preference of this particular singing style. The data analyses demonstrated that all singers used a tuning pattern consisting of five major anchor pitches suggesting a nonequal-tempered bias in singing. This partly verified the pentatonic scale proposed in previous empirical research but also suggested a potential misunderstanding of the studied folk music scale that failed to take intrinsic tuning issues into consideration. This study suggests that, in professional music training, any tuning strategy should be considered in terms of the reference pitch and likely tuning systems. Any accompanying instruments would need to be tuned to match the underlying tuning bias. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Correlated microtiming deviations in jazz and rock music
Sogorski, Mathias; Geisel, Theo
2018-01-01
Musical rhythms performed by humans typically show temporal fluctuations. While they have been characterized in simple rhythmic tasks, it is an open question what the nature of temporal fluctuations is when several musicians perform music jointly in all its natural complexity. To study such fluctuations in over 100 original jazz and rock/pop recordings played with and without metronome, we developed a semi-automated workflow allowing the extraction of cymbal beat onsets with millisecond precision. Analyzing the inter-beat interval (IBI) time series revealed evidence for two long-range correlated processes characterized by power laws in the IBI power spectral densities. One process dominates on short timescales (t < 8 beats) and reflects microtiming variability in the generation of single beats. The other dominates on longer timescales and reflects slow tempo variations. Whereas the latter did not show differences between musical genres (jazz vs. rock/pop), the process on short timescales showed higher variability for jazz recordings, indicating that jazz makes stronger use of microtiming fluctuations within a measure than rock/pop. Our results elucidate principles of rhythmic performance and can inspire algorithms for artificial music generation. By studying microtiming fluctuations in original music recordings, we bridge the gap between minimalistic tapping paradigms and expressive rhythmic performances. PMID:29364920
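A minimal sketch of the spectral characterization described: estimate the power spectral density of an inter-beat-interval series (indexed by beat number) and read off a power-law exponent as the slope of the log-log spectrum over a chosen band. The IBI series below is simulated (white microtiming noise plus a slowly drifting tempo), onset extraction from audio is not reproduced, and SciPy's Welch estimator is assumed.

```python
import numpy as np
from scipy.signal import welch

def psd_powerlaw_exponent(ibi, fband=(0.01, 0.4)):
    """Slope of log power vs. log frequency for an inter-beat-interval series.

    The series is indexed by beat number, so 'frequency' is in cycles per beat.
    """
    f, pxx = welch(ibi - np.mean(ibi), fs=1.0, nperseg=min(256, len(ibi)))
    mask = (f >= fband[0]) & (f <= fband[1])
    slope, _ = np.polyfit(np.log10(f[mask]), np.log10(pxx[mask]), 1)
    return slope          # ~0 for white noise, approaching -1 for 1/f-like fluctuations

# Simulated IBIs: white microtiming noise on top of a slowly drifting tempo.
rng = np.random.default_rng(7)
n = 2048
drift = np.cumsum(rng.normal(0, 0.2, n))
drift -= np.linspace(drift[0], drift[-1], n)       # remove the overall linear trend
ibi = 500 + 5 * rng.normal(size=n) + 0.5 * drift   # milliseconds per beat
print(psd_powerlaw_exponent(ibi))
```

Fitting separate slopes below and above roughly one cycle per eight beats would mirror the two-process picture (fast microtiming vs. slow tempo drift) described in the abstract.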
Bones, Oliver; Plack, Christopher J
2015-03-04
When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or "consonance". Complex frequency ratios, on the other hand, evoke feelings of tension or "dissonance". Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance Index derived from the electrophysiological "frequency-following response." The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding. Copyright © 2015 Bones and Plack.
One in the Dance: Musical Correlates of Group Synchrony in a Real-World Club Environment
Ellamil, Melissa; Berson, Joshua; Wong, Jen; Buckley, Louis; Margulies, Daniel S.
2016-01-01
Previous research on interpersonal synchrony has mainly investigated small groups in isolated laboratory settings, which may not fully reflect the complex and dynamic interactions of real-life social situations. The present study expands on this by examining group synchrony across a large number of individuals in a naturalistic environment. Smartphone acceleration measures were recorded from participants during a music set in a dance club and assessed to identify how group movement synchrony covaried with various features of the music. In an evaluation of different preprocessing and analysis methods, giving more weight to front-back movement provided the most sensitive and reliable measure of group synchrony. During the club music set, group synchrony of torso movement was most strongly associated with pulsations that approximate walking rhythm (100–150 beats per minute). Songs with higher real-world play counts were also correlated with greater group synchrony. Group synchrony thus appears to be constrained by familiarity of the movement (walking action and rhythm) and of the music (song popularity). These findings from a real-world, large-scale social and musical setting can guide the development of methods for capturing and examining collective experiences in the laboratory and for effectively linking them to synchrony across people in daily life. PMID:27764167
Effects of Asymmetric Cultural Experiences on the Auditory Pathway Evidence from Music
Wong, Patrick C. M.; Perrachione, Tyler K.; Margulis, Elizabeth Hellmuth
2009-01-01
Cultural experiences come in many different forms, such as immersion in a particular linguistic community, exposure to faces of people with different racial backgrounds, or repeated encounters with music of a particular tradition. In most circumstances, these cultural experiences are asymmetric, meaning one type of experience occurs more frequently than other types (e.g., a person raised in India will likely encounter the Indian todi scale more so than a Westerner). In this paper, we will discuss recent findings from our laboratories that reveal the impact of short- and long-term asymmetric musical experiences on how the nervous system responds to complex sounds. We will discuss experiments examining how musical experience may facilitate the learning of a tone language, how musicians develop neural circuitries that are sensitive to musical melodies played on their instrument of expertise, and how even everyday listeners who have little formal training are particularly sensitive to music of their own culture(s). An understanding of these cultural asymmetries is useful in formulating a more comprehensive model of auditory perceptual expertise that considers how experiences shape auditory skill levels. Such a model has the potential to aid in the development of rehabilitation programs for the efficacious treatment of neurologic impairments. PMID:19673772
A coupled duration-focused architecture for real-time music-to-score alignment.
Cont, Arshia
2010-06-01
The capacity for real-time synchronization and coordination is a common ability among trained musicians performing from a music score, and it presents an interesting challenge for machine intelligence. Compared to speech recognition, which has influenced many music information retrieval systems, music's temporal dynamics and complexity pose challenging problems to common approximations regarding time modeling of data streams. In this paper, we propose a design for a real-time music-to-score alignment system. Given a live recording of a musician playing a music score, the system is capable of following the musician in real time within the score and decoding the tempo (or pace) of the performance. The proposed design features two coupled audio and tempo agents within a unique probabilistic inference framework that adaptively updates its parameters based on the real-time context. Online decoding is achieved through the collaboration of the coupled agents in a Hidden Hybrid Markov/semi-Markov framework, where prediction feedback of one agent affects the behavior of the other. We perform evaluations for both real-time alignment and the proposed temporal model. An implementation of the presented system has been widely used in real concert situations worldwide, and readers are encouraged to access the actual system and experiment with it.
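The coupled audio/tempo agents and the hybrid Markov/semi-Markov machinery described above are not reproduced here. As a much simpler illustration of online score following, the sketch below runs a forward update over score positions in a toy left-to-right hidden Markov model with Gaussian pitch observations; all names, pitches, and parameter values are hypothetical.

```python
# Hedged illustration only: a toy left-to-right HMM score follower, not the
# coupled audio/tempo semi-Markov architecture described in the abstract.
import numpy as np

def follow_score(score_pitches, observed_pitches, self_prob=0.6, sigma=0.5):
    """Online forward update over score positions given noisy pitch observations."""
    n = len(score_pitches)
    belief = np.zeros(n)
    belief[0] = 1.0                      # assume the performance starts at the first note
    positions = []
    for obs in observed_pitches:
        # left-to-right transition: stay on the current note or advance by one
        advanced = self_prob * belief
        advanced[1:] += (1.0 - self_prob) * belief[:-1]
        # Gaussian likelihood of the observed pitch under each score note
        lik = np.exp(-0.5 * ((obs - np.asarray(score_pitches, float)) / sigma) ** 2)
        belief = advanced * lik
        belief /= belief.sum() + 1e-12
        positions.append(int(np.argmax(belief)))
    return positions

if __name__ == "__main__":
    score = [60, 62, 64, 65, 67]                              # MIDI pitches of the score
    performance = [60.1, 60.0, 62.2, 63.9, 64.1, 65.0, 67.2]  # noisy detected pitches
    print(follow_score(score, performance))                   # estimated score positions over time
```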
One in the Dance: Musical Correlates of Group Synchrony in a Real-World Club Environment.
Ellamil, Melissa; Berson, Joshua; Wong, Jen; Buckley, Louis; Margulies, Daniel S
2016-01-01
Previous research on interpersonal synchrony has mainly investigated small groups in isolated laboratory settings, which may not fully reflect the complex and dynamic interactions of real-life social situations. The present study expands on this by examining group synchrony across a large number of individuals in a naturalistic environment. Smartphone acceleration measures were recorded from participants during a music set in a dance club and assessed to identify how group movement synchrony covaried with various features of the music. In an evaluation of different preprocessing and analysis methods, giving more weight to front-back movement provided the most sensitive and reliable measure of group synchrony. During the club music set, group synchrony of torso movement was most strongly associated with pulsations that approximate walking rhythm (100-150 beats per minute). Songs with higher real-world play counts were also correlated with greater group synchrony. Group synchrony thus appears to be constrained by familiarity of the movement (walking action and rhythm) and of the music (song popularity). These findings from a real-world, large-scale social and musical setting can guide the development of methods for capturing and examining collective experiences in the laboratory and for effectively linking them to synchrony across people in daily life.
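The study's preprocessing pipeline is not public here; the sketch below shows one common way to quantify group movement synchrony, namely the mean pairwise correlation of front-back acceleration in short sliding windows. The data, the 100 Hz sampling rate, and the window sizes are assumptions for illustration only.

```python
# Hedged sketch (not the authors' pipeline): group synchrony as the mean
# pairwise correlation of front-back acceleration across participants,
# computed in sliding windows.
import numpy as np

def group_synchrony(acc, win=256, hop=128):
    """acc: array (n_participants, n_samples) of front-back acceleration."""
    n, t = acc.shape
    scores = []
    for start in range(0, t - win + 1, hop):
        seg = acc[:, start:start + win]
        c = np.corrcoef(seg)                 # n x n correlation matrix
        iu = np.triu_indices(n, k=1)         # upper triangle = unique participant pairs
        scores.append(c[iu].mean())          # mean pairwise correlation in this window
    return np.array(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(4096) / 100.0              # assumed 100 Hz smartphone accelerometer
    beat = np.sin(2 * np.pi * 2.0 * t)       # a shared 120 BPM pulse
    acc = beat + 0.5 * rng.standard_normal((12, t.size))   # 12 partially synchronized dancers
    print(group_synchrony(acc).round(2))
```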
Dolor, Rowena J; Greene, Sarah M; Thompson, Ella; Baldwin, Laura-Mae; Neale, Anne Victoria
2011-08-01
Objective: This project aimed to develop an open-access website providing adaptable resources to facilitate best practices for multisite research from initiation to closeout. Methods: A web-based assessment was sent to the leadership of the Clinical and Translational Science Award (CTSA) Community Engagement Key Functions Committee (n= 38) and the CTSA-affiliated Primary Care Practice-based Research Networks (PBRN, n= 55). Respondents rated the benefits and barriers of multisite research, the utility of available resources, and indicated their level of interest in unavailable resources. Then, existing research resources were evaluated for relevance to multisite research, adaptability to other projects, and source credibility. Results: Fifty-five (59%) of invited participants completed the survey. Top perceived benefits of multisite research were the ability to conduct community-relevant research through academic-community partnerships (34%) and accelerating translation of research into practice (31%). Top perceived barriers were lack of research infrastructure to support PBRNs and community partners (31%) and inadequate funding to support multisite collaborations (26%). Over 200 resources were evaluated, of which 120 unique resources were included in the website. Conclusion: The PRIMER Research Toolkit (http://www.researchtoolkit.org) provides an array of peer-reviewed resources to facilitate translational research for the conduct of multisite studies within PBRNs and community-based organizations. © 2011 Wiley Periodicals, Inc.
Dolor, Rowena J.; Greene, Sarah M.; Thompson, Ella; Baldwin, Laura‐Mae; Neale, Anne Victoria
2011-01-01
Abstract Objective: This project aimed to develop an open‐access website providing adaptable resources to facilitate best practices for multisite research from initiation to closeout. Methods: A web‐based assessment was sent to the leadership of the Clinical and Translational Science Award (CTSA) Community Engagement Key Functions Committee (n= 38) and the CTSA‐affiliated Primary Care Practice‐based Research Networks (PBRN, n= 55). Respondents rated the benefits and barriers of multisite research, the utility of available resources, and indicated their level of interest in unavailable resources. Then, existing research resources were evaluated for relevance to multisite research, adaptability to other projects, and source credibility. Results: Fifty‐five (59%) of invited participants completed the survey. Top perceived benefits of multisite research were the ability to conduct community‐relevant research through academic–community partnerships (34%) and accelerating translation of research into practice (31%). Top perceived barriers were lack of research infrastructure to support PBRNs and community partners (31%) and inadequate funding to support multisite collaborations (26%). Over 200 resources were evaluated, of which 120 unique resources were included in the website. Conclusion: The PRIMER Research Toolkit (http://www.researchtoolkit.org) provides an array of peer‐reviewed resources to facilitate translational research for the conduct of multisite studies within PBRNs and community‐based organizations. Clin Trans Sci 2011; Volume 4: 259–265 PMID:21884512
NASA Astrophysics Data System (ADS)
de León, Jesús Ponce; Beltrán, José Ramón
2012-12-01
In this study, a new method of blind audio source separation (BASS) of monaural musical harmonic notes is presented. The input (mixed notes) signal is processed using a flexible analysis and synthesis algorithm (complex wavelet additive synthesis, CWAS), which is based on the complex continuous wavelet transform. When the harmonics from two or more sources overlap in a certain frequency band (or group of bands), a new technique based on amplitude similarity criteria is used to obtain an approximation to the original partial information. The aim is to show that the CWAS algorithm can be a powerful tool in BASS. Compared with other existing techniques, the main advantages of the proposed algorithm are its accuracy in the instantaneous phase estimation, its synthesis capability, and that the only input information needed is the mixed signal itself. A set of synthetically mixed monaural isolated notes has been analyzed using this method in eight different experiments: the same instrument playing two notes within the same octave and two harmonically related notes (5th and 12th intervals), two different musical instruments playing 5th and 12th intervals, two different instruments playing non-harmonic notes, major and minor chords played by the same musical instrument, three different instruments playing non-harmonically related notes, and finally the mixture of an inharmonic instrument (piano) and a harmonic instrument. The results obtained show the strength of the technique.
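The CWAS algorithm itself is not reproduced here. Assuming the PyWavelets package is available, the sketch below only illustrates the underlying idea: a complex continuous wavelet transform (complex Morlet) yields per-band instantaneous amplitude and phase from a two-note mixture, the raw material an additive resynthesis would work from. Signal, band range, and wavelet parameters are assumptions.

```python
# Hedged sketch: complex CWT analysis of a toy two-note mixture with PyWavelets,
# not the CWAS separation algorithm described in the abstract.
import numpy as np
import pywt

fs = 8000.0
t = np.arange(int(fs)) / fs                               # one second of audio
# toy stand-in for the mixed input: two overlapping harmonic notes
sig = np.sin(2 * np.pi * 220 * t) + 0.7 * np.sin(2 * np.pi * 330 * t)

wavelet = "cmor1.5-1.0"                                   # complex Morlet wavelet
freqs_hz = np.linspace(100, 500, 80)                      # analysis bands of interest
scales = pywt.central_frequency(wavelet) / (freqs_hz / fs)  # scales for those bands
coefs, freqs = pywt.cwt(sig, scales, wavelet, sampling_period=1 / fs)

amplitude = np.abs(coefs)                                 # instantaneous amplitude per band
phase = np.angle(coefs)                                   # instantaneous phase per band
strongest = freqs[np.argmax(amplitude.mean(axis=1))]
print(f"strongest band near {strongest:.0f} Hz")          # expect roughly 220 Hz
```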
O’Kelly, Julian; James, L.; Palaniappan, R.; Taborin, J.; Fachner, J.; Magee, W. L.
2013-01-01
Assessment of awareness for those with disorders of consciousness is a challenging undertaking, due to the complex presentation of the population. Debate surrounds whether behavioral assessments provide greatest accuracy in diagnosis compared to neuro-imaging methods, and despite developments in both, misdiagnosis rates remain high. Music therapy may be effective in the assessment and rehabilitation with this population due to effects of musical stimuli on arousal, attention, and emotion, irrespective of verbal or motor deficits. However, an evidence base is lacking as to which procedures are most effective. To address this, a neurophysiological and behavioral study was undertaken comparing electroencephalogram (EEG), heart rate variability, respiration, and behavioral responses of 20 healthy subjects with 21 individuals in vegetative or minimally conscious states (VS or MCS). Subjects were presented with live preferred music and improvised music entrained to respiration (procedures typically used in music therapy), recordings of disliked music, white noise, and silence. ANOVA tests indicated a range of significant responses (p ≤ 0.05) across healthy subjects corresponding to arousal and attention in response to preferred music including concurrent increases in respiration rate with globally enhanced EEG power spectra responses (p = 0.05–0.0001) across frequency bandwidths. Whilst physiological responses were heterogeneous across patient cohorts, significant post hoc EEG amplitude increases for stimuli associated with preferred music were found for frontal midline theta in six VS and four MCS subjects, and frontal alpha in three VS and four MCS subjects (p = 0.05–0.0001). Furthermore, behavioral data showed a significantly increased blink rate for preferred music (p = 0.029) within the VS cohort. Two VS cases are presented with concurrent changes (p ≤ 0.05) across measures indicative of discriminatory responses to both music therapy procedures. A third MCS case study is presented highlighting how more sensitive selective attention may distinguish MCS from VS. The findings suggest that further investigation is warranted to explore the use of music therapy for prognostic indicators, and its potential to support neuroplasticity in rehabilitation programs. PMID:24399950
Human-based percussion and self-similarity detection in electroacoustic music
NASA Astrophysics Data System (ADS)
Mills, John Anderson, III
Electroacoustic music is music that uses electronic technology for the compositional manipulation of sound, and is a unique genre of music for many reasons. Analyzing electroacoustic music requires special measures, some of which are integrated into the design of a preliminary percussion analysis tool set for electroacoustic music. This tool set is designed to incorporate the human processing of music and sound. Models of the human auditory periphery are used as a front end to the analysis algorithms. The audio properties of percussivity and self-similarity are chosen as the focus because these properties are computable and informative. A collection of human judgments about percussion was undertaken to acquire clearly specified, sound-event dimensions that humans use as a percussive cue. A total of 29 participants was asked to make judgments about the percussivity of 360 pairs of synthesized snare-drum sounds. The grouped results indicate that of the dimensions tested rise time is the strongest cue for percussivity. String resonance also has a strong effect, but because of the complex nature of string resonance, it is not a fundamental dimension of a sound event. Gross spectral filtering also has an effect on the judgment of percussivity but the effect is weaker than for rise time and string resonance. Gross spectral filtering also has less effect when the stronger cue of rise time is modified simultaneously. A percussivity-profile algorithm (PPA) is designed to identify those instants in pieces of music that humans also would identify as percussive. The PPA is implemented using a time-domain, channel-based approach and psychoacoustic models. The input parameters are tuned to maximize performance at matching participants' choices in the percussion-judgment collection. After the PPA is tuned, the PPA then is used to analyze pieces of electroacoustic music. Real electroacoustic music introduces new challenges for the PPA, though those same challenges might affect human judgment as well. A similarity matrix is combined with the PPA in order to find self-similarity in the percussive sounds of electroacoustic music. This percussive similarity matrix is then used to identify structural characteristics in two pieces of electroacoustic music.
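As a minimal illustration of the self-similarity analysis mentioned above (not the PPA or its psychoacoustic front end), the sketch below builds a cosine self-similarity matrix over framed feature vectors; a repeated motif shows up as an off-diagonal stripe of high similarity. The feature frames are synthetic.

```python
# Hedged sketch: a generic cosine self-similarity matrix over framed features,
# the usual starting point for locating repeated percussive material.
import numpy as np

def self_similarity(features):
    """features: array (n_frames, n_dims); returns an (n_frames, n_frames) cosine SSM."""
    norms = np.linalg.norm(features, axis=1, keepdims=True) + 1e-12
    unit = features / norms
    return unit @ unit.T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    motif = rng.standard_normal((8, 12))                    # a repeated 8-frame percussive motif
    features = np.vstack([motif, rng.standard_normal((8, 12)), motif])
    ssm = self_similarity(features)
    # the exact repetition appears as a stripe of 1.0 where frames 16-23 meet frames 0-7
    print(np.diag(ssm[16:24, 0:8]).round(2))
```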
Jones, Rachel A; Warren, Janet M; Okely, Anthony D; Collins, Clare E; Morgan, Philip J; Cliff, Dylan P; Burrows, Tracy; Cleary, Jane; Baur, Louise A
2010-11-01
The purposes of this article are to (a) outline findings from secondary or process outcome data of the Hunter Illawarra Kids Challenge Using Parent Support (HIKCUPS) study and (b) inform the design and development of future research interventions and practice in the management of child obesity. Data were collected by means of facilitator evaluations, independent session observation, attendance records, and parent questionnaires. Internal validity and reliability of the program delivery were high. All parents reported positive changes in their children as a result of the physical activity program, the dietary modification program, or both. Most participants completed the home activities, but more than half reported that finding time to do them was problematic. Facilitator review indicated that future programs should specifically cater to children of similar age or same sex, allow adequate time for explanation of complex nutritional concepts, and use intrinsic motivators for participants. Recommendations for future interventions, specifically the implementation of subsequent HIKCUPS or other multisite effectiveness studies, are detailed.
Streamflow prediction using multi-site rainfall obtained from hydroclimatic teleconnection
NASA Astrophysics Data System (ADS)
Kashid, S. S.; Ghosh, Subimal; Maity, Rajib
2010-12-01
Simultaneous variations in weather and climate over widely separated regions are commonly known as "hydroclimatic teleconnections". Rainfall and runoff patterns over continents are found to be significantly teleconnected with large-scale circulation patterns through such hydroclimatic teleconnections. Though such teleconnections exist in nature, they are very difficult to model because of their inherent complexity. Statistical techniques and Artificial Intelligence (AI) tools have gained popularity for modeling hydroclimatic teleconnections because of their ability to capture the complicated relationship between the predictors (e.g., sea surface temperatures) and the predictand (e.g., rainfall). Genetic Programming is one such AI tool, capable of capturing nonlinear relationships between predictors and predictand thanks to its flexible functional structure. In the present study, gridded multi-site weekly rainfall is predicted from El Niño Southern Oscillation (ENSO) indices, Equatorial Indian Ocean Oscillation (EQUINOO) indices, Outgoing Longwave Radiation (OLR), and lagged rainfall at grid points over the catchment using Genetic Programming. The predicted rainfall is then used in a Genetic Programming model to predict streamflows. The model is applied for weekly forecasting of streamflow in the Mahanadi River, India, and satisfactory performance is observed.
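The paper's Genetic Programming models are not available here. Assuming the third-party gplearn package is installed, the sketch below shows the general shape of such a model, evolving a symbolic regression from stand-in climate predictors to synthetic rainfall and scoring it on held-out data; all predictor names and hyperparameters are illustrative.

```python
# Hedged sketch: symbolic (Genetic Programming) regression on synthetic data,
# standing in for the predictor-to-rainfall models described in the abstract.
import numpy as np
from gplearn.genetic import SymbolicRegressor   # assumed third-party dependency

rng = np.random.default_rng(0)
n = 400
# stand-ins for ENSO index, EQUINOO index, OLR anomaly, and lagged rainfall
X = rng.standard_normal((n, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] * X[:, 2] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

gp = SymbolicRegressor(population_size=500, generations=10,
                       function_set=("add", "sub", "mul"),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X[:300], y[:300])                        # train on the first 300 "weeks"
print("held-out R^2:", round(gp.score(X[300:], y[300:]), 2))
print("evolved expression:", gp._program)       # the best symbolic formula found
```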
Butel, Jean; Braun, Kathryn L; Novotny, Rachel; Acosta, Mark; Castro, Rose; Fleming, Travis; Powers, Julianne; Nigg, Claudio R
2015-12-01
Addressing complex chronic disease prevention, like childhood obesity, requires a multi-level, multi-component culturally relevant approach with broad reach. Models are lacking to guide fidelity monitoring across multiple levels, components, and sites engaged in such interventions. The aim of this study is to describe the fidelity-monitoring approach of The Children's Healthy Living (CHL) Program, a multi-level multi-component intervention in five Pacific jurisdictions. A fidelity-monitoring rubric was developed. About halfway during the intervention, community partners were randomly selected and interviewed independently by local CHL staff and by Coordinating Center representatives to assess treatment fidelity. Ratings were compared and discussed by local and Coordinating Center staff. There was good agreement between the teams (Kappa = 0.50, p < 0.001), and intervention improvement opportunities were identified through data review and group discussion. Fidelity for the multi-level, multi-component, multi-site CHL intervention was successfully assessed, identifying adaptations as well as ways to improve intervention delivery prior to the end of the intervention.
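Rater agreement of the kind reported above (Kappa = 0.50) is typically computed with Cohen's kappa. The sketch below shows the calculation with scikit-learn on purely hypothetical rubric scores from local and Coordinating Center raters.

```python
# Hedged sketch: chance-corrected agreement between two fidelity raters;
# the scores below are hypothetical, not CHL data.
from sklearn.metrics import cohen_kappa_score

local_ratings   = [3, 2, 3, 1, 2, 3, 2, 1, 3, 2]   # hypothetical scores from local staff
central_ratings = [3, 2, 2, 1, 2, 3, 3, 1, 3, 2]   # hypothetical scores from the Coordinating Center
kappa = cohen_kappa_score(local_ratings, central_ratings)
print(f"Cohen's kappa = {kappa:.2f}")
```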
From Vivaldi to Beatles and back: predicting lateralized brain responses to music.
Alluri, Vinoo; Toiviainen, Petri; Lund, Torben E; Wallentin, Mikkel; Vuust, Peter; Nandi, Asoke K; Ristaniemi, Tapani; Brattico, Elvira
2013-12-01
We aimed at predicting the temporal evolution of brain activity in naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations, which were then tested in a cross-validation setting in order to evaluate the robustness of the resulting models across stimuli. To further assess the generalizability of the models we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several areas in the brain belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure that processed complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supportive evidence for the hemispheric specialization for categorical sounds with realistic stimuli. We herewith introduce a powerful means to predict brain responses to music, speech, or soundscapes across a large variety of contexts. © 2013.
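The authors' encoding models are not reproduced here; the sketch below only illustrates the general cross-stimulus validation logic, fitting a ridge regression from acoustic features to a single synthetic voxel time course on one "medley" and testing it on another. Feature names and dimensions are assumptions.

```python
# Hedged sketch: an encoding-model style regression with cross-stimulus testing,
# on synthetic data rather than fMRI recordings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_tr, n_feat = 300, 6                      # fMRI volumes per stimulus, acoustic features
w_true = rng.standard_normal(n_feat)

def simulate(n):
    X = rng.standard_normal((n, n_feat))   # e.g. brightness, pulse clarity, RMS energy...
    y = X @ w_true + 0.5 * rng.standard_normal(n)   # one voxel's simulated response
    return X, y

X_train, y_train = simulate(n_tr)          # stimulus 1 (training medley)
X_test, y_test = simulate(n_tr)            # stimulus 2 (held-out medley)

model = Ridge(alpha=1.0).fit(X_train, y_train)
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"cross-stimulus prediction accuracy r = {r:.2f}")
```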
Borella, Erika; Carretti, Barbara; Grassi, Massimo; Nucci, Massimo; Sciore, Roberta
2014-01-01
There is evidence showing that music can affect cognitive performance by improving our emotional state. The aim of the current study was to analyze whether age-related differences between young and older adults in a Working Memory (WM) Span test, in which the stimuli to be recalled have a different valence (i.e., neutral, positive, or negative words), are sensitive to exposure to music. Because some previous studies showed that emotional words can sustain older adults' performance in WM, we examined whether listening to music could enhance the benefit of emotional material, with respect to neutral words, on WM performance, decreasing the age-related difference between younger and older adults. In particular, the effect of two types of music (Mozart vs. Albinoni), which differ in tempo, arousal and mood induction, on age-related differences in an affective version of the Operation WM Span task was analyzed. Results showed no effect of music on the WM test regardless of the emotional content of the music (Mozart vs. Albinoni). However, a valence effect for the words in the WM task was found, with a higher number of negative words recalled with respect to positive and neutral ones in both younger and older adults. When individual differences in terms of accuracy in the processing phase of the Operation Span task were considered, only younger low-performing participants were affected by the type of music, with the Albinoni condition lowering their performance with respect to the Mozart condition. Such a result suggests that individual differences in WM performance, at least when young adults are considered, could be affected by the type of music. Altogether, these findings suggest that complex span tasks, such as WM tasks, and the associated age-related differences are not sensitive to music effects.
Küssner, Mats B; de Groot, Annette M B; Hofman, Winni F; Hillen, Marij A
2016-01-01
As tantalizing as the idea that background music beneficially affects foreign vocabulary learning may seem, there is (partly due to a lack of theory-driven research) no consistent evidence to support this notion. We investigated inter-individual differences in the effects of background music on foreign vocabulary learning. Based on Eysenck's theory of personality we predicted that individuals with a high level of cortical arousal should perform worse when learning with background music compared to silence, whereas individuals with a low level of cortical arousal should be unaffected by background music or benefit from it. Participants were tested in a paired-associate learning paradigm consisting of three immediate word recall tasks, as well as a delayed recall task one week later. Baseline cortical arousal assessed with spontaneous EEG measurement in silence prior to the learning rounds was used for the analyses. Results revealed no interaction between cortical arousal and the learning condition (background music vs. silence). Instead, we found an unexpected main effect of cortical arousal in the beta band on recall, indicating that individuals with high beta power learned more vocabulary than those with low beta power. To substantiate this finding we conducted an exact replication of the experiment. Whereas the main effect of cortical arousal was only present in a subsample of participants, a beneficial main effect of background music appeared. A combined analysis of both experiments suggests that beta power predicts the performance in the word recall task, but that there is no effect of background music on foreign vocabulary learning. In light of these findings, we discuss whether searching for effects of background music on foreign vocabulary learning, independent of factors such as inter-individual differences and task complexity, might be a red herring. Importantly, our findings emphasize the need for sufficiently powered research designs and exact replications of theory-driven experiments when investigating effects of background music and inter-individual variation on task performance.
de Groot, Annette M. B.; Hofman, Winni F.; Hillen, Marij A.
2016-01-01
As tantalizing as the idea that background music beneficially affects foreign vocabulary learning may seem, there is—partly due to a lack of theory-driven research—no consistent evidence to support this notion. We investigated inter-individual differences in the effects of background music on foreign vocabulary learning. Based on Eysenck’s theory of personality we predicted that individuals with a high level of cortical arousal should perform worse when learning with background music compared to silence, whereas individuals with a low level of cortical arousal should be unaffected by background music or benefit from it. Participants were tested in a paired-associate learning paradigm consisting of three immediate word recall tasks, as well as a delayed recall task one week later. Baseline cortical arousal assessed with spontaneous EEG measurement in silence prior to the learning rounds was used for the analyses. Results revealed no interaction between cortical arousal and the learning condition (background music vs. silence). Instead, we found an unexpected main effect of cortical arousal in the beta band on recall, indicating that individuals with high beta power learned more vocabulary than those with low beta power. To substantiate this finding we conducted an exact replication of the experiment. Whereas the main effect of cortical arousal was only present in a subsample of participants, a beneficial main effect of background music appeared. A combined analysis of both experiments suggests that beta power predicts the performance in the word recall task, but that there is no effect of background music on foreign vocabulary learning. In light of these findings, we discuss whether searching for effects of background music on foreign vocabulary learning, independent of factors such as inter-individual differences and task complexity, might be a red herring. Importantly, our findings emphasize the need for sufficiently powered research designs and exact replications of theory-driven experiments when investigating effects of background music and inter-individual variation on task performance. PMID:27537520
Reduction of the Harmonic Series Influences Musical Enjoyment with Cochlear Implants
Nemer, John S.; Kohlberg, Gavriel D.; Mancuso, Dean M.; Griffin, Brianna M.; Certo, Michael V.; Chen, Stephanie Y.; Chun, Michael B.; Spitzer, Jaclyn B.; Lalwani, Anil K.
2016-01-01
Objective: Cochlear implantation is associated with poor music perception and enjoyment. Reducing music complexity has been shown to enhance music enjoyment in cochlear implant (CI) recipients. In this study, we assess the impact of harmonic series reduction on music enjoyment. Study Design: Prospective analysis of music enjoyment in normal-hearing (NH) individuals and CI recipients. Setting: Single tertiary academic medical center. Patients: NH adults (N=20) and CI users (N=8) rated the Happy Birthday song on three validated enjoyment modalities: musicality, pleasantness, and naturalness. Intervention: Subjective rating of music excerpts. Main Outcome Measures: Participants listened to seven different instruments play the melody, each with five levels of harmonic reduction (Full, F3+F2+F1+F0, F2+F1+F0, F1+F0, F0). NH participants listened to the segments both with and without CI simulation. Linear mixed effect models (LME) and likelihood ratio tests were used to assess the impact of harmonic reduction on enjoyment. Results: NH listeners without simulation rated segments with the first four harmonics (F3+F2+F1+F0) most pleasant and natural (p<0.001, p=0.004). NH listeners with simulation rated the first harmonic alone (F0) most pleasant and natural (p<0.001, p=0.003). Their ratings demonstrated a positive linear relationship between harmonic reduction and both pleasantness (slope estimate=0.030, SE=0.004, p<0.001, LME) and naturalness (slope estimate=0.012, SE=0.003, p=0.003, LME). CI recipients also found the first harmonic alone (F0) to be most pleasant (p=0.003), with a positive linear relationship between harmonic reduction and pleasantness (slope estimate=0.029, SE=0.008, p<0.001, LME). Conclusions: Harmonic series reduction increases music enjoyment in CI and NH individuals with or without CI simulation. Therefore, minimization of the harmonics may be a useful strategy for enhancing musical enjoyment among both NH and CI listeners. PMID:27755358
Reduction of the Harmonic Series Influences Musical Enjoyment With Cochlear Implants.
Nemer, John S; Kohlberg, Gavriel D; Mancuso, Dean M; Griffin, Brianna M; Certo, Michael V; Chen, Stephanie Y; Chun, Michael B; Spitzer, Jaclyn B; Lalwani, Anil K
2017-01-01
Cochlear implantation is associated with poor music perception and enjoyment. Reducing music complexity has been shown to enhance music enjoyment in cochlear implant (CI) recipients. In this study, we assess the impact of harmonic series reduction on music enjoyment. Prospective analysis of music enjoyment in normal-hearing (NH) individuals and CI recipients. Single tertiary academic medical center. NH adults (N = 20) and CI users (N = 8) rated the Happy Birthday song on three validated enjoyment modalities-musicality, pleasantness, and naturalness. Subjective rating of music excerpts. Participants listened to seven different instruments play the melody, each with five levels of harmonic reduction (Full, F3+F2+F1+F0, F2+F1+F0, F1+F0, F0). NH participants listened to the segments both with and without CI simulation. Linear mixed effect models (LME) and likelihood ratio tests were used to assess the impact of harmonic reduction on enjoyment. NH listeners without simulation rated segments with the first four harmonics (F3+F2+F1+F0) most pleasant and natural (p <0.001, p = 0.004). NH listeners with simulation rated the first harmonic alone (F0) most pleasant and natural (p <0.001, p = 0.003). Their ratings demonstrated a positive linear relationship between harmonic reduction and both pleasantness (slope estimate = 0.030, SE = 0.004, p <0.001, LME) and naturalness (slope estimate = 0.012, SE = 0.003, p = 0.003, LME). CI recipients also found the first harmonic alone (F0) to be most pleasant (p = 0.003), with a positive linear relationship between harmonic reduction and pleasantness (slope estimate = 0.029, SE = 0.008, p <0.001, LME). Harmonic series reduction increases music enjoyment in CI and NH individuals with or without CI simulation. Therefore, minimization of the harmonics may be a useful strategy for enhancing musical enjoyment among both NH and CI listeners.
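The exact stimulus-generation procedure is not given in the abstract. The sketch below shows one plausible way to synthesize a note at the reduction levels named above (F0 through F3+F2+F1+F0) by summing a fundamental with a chosen number of overtones; the duration, fundamental, and equal partial weights are assumptions, not the authors' method.

```python
# Hedged sketch: building one melody note at several harmonic-reduction levels.
import numpy as np

fs = 16000
t = np.arange(int(0.5 * fs)) / fs          # a 0.5 s note
f0 = 262.0                                 # assumed C4 fundamental

def note_with_harmonics(n_overtones):
    """Sum of the fundamental plus the first n_overtones harmonics, equal amplitudes."""
    partials = [np.sin(2 * np.pi * f0 * (k + 1) * t) for k in range(n_overtones + 1)]
    x = np.sum(partials, axis=0)
    return x / np.max(np.abs(x))           # normalize to avoid clipping

levels = {"F0": 0, "F1+F0": 1, "F2+F1+F0": 2, "F3+F2+F1+F0": 3}
waves = {name: note_with_harmonics(n) for name, n in levels.items()}
for name, w in waves.items():
    print(name, round(float(np.sqrt(np.mean(w ** 2))), 3))   # RMS of each reduction level
```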
Challenges and lessons learned in conducting comparative-effectiveness trials.
Herrick, Linda M; Locke, G Richard; Zinsmeister, Alan R; Talley, Nicholas J
2012-05-01
The current health-care environment is demanding evidence-based medicine that relies on clinical trials as the basis for decisions. Clinician investigators are more often finding that they are personally responsible for coordinating large, multisite trials. We present strategies for successful implementation and management of multisite clinical trials and knowledge gained through an international, multisite randomized clinical trial. Topics include team composition, regulatory requirements, study organization and governance, communication strategies, recruitment and retention efforts, budget, technology transfer, and publication.
Challenges and Lessons Learned in Conducting Comparative-Effectiveness Trials
Herrick, Linda M.; Locke, G. Richard; Zinsmeister, Alan R.; Talley, Nicholas J.
2014-01-01
The current health-care environment is demanding evidence-based medicine that relies on clinical trials as the basis for decisions. Clinician investigators are more often finding that they are personally responsible for coordinating large, multisite trials. We present strategies for successful implementation and management of multisite clinical trials and knowledge gained through an international, multisite randomized clinical trial. Topics include team composition, regulatory requirements, study organization and governance, communication strategies, recruitment and retention efforts, budget, technology transfer, and publication. PMID:22552235
Polston, J.E.; Rubbinaccio, H.Y.; Morra, J.T.; Sell, E.M.; Glick, S.D.
2011-01-01
Associations between drugs of abuse and cues facilitate the acquisition and maintenance of addictive behaviors. Although significant research has been done to elucidate the role that simple discriminative or discrete conditioned stimuli (e.g., a tone or a light) play in addiction, less is known about complex environmental cues. The purpose of the present study was to examine the role of a musical conditioned stimulus by assessing locomotor activity and in vivo microdialysis. Two groups of rats were given non-contingent injections of methamphetamine (1.0 mg/kg) or vehicle and placed in standard conditioning chambers. During these conditioning sessions both groups were exposed to a continuous conditioned stimulus, in the form of a musical selection (“Four” by Miles Davis) played repeatedly for ninety minutes. After seven consecutive conditioning days subjects were given one day of rest, and subsequently tested for locomotor activity or dopamine release in the absence of drug while the musical conditioned stimulus was continually present. The brain regions examined included the basolateral amygdala, nucleus accumbens, and prefrontal cortex. The results show that music is an effective contextual conditioned stimulus, significantly increasing locomotor activity after repeated association with methamphetamine. Furthermore, this musical conditioned stimulus significantly increased extracellular dopamine levels in the basolateral amygdala and nucleus accumbens. These findings support other evidence showing the importance of these brain regions in conditioned learning paradigms, and demonstrate that music is an effective conditioned stimulus warranting further investigation. PMID:21145911
Tanaka, Shoji; Kirino, Eiji
2016-01-01
Conceiving concrete mental imagery is critical for skillful musical expression and performance. The precuneus, a core component of the default mode network (DMN), is a hub of mental image processing that participates in functions such as episodic memory retrieval and imagining future events. The precuneus connects with many brain regions in the frontal, parietal, temporal, and occipital cortices. The aim of this study was to examine the effects of long-term musical training on the resting-state functional connectivity of the precuneus. Our hypothesis was that the functional connectivity of the precuneus is altered in musicians. We analyzed the functional connectivity of the precuneus using resting-state functional magnetic resonance imaging (fMRI) data recorded in female university students majoring in music and nonmusic disciplines. The results show that the music students had higher functional connectivity of the precuneus with opercular/insular regions, which are associated with interoceptive and emotional processing; Heschl's gyrus (HG) and the planum temporale (PT), which process complex tonal information; and the lateral occipital cortex (LOC), which processes visual information. Connectivity of the precuneus within the DMN did not differ between the two groups. Our finding suggests that functional connections between the precuneus and the regions outside of the DMN play an important role in musical performance. We propose that a neural network linking the precuneus with these regions contributes to translating mental imagery into information relevant to musical performance.
Tanaka, Shoji; Kirino, Eiji
2016-01-01
Conceiving concrete mental imagery is critical for skillful musical expression and performance. The precuneus, a core component of the default mode network (DMN), is a hub of mental image processing that participates in functions such as episodic memory retrieval and imagining future events. The precuneus connects with many brain regions in the frontal, parietal, temporal, and occipital cortices. The aim of this study was to examine the effects of long-term musical training on the resting-state functional connectivity of the precuneus. Our hypothesis was that the functional connectivity of the precuneus is altered in musicians. We analyzed the functional connectivity of the precuneus using resting-state functional magnetic resonance imaging (fMRI) data recorded in female university students majoring in music and nonmusic disciplines. The results show that the music students had higher functional connectivity of the precuneus with opercular/insular regions, which are associated with interoceptive and emotional processing; Heschl’s gyrus (HG) and the planum temporale (PT), which process complex tonal information; and the lateral occipital cortex (LOC), which processes visual information. Connectivity of the precuneus within the DMN did not differ between the two groups. Our finding suggests that functional connections between the precuneus and the regions outside of the DMN play an important role in musical performance. We propose that a neural network linking the precuneus with these regions contributes to translating mental imagery into information relevant to musical performance. PMID:27445765
Mudge, Alison M; Banks, Merrilyn D; Barnett, Adrian G; Blackberry, Irene; Graves, Nicholas; Green, Theresa; Harvey, Gillian; Hubbard, Ruth E; Inouye, Sharon K; Kurrle, Sue; Lim, Kwang; McRae, Prue; Peel, Nancye M; Suna, Jessica; Young, Adrienne M
2017-01-09
Older inpatients are at risk of hospital-associated geriatric syndromes including delirium, functional decline, incontinence, falls and pressure injuries. These contribute to longer hospital stays, loss of independence, and death. Effective interventions to reduce geriatric syndromes remain poorly implemented due to their complexity, and require an organised approach to change care practices and systems. Eat Walk Engage is a complex multi-component intervention with structured implementation, which has shown reduced geriatric syndromes and length of stay in pilot studies at one hospital. This study will test effectiveness of implementing Eat Walk Engage using a multi-site cluster randomised trial to inform transferability of this intervention. A hybrid study design will evaluate the effectiveness and implementation strategy of Eat Walk Engage in a real-world setting. A multisite cluster randomised study will be conducted in 8 medical and surgical wards in 4 hospitals, with one ward in each site randomised to implement Eat Walk Engage (intervention) and one to continue usual care (control). Intervention wards will be supported to develop and implement locally tailored strategies to enhance early mobility, nutrition, and meaningful activities. Resources will include a trained, mentored facilitator, audit support, a trained healthcare assistant, and support by an expert facilitator team using the i-PARIHS implementation framework. Patient outcomes and process measures before and after intervention will be compared between intervention and control wards. Primary outcomes are any hospital-associated geriatric syndrome (delirium, functional decline, falls, pressure injuries, new incontinence) and length of stay. Secondary outcomes include discharge destination; 30-day mortality, function and quality of life; 6 month readmissions; and cost-effectiveness. Process measures including patient interviews, activity mapping and mealtime audits will inform interventions in each site and measure improvement progress. Factors influencing the trajectory of implementation success will be monitored on implementation wards. Using a hybrid design and guided by an explicit implementation framework, the CHERISH study will establish the effectiveness, cost-effectiveness and transferability of a successful pilot program for improving care of older inpatients, and identify features that support successful implementation. ACTRN12615000879561 registered prospectively 21/8/2015.
Self-assembly of metal nanostructures on binary alloy surfaces
Duguet, T.; Han, Yong; Yuen, Chad; Jing, Dapeng; Ünal, Barış; Evans, J. W.; Thiel, P. A.
2011-01-01
Deposition of metals on binary alloy surfaces offers new possibilities for guiding the formation of functional metal nanostructures. This idea is explored with scanning tunneling microscopy studies and atomistic-level analysis and modeling of nonequilibrium island formation. For Au/NiAl(110), complex monolayer structures are found and compared with the simple fcc(110) bilayer structure recently observed for Ag/NiAl(110). We also consider a more complex codeposition system, (Ni + Al)/NiAl(110), which offers the opportunity for fundamental studies of self-growth of alloys including deviations from equilibrium ordering. A general multisite lattice-gas model framework enables analysis of structure selection and morphological evolution in these systems. PMID:21097706
Mapping aesthetic musical emotions in the brain.
Trost, Wiebke; Ethofer, Thomas; Zentner, Marcel; Vuilleumier, Patrik
2012-12-01
Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions.
Motor responses to a steady beat.
Schaefer, Rebecca S; Overy, Katie
2015-03-01
It is increasingly well established that music containing an isochronous pulse elicits motor responses at the levels of both brain and behavior. Such motor responses are often used in pedagogical and clinical practice to induce movement, particularly where motor functions are impaired. However, the complex nature of such apparently universal human responses has, arguably, not received adequate research attention to date. In particular, it should be noted that many adults, including those with disabilities, find it somewhat difficult to synchronize their movements with a beat with perfect accuracy; indeed, perfecting the skill of being musically "in time" can take years of training during childhood. Further research is needed on the nature of both the specificity and range of motor responses that can arise from the perception of a steady auditory pulse, with different populations, musical stimuli, conditions, and required levels of accuracy in order to better understand and capture the potential value of the musical beat as a pedagogical and therapeutic tool. © 2015 New York Academy of Sciences.
Fractal-Based Analysis of the Influence of Music on Human Respiration
NASA Astrophysics Data System (ADS)
Reza Namazi, H.
An important challenge in respiration-related studies is to investigate the influence of external stimuli on human respiration. Auditory stimuli are an important type of stimulus that influences human respiration. However, no trend has been discovered that relates the characteristics of the auditory stimuli to the characteristics of the respiratory signal. In this paper, we investigate the correlation between auditory stimuli and the respiratory signal from a fractal point of view. We found that the fractal structure of the respiratory signal is correlated with the fractal structure of the applied music. Based on the obtained results, music with a greater fractal dimension results in a respiratory signal with a smaller fractal dimension. In order to verify this result, we use approximate entropy. The results show that the respiratory signal has a smaller approximate entropy when the chosen music has a smaller approximate entropy. The method of analysis could be extended to analyze the variations of different physiological time series in response to various types of stimuli when complexity is the main concern.
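The paper's fractal and entropy computations are not reproduced here. The sketch below is a standard approximate-entropy implementation of the kind that could be applied to a respiratory trace or a music envelope; the parameter choices (m = 2, r = 0.2 SD) are common defaults, not the study's settings, and the signals are synthetic.

```python
# Hedged sketch: a textbook approximate-entropy (ApEn) implementation.
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, float)
    n = x.size
    r = r_factor * x.std()                 # tolerance relative to the signal's SD

    def phi(m):
        # all length-m templates of the series
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distances between every pair of templates
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)          # fraction of templates within tolerance r
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(1000)
    regular = np.sin(2 * np.pi * t / 50)                   # highly regular signal
    irregular = regular + rng.standard_normal(1000)        # same rhythm plus noise
    print(round(approximate_entropy(regular), 3),          # lower ApEn (more regular)
          round(approximate_entropy(irregular), 3))        # higher ApEn (less regular)
```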
Magee, Wendy L; O'Kelly, Julian
2015-03-01
Patients with prolonged disorders of consciousness (PDOC) stemming from acquired brain injury present one of the most challenging clinical populations in neurological rehabilitation. Because of the complex clinical presentation of PDOC patients, treatment teams are confronted with many medicolegal, ethical, philosophical, moral, and religious issues in day-to-day care. Accurate diagnosis is of central concern, relying on creative approaches from skilled clinical professionals using combined behavioral and neurophysiological measures. This paper presents the latest evidence for using music as a diagnostic tool with PDOC, including recent developments in music therapy interventions and measurement. We outline standardized clinical protocols and behavioral measures to produce diagnostic outcomes and examine recent research illustrating a range of benefits of music-based methods at behavioral, cardiorespiratory, and cortical levels using video, electrocardiography, and electroencephalography methods. These latest developments are discussed in the context of evidence-based practice in rehabilitation with clinical populations. © 2014 New York Academy of Sciences.
Mapping Aesthetic Musical Emotions in the Brain
Ethofer, Thomas; Zentner, Marcel; Vuilleumier, Patrik
2012-01-01
Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions. PMID:22178712
NASA Astrophysics Data System (ADS)
Majumdar, Dhrubajyoti; Surendra Babu, M. S.; Das, Sourav; Biswas, Jayanta Kumar; Mondal, Monojit; Hazra, Suman
2017-06-01
A unique thiocyanato-linked 1D-chain Zn(II) coordination polymer, [Zn2L1(μ1,3-SCN)(η1SCN)]n (1), has been synthesized using a potentially multisite, compartmental N,O-donor Schiff base blocking ligand (L1H2) in the presence of Zn(OAc)2 and KSCN. The Schiff base ligand [N,N′-bis(3-methoxysalicylidenimino)-1,3-diaminopropane] (L1H2) is the 2:1 molar-ratio condensation product of o-vanillin and 1,3-diaminopropane in methanol. Complex 1 was characterized by elemental analysis, IR, UV-Vis, 1H NMR, and emission spectroscopy, as well as by a single-crystal X-ray crystallographic study. Complex 1 crystallizes in the orthorhombic system, space group Pbca, with a = 11.579(2), b = 18.538(3), and c = 22.160(4) Å; α = β = γ = 90.00°; V = 4756.6(14) Å³; and Z = 8. The single-crystal X-ray study revealed a one-dimensional chain system with the repeating unit [Zn2(μ1,3-SCN)(η1SCN)(L1)]n bridged by an end-to-end μ1,3-thiocyanate anion. Within each repeating unit two different types of Zn(II) ions are present. One of these is five-coordinate in a square pyramidal geometry while the other is six-coordinate in an octahedral geometry. A brief comparison between the Schiff base (L1H2) and complex 1 with respect to their photoluminescence activity is presented. The active luminescence behavior of complex 1, compared with the free ligand (L1H2), is attributed to quenching of the PET process mediated by the chelating effect. Complex 1 exhibits strong antimicrobial efficacy against several important Gram-positive and Gram-negative bacteria. In addition to the antimicrobial studies, a combined experimental and theoretical (DFT) investigation of the molecular structure of complex 1 has been performed together with Hirshfeld surface analysis.
Harmonic Medicine: The Influence of Music Over Mind and Medical Practice
Kobets, Andrew Joshua
2011-01-01
The Yale Medical Orchestra displayed exceptional talent and inspiration as it performed a timeless composition to celebrate Yale School of Medicine’s bicentennial anniversary during a December 2010 concert. Under the leadership of musical directors Robert Smith and Adrian Slywotzky, the richly emotional meditations of Mendelssohn, Dvorak, Schubert, and Yale’s own Thomas C. Duffy filled the minds and hearts of an audience as diverse as the orchestra. I intend to retrace the steps of that melodic journey in this essay, fully aware of the limits imposed on me to recreate the aural art form through the medium of text. While these symbols can be pale representations of the beauty and complexity of the music, I hope they will be the building blocks for the emotional experience of the audience. I describe the works’ inception and their salient musical features and then review what we know about the effects of melody, meter, and timbre on our brains. My intentions are to provide evidence to encourage the further use of music as a tool in medical practice, provide interest in the works explored by the Yale orchestra, support the orchestra itself, and investigate a personal passion. PMID:21698051
The right inferior frontal gyrus processes nested non-local dependencies in music.
Cheung, Vincent K M; Meyer, Lars; Friederici, Angela D; Koelsch, Stefan
2018-02-28
Complex auditory sequences known as music have often been described as hierarchically structured. This permits the existence of non-local dependencies, which relate elements of a sequence beyond their temporal sequential order. Previous studies in music have reported differential activity in the inferior frontal gyrus (IFG) when comparing regular and irregular chord-transitions based on theories in Western tonal harmony. However, it is unclear if the observed activity reflects the interpretation of hierarchical structure as the effects are confounded by local irregularity. Using functional magnetic resonance imaging (fMRI), we found that violations to non-local dependencies in nested sequences of three-tone musical motifs in musicians elicited increased activity in the right IFG. This is in contrast to similar studies in language which typically report the left IFG in processing grammatical syntax. Effects of increasing auditory working memory demands are moreover reflected by distributed activity in frontal and parietal regions. Our study therefore demonstrates the role of the right IFG in processing non-local dependencies in music, and suggests that hierarchical processing in different cognitive domains relies on similar mechanisms that are subserved by domain-selective neuronal subpopulations.
Mirror-Like Mechanisms and Music
D'Ausilio, Alessandro
2009-01-01
The neural processes underlying sensory-motor integration have always attracted strong interest. The classic view is that action and perception are two extremes of mental operations. In the past 2 decades, though, a large number of discoveries have indeed refuted such an interpretation in favor of a more integrated view. Specifically, the discovery of mirror neurons in monkey premotor cortex is a rather strong demonstration that sensory and motor processes share the same neural substrates. In fact, these cells show complex sensory-motor properties, such that observed, heard, or executed goal-directed actions could equally activate these neurons. On the other hand, the neuroscience of music has similarly emerged as an active and productive field of research. In fact, music-related behaviors are a useful model of action-perception mechanisms and how they develop through training. More recently, these two lines of research have begun to intersect into a novel branch of research. As a consequence, it has been proposed recently that mirror-like mechanisms might be at the basis of human music perception-production abilities. The scope of the present short review is to set the scientific background for mirror-like mechanisms in music by examining recent published data. PMID:20024515
[MusicPlayTherapy--a parent-child psychotherapy for children 0-4 years old].
Stumptner, Katrin; Thomsen, Cornelia
2005-10-01
The early stage of building up the parent-child relationship is especially important. It is the basis for the child's development of the ability to relate to others and his or her further emotional, social and cognitive development. In this important early phase, various risk factors may alienate parents from their intuitive parental competence towards their children. Such interaction problems indicate an intervention in the form of parent-child psychotherapy. This constitutes an entry point for the concept of MusicPlayTherapy (MPT): the early relationship is characterized mainly by complex communication sequences that address the senses at all levels. Therefore, the MPT concept integrates music as a medium of communication and opens up a space for play that allows emotions and experiences to be expressed. The components of music, such as rhythm, sound, and melody, stimulate babies and toddlers to express themselves, play, and communicate preverbally. We work with the child and a parent in the MusicPlayTherapy sessions. Parents learn to play again and thereby learn to reach their children emotionally and to communicate with them. We complement the therapy sessions with counselling sessions for both parents.
Influence of musical training on sensitivity to temporal fine structure.
Mishra, Srikanta K; Panda, Manasa R; Raj, Swapna
2015-04-01
The objective of this study was to extend the findings that temporal fine structure encoding is altered in musicians by examining sensitivity to temporal fine structure (TFS) in an alternative (non-Western) musician model that is rarely adopted: Indian classical music. Sensitivity to TFS was measured by the ability to discriminate two complex tones that differed in TFS but not in envelope repetition rate. Sixteen South Indian classical (Carnatic) musicians and 28 non-musicians with normal hearing participated in this study. Musicians had a significantly lower relative frequency shift at threshold in the TFS task compared to non-musicians. A significant negative correlation was observed between years of musical experience and relative frequency shift at threshold in the TFS task. Test-retest repeatability of thresholds in the TFS tasks was similar for both musicians and non-musicians. The enhanced performance of the Carnatic-trained musicians suggests that the musician advantage for frequency and harmonicity discrimination is not restricted to training in Western classical music, on which much of the previous research on musical training has narrowly focused. The perceptual judgments obtained from non-musicians were as reliable as those of musicians.
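The study's stimuli are not specified in the abstract. The sketch below illustrates the generic construction behind such TFS tasks, in which every component of a harmonic complex is shifted by the same amount so that the envelope repetition rate is preserved while the temporal fine structure changes; component range, shift, and levels are assumptions.

```python
# Hedged sketch: a pair of complex tones differing in temporal fine structure
# but not in envelope repetition rate (generic construction, not the study's stimuli).
import numpy as np

fs = 44100
t = np.arange(int(0.4 * fs)) / fs
f0 = 200.0            # envelope repetition rate stays at f0 for both tones
dF = 30.0             # frequency shift applied to every component

def complex_tone(shift, harmonics=range(8, 13)):
    """Sum of equal-amplitude components at n*f0 + shift (unresolved harmonic region)."""
    return sum(np.cos(2 * np.pi * (n * f0 + shift) * t) for n in harmonics)

harmonic_tone = complex_tone(0.0)      # components at n*f0: periodic fine structure
shifted_tone = complex_tone(dF)        # same component spacing f0, hence same envelope
                                       # rate, but shifted (inharmonic) fine structure
print(harmonic_tone.shape, shifted_tone.shape)
```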
Lai, Claudia K Y; Lai, Daniel L L; Ho, Jacqueline S C; Wong, Kitty K Y; Cheung, Daphne S K
2016-03-01
The music-with-movement intervention is particularly suitable for people with dementia because their gross motor ability is preserved until the later stage of dementia. This study examines the effect of music-with-movement on reducing anxiety and sleep disturbances and improving the wellbeing of people with dementia. This paper reports the first stage of the study - developing the intervention protocol that staff can use to teach family caregivers. A registered music therapist developed a music-with-movement protocol and taught staff of two social service centers over five weekly 1.5 h sessions, with center-in-charges (social workers and occupational therapists) and our research team joining these sessions to provide comments from their professional perspectives. Each discipline had different expectations about the content; therefore, numerous meetings and discussions were held to bridge these differences and fine-tune the protocol. Few healthcare professionals doubt the merits of interdisciplinary collaboration at all levels of health promotion. In practice, interdisciplinary collaboration is complex and requires commitment. Openness and persistence are required from all stakeholders to achieve a successful intervention for consumers. © 2015 Wiley Publishing Asia Pty Ltd.
Interpreting expressive performance through listener judgments of musical tension
Farbood, Morwaread M.; Upham, Finn
2013-01-01
This study examines listener judgments of musical tension for a recording of a Schubert song and its harmonic reduction. Continuous tension ratings collected in an experiment and quantitative descriptions of the piece's musical features, including dynamics, pitch height, harmony, onset frequency, and tempo, were analyzed from two different angles. In the first part of the analysis, the different processing timescales for the disparate features contributing to tension were explored through the optimization of a predictive tension model. The results revealed that the optimal time window for harmony was considerably longer (~22 s) than for any other feature (~1–4 s). In the second part of the analysis, tension ratings for the individual verses of the song and its harmonic reduction were examined and compared. The results showed that although the average tension ratings between verses were very similar, differences in how and when participants reported tension changes highlighted performance decisions made in the interpretation of the score, ambiguity in the tension implications of the music, and the potential importance of contrast between verses and phrases. Analysis of the tension ratings for the harmonic reduction also provided a new perspective for better understanding how complex musical features inform listener tension judgments. PMID:24416024
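To make the window-optimization step concrete, here is a toy sketch of how a per-feature smoothing window can be chosen: each feature is averaged over a sliding window before being compared with the continuous tension ratings, and the window length giving the best fit is retained. The data, sampling rate and candidate windows are illustrative assumptions, not the authors' model.

```python
import numpy as np

def smooth(x, win_samples):
    kernel = np.ones(win_samples) / win_samples
    return np.convolve(x, kernel, mode="same")

def best_window(feature, tension, fs=2.0, windows_s=(1, 2, 4, 8, 16, 22, 30)):
    """Return the window length (s) whose smoothed feature best tracks the tension ratings."""
    scores = {w: abs(np.corrcoef(smooth(feature, max(1, int(w * fs))), tension)[0, 1])
              for w in windows_s}
    return max(scores, key=scores.get), scores

# Synthetic demonstration: 'tension' is built from a 22 s smoothed version of the
# feature, so the procedure recovers a long window by construction.
rng = np.random.default_rng(0)
harmony = rng.standard_normal(600)                        # e.g. 5 minutes sampled at 2 Hz
tension = smooth(harmony, 44) + 0.3 * rng.standard_normal(600)
print("best window (s):", best_window(harmony, tension)[0])
```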
A multidimensional study of preference judgements for excerpts of music.
Tekman, H G
1998-06-01
Subjects evaluated how well they liked each of 38 short excerpts of Western music and also judged how well each excerpt was described by 23 adjectives. How well an excerpt was liked was negatively correlated with the use of the adjectives 'unpleasant', 'complex', 'tense', and 'dissonant', and positively related to the use of the adjectives 'melodic', 'pleasant', 'sentimental', and 'familiar'. The correlations between the preference judgments of different excerpts were taken as a measure of similarity between the excerpts, and this measure was used in a multidimensional scaling analysis with the purpose of identifying dimensions that may determine preferences for music. In the six-dimensional space generated (stress value .255), coordinates on three of the dimensions could be predicted, in part, by the use of the adjectives 'sentimental', 'fast', and a combination of 'high pitched', 'calm', and 'sad', respectively. Thus, some clues to the factors underlying musical preferences were obtained. Although a large number of dimensions were necessary and not all of them could be interpreted meaningfully here, this method may be developed as a way of conceptualizing musical preferences with a more careful selection of excerpts and a more detailed assessment of their qualities.
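The scaling step described above can be sketched as follows. This is a rough modern re-implementation under assumptions (simulated ratings, a simple 1 − r dissimilarity transform, metric MDS), not the original analysis.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
ratings = rng.integers(1, 8, size=(60, 38)).astype(float)   # simulated: 60 subjects x 38 excerpts

similarity = np.corrcoef(ratings.T)          # inter-excerpt correlations of preference ratings
dissimilarity = 1.0 - similarity             # similarity-to-dissimilarity transform (an assumption)

mds = MDS(n_components=6, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)    # 38 excerpts placed in a six-dimensional space
print(coords.shape, round(mds.stress_, 3))
```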
Early auditory processing in musicians and dancers during a contemporary dance piece
Poikonen, Hanna; Toiviainen, Petri; Tervaniemi, Mari
2016-01-01
The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic and overlapping. Thus, research using natural sounds is crucial for understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared to musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians and laymen when the music is accompanied by a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG), as has already been done with functional magnetic resonance imaging (fMRI), the two brain imaging methods complementing each other. PMID:27611929
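A much-simplified sketch of the general idea, for orientation only: locate rapid increases in timbral brightness (approximated here by the spectral centroid) in the continuous audio, then average single-channel EEG epochs time-locked to those moments. The thresholds, window lengths and synthetic signals are assumptions, not the published pipeline.

```python
import numpy as np

def spectral_centroid(frame, fs):
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    return np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)

def brightness_events(audio, fs, hop_s=0.05, jump_hz=500.0):
    """Times (s) at which the spectral centroid jumps by more than jump_hz between frames."""
    hop = int(hop_s * fs)
    centroids = np.array([spectral_centroid(audio[i:i + hop], fs)
                          for i in range(0, len(audio) - hop, hop)])
    return (np.flatnonzero(np.diff(centroids) > jump_hz) + 1) * hop_s

def erp(eeg, eeg_fs, event_times, pre_s=0.1, post_s=0.4):
    """Average single-channel EEG epochs time-locked to the event times."""
    pre, post = int(pre_s * eeg_fs), int(post_s * eeg_fs)
    epochs = [eeg[int(t * eeg_fs) - pre:int(t * eeg_fs) + post]
              for t in event_times
              if pre <= int(t * eeg_fs) <= len(eeg) - post]
    return np.mean(epochs, axis=0)

# Toy demonstration: a tone that suddenly gains a bright component at 5 s,
# paired with random 'EEG' -- only the mechanics are meaningful here.
fs_audio, fs_eeg = 22050, 250
t = np.arange(0, 10, 1 / fs_audio)
audio = np.sin(2 * np.pi * 220 * t) + (t > 5) * np.sin(2 * np.pi * 2000 * t)
eeg = np.random.default_rng(3).standard_normal(int(10 * fs_eeg))
print(erp(eeg, fs_eeg, brightness_events(audio, fs_audio)).shape)
```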
Four principles of bio-musicology
Fitch, W. Tecumseh
2015-01-01
As a species-typical trait of Homo sapiens, musicality represents a cognitively complex and biologically grounded capacity worthy of intensive empirical investigation. Four principles are suggested here as prerequisites for a successful future discipline of bio-musicology. These involve adopting: (i) a multicomponent approach which recognizes that musicality is built upon a suite of interconnected capacities, of which none is primary; (ii) a pluralistic Tinbergian perspective that addresses and places equal weight on questions of mechanism, ontogeny, phylogeny and function; (iii) a comparative approach, which seeks and investigates animal homologues or analogues of specific components of musicality, wherever they can be found; and (iv) an ecologically motivated perspective, which recognizes the need to study widespread musical behaviours across a range of human cultures (and not focus solely on Western art music or skilled musicians). Given their pervasiveness, dance and music created for dancing should be considered central subcomponents of music, as should folk tunes, work songs, lullabies and children's songs. Although the precise breakdown of capacities required by the multicomponent approach remains open to debate, and different breakdowns may be appropriate to different purposes, I highlight four core components of human musicality—song, drumming, social synchronization and dance—as widespread and pervasive human abilities spanning across cultures, ages and levels of expertise. Each of these has interesting parallels in the animal kingdom (often analogies but in some cases apparent homologies also). Finally, I suggest that the search for universal capacities underlying human musicality, neglected for many years, should be renewed. The broad framework presented here illustrates the potential for a future discipline of bio-musicology as a rich field for interdisciplinary and comparative research. PMID:25646514
James, Clara E; Cereghetti, Donato M; Roullet Tribes, Elodie; Oechslin, Mathias S
2015-01-01
The majority of studies on music processing in children have used simple musical stimuli. Here, primary schoolchildren judged the appropriateness of musical closure in expressive polyphonic music while high-density electroencephalography was recorded. Stimuli either ended regularly or contained refined in-key harmonic transgressions at closure. The children discriminated the transgressions well above chance. Regular and transgressed endings evoked opposite scalp voltage configurations peaking around 400 ms after stimulus onset, with bilateral frontal negativity for regular and centro-posterior negativity (CPN) for transgressed endings. A positive correlation was established between the strength of the CPN response and rater sensitivity (d-prime). We also investigated whether the capacity to discriminate the transgressions was supported by auditory domain-specific or general cognitive mechanisms, and found that working memory capacity predicted transgression discrimination. The latency and distribution of the CPN are reminiscent of the N400, typically observed in response to semantic incongruities in language. Our observation is therefore intriguing, as the CPN occurred here within an intra-musical context, without any symbols referring to the external world. Moreover, the harmonic in-key transgressions that we implemented may be considered syntactical, as they transgress structural rules. Such structural incongruities in music are typically followed by an early right anterior negativity (ERAN) and an N5, but not so here. Putative contributive sources of the CPN were localized in left pre-motor, mid-posterior cingulate and superior parietal regions of the brain that can be linked to integration processing. These results suggest that, at least in children, processing of syntax and meaning may coincide in complex intra-musical contexts. Copyright © 2014 Elsevier Inc. All rights reserved.
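For reference, the sensitivity index (d-prime) mentioned above can be computed from hit and false-alarm counts as sketched below; the counts and the log-linear correction are illustrative choices, not necessarily those used by the authors.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps the z-transform finite for perfect hit/false-alarm rates.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one rater judging transgressed vs. regular endings.
print(round(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38), 2))
```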
Polston, J.E.; Pritchett, C.E.; Sell, E.M.; Glick, S.D.
2012-01-01
Numerous studies utilizing drug self-administration have shown the importance of conditioned cues in maintaining and reinstating addictive behaviors. However, most used simple cues that fail to replicate the complexity of cues present in human craving and addiction. We have recently shown that music can induce behavioral and neurochemical changes in rats following classical conditioning with psychostimulants. However, such effects have yet to be characterized utilizing operant self-administration procedures, particularly with regard to craving and relapse. The goal of the present study was to validate the effectiveness of music as a contextual conditioned stimulus using cocaine in an operant reinstatement model of relapse. Rats were trained to lever press for cocaine with a musical cue, and were subsequently tested during reinstatement sessions to determine how musical conditioning affected drug seeking behavior. Additionally, in vivo microdialysis was used to determine basolateral amygdala involvement during reinstatement. Lastly, tests were conducted to determine whether the putative anti-addictive agent 18-methoxycoronaridine (18-MC) could attenuate cue-induced drug seeking behavior. Our results show that music-conditioned animals exhibited increased drug seeking behaviors when compared to controls during reinstatement test sessions. Furthermore, music-conditioned subjects exhibited increased extracellular dopamine in the basolateral amygdala during reinstatement sessions. Perhaps most importantly, 18-MC blocked musical cue-induced reinstatement. Thus, music can be a powerful contextual conditioned cue in rats, capable of inducing changes in both brain neurochemistry and drug seeking behavior during abstinence. The fact that 18-MC blocked cue-induced reinstatement suggests that α3β4 nicotinic receptors may be involved in the mechanism of craving, and that 18-MC may help prevent relapse to drug addiction in humans. PMID:22885280
The amusic brain: in tune, out of key, and unaware.
Peretz, Isabelle; Brattico, Elvira; Järvenpää, Miika; Tervaniemi, Mari
2009-05-01
Like language, music engagement is universal, complex and present early in life. However, approximately 4% of the general population experiences a lifelong deficit in music perception that cannot be explained by hearing loss, brain damage, intellectual deficiencies or lack of exposure. This musical disorder, commonly known as tone-deafness and now termed congenital amusia, affects mostly the melodic pitch dimension. Congenital amusia is hereditary and is associated with abnormal grey and white matter in the auditory cortex and the inferior frontal cortex. In order to relate these anatomical anomalies to the behavioural expression of the disorder, we measured the electrical brain activity of amusic subjects and matched controls while they monitored melodies for the presence of pitch anomalies. Contrary to current reports, we show that the amusic brain can track quarter-tone pitch differences, exhibiting an early right-lateralized negative brain response. This suggests near-normal neural processing of musical pitch incongruities in congenital amusia. It is important because it reveals that the amusic brain is equipped with the essential neural circuitry to perceive fine-grained pitch differences. What distinguishes the amusic from the normal brain is the limited awareness of this ability and the lack of responsiveness to the semitone changes that violate musical keys. These findings suggest that, in the amusic brain, the neural pitch representation cannot make contact with musical pitch knowledge along the auditory-frontal neural pathway.
Seither-Preisler, Annemarie; Parncutt, Richard; Schneider, Peter
2014-08-13
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered. Copyright © 2014 the authors.
Tools and Methods for Risk Management in Multi-Site Engineering Projects
NASA Astrophysics Data System (ADS)
Zhou, Mingwei; Nemes, Laszlo; Reidsema, Carl; Ahmed, Ammar; Kayis, Berman
In today's highly global business environment, engineering and manufacturing projects often involve two or more geographically dispersed units or departments, research centers or companies. This paper attempts to identify the requirements for risk management in a multi-site engineering project environment, and presents a review of the state-of-the-art tools and methods that can be used to manage risks in multi-site engineering projects. This leads to the development of a risk management roadmap, which will underpin the design and implementation of an intelligent risk mapping system.
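As a hypothetical illustration of the kind of structure such a risk mapping system might build on, the sketch below defines a minimal multi-site risk register scored with the common likelihood-times-impact convention; the fields, sites and scores are invented for the example and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    site: str          # geographically dispersed unit owning the risk
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Design data exchanged in incompatible CAD formats", "Site A", 4, 3),
    Risk("Time-zone gaps delay engineering change approvals", "Site B", 3, 4),
    Risk("Supplier qualification differs between regions", "Site C", 2, 5),
]

# Rank risks across all sites so mitigation effort goes to the highest scores first.
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.site}: {r.description} (score {r.score})")
```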
Borek, Weronika E.; Groocock, Lynda M.; Samejima, Itaru; Zou, Juan; de Lima Alves, Flavia; Rappsilber, Juri; Sawin, Kenneth E.
2015-01-01
Microtubule nucleation is highly regulated during the eukaryotic cell cycle, but the underlying molecular mechanisms are largely unknown. During mitosis in fission yeast Schizosaccharomyces pombe, cytoplasmic microtubule nucleation ceases simultaneously with intranuclear mitotic spindle assembly. Cytoplasmic nucleation depends on the Mto1/2 complex, which binds and activates the γ-tubulin complex and also recruits the γ-tubulin complex to both centrosomal (spindle pole body) and non-centrosomal sites. Here we show that the Mto1/2 complex disassembles during mitosis, coincident with hyperphosphorylation of Mto2 protein. By mapping and mutating multiple Mto2 phosphorylation sites, we generate mto2-phosphomutant strains with enhanced Mto1/2 complex stability, interaction with the γ-tubulin complex and microtubule nucleation activity. A mutant with 24 phosphorylation sites mutated to alanine, mto2[24A], retains interphase-like behaviour even in mitotic cells. This provides a molecular-level understanding of how phosphorylation ‘switches off' microtubule nucleation complexes during the cell cycle and, more broadly, illuminates mechanisms regulating non-centrosomal microtubule nucleation. PMID:26243668
Neural network retuning and neural predictors of learning success associated with cello training.
Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J
2018-06-26
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument, in which the mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly present during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the course of training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.
Effects of music engagement on responses to painful stimulation.
Bradshaw, David H; Chapman, C Richard; Jacobson, Robert C; Donaldson, Gary W
2012-06-01
We propose a theoretical framework for the behavioral modulation of pain based on constructivism, positing that task engagement, such as listening for errors in a musical passage, can establish a construction of reality that effectively replaces pain as a competing construction. Graded engagement produces graded reductions in pain, as indicated by reduced psychophysiological arousal and lower subjective pain reports. Fifty-three healthy volunteers with normal hearing participated in 4 music listening conditions consisting of passive listening (no task) or performing an error detection task varying in signal complexity and task difficulty. During all conditions, participants received normally painful fingertip shocks varying in intensity while stimulus-evoked potentials (SEP), pupil dilation responses (PDR), and retrospective pain reports were obtained. SEP and PDR increased with increasing stimulus intensity. Task performance decreased with increasing task difficulty. Mixed model analyses, adjusted for habituation/sensitization and repeated measures within person, revealed significant quadratic trends for SEP and pain report (P_change < 0.001), with large reductions from no task to easy task and smaller graded reductions corresponding to increasing task difficulty/complexity. PDR decreased linearly (P_change < 0.001) with graded task condition. We infer that these graded reductions in indicators of central and peripheral arousal and in reported pain correspond to graded increases in engagement in the music listening task. Engaging activities may prevent pain by creating competing constructions of reality that draw on the same processing resources as pain. Better understanding of these processes will advance the development of more effective pain modulation through improved manipulation of engagement strategies.
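A schematic example of the mixed-model idea described above, using simulated data and a random intercept per participant; the model specification, variable names and effect sizes are assumptions for illustration and do not reproduce the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one row per participant x task condition, with an
# SEP amplitude that drops sharply from 'no task' to 'easy task' and then changes
# more gradually with difficulty (the quadratic shape reported above).
rng = np.random.default_rng(2)
rows = []
for subject in range(53):
    baseline = rng.normal(10.0, 1.0)
    for task_level in range(4):          # 0 = passive listening, 1-3 = increasing difficulty
        sep = baseline - 3.0 * (task_level > 0) + 0.4 * task_level + rng.normal(0.0, 0.5)
        rows.append({"subject": subject, "task_level": task_level, "sep": sep})
df = pd.DataFrame(rows)

# A random intercept per participant handles the repeated measures; the
# I(task_level ** 2) term tests the quadratic trend.
model = smf.mixedlm("sep ~ task_level + I(task_level ** 2)", data=df, groups=df["subject"])
print(model.fit().summary())
```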
Auditory Scene Analysis: The Sweet Music of Ambiguity
Pressnitzer, Daniel; Suied, Clara; Shamma, Shihab A.
2011-01-01
In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neurophysiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music. PMID:22174701
Responses to music and movement in the development of children with Down's syndrome.
Stratford, B; Ching, E Y
1989-02-01
Physical responses to rhythmic stimuli and music of different degrees of complexity were registered from 25 children with Down's syndrome and 25 other mentally handicapped children. Required performances were taught and then recorded on video-tape, after which they were assessed by experienced teacher/judges. Whilst there were no overall significant differences between the groups, important differences were detected between the children in different schools, with attendant implications for differential treatment. Apart from an overall and general assessment of performance, analysis was made of demographic variables, for example sex, intelligence, age and social development. It is concluded that specific teaching approaches can significantly affect the development of children with Down's syndrome in such creative aspects of the curriculum as music, movement and dance.
Generaal, Ellen; Vogelzangs, Nicole; Macfarlane, Gary J; Geenen, Rinie; Smit, Johannes H; de Geus, Eco J C N; Dekker, Joost; Penninx, Brenda W J H
2017-02-01
Dysfunction of biological stress systems and adverse life events, independently and in interaction, have been hypothesized to predict the persistence of chronic pain. Conversely, these factors may hamper the improvement of chronic pain. Longitudinal evidence is currently lacking. We examined whether: 1) the function of biological stress systems, 2) adverse life events, and 3) their combination predict the improvement of chronic multisite musculoskeletal pain. Subjects of the Netherlands Study of Depression and Anxiety (NESDA) with chronic multisite musculoskeletal pain at baseline (N = 665) were followed up 2, 4, and 6 years later. The Chronic Pain Grade Questionnaire was used to determine improvement (no longer meeting the criteria) of chronic multisite musculoskeletal pain at follow-up. Baseline assessment of the biological stress systems included function of the hypothalamic-pituitary-adrenal (HPA) axis (1-hour cortisol awakening response, evening level, and post-dexamethasone level), the immune system (basal and lipopolysaccharide-stimulated inflammatory markers), and the autonomic nervous system (heart rate, pre-ejection period, SD of the normal-to-normal interval, and respiratory sinus arrhythmia). The number of adverse life events was assessed at baseline and 2-year follow-up using the List of Threatening Events Questionnaire. We showed that HPA axis, immune system, and autonomic nervous system functioning and adverse life events were not associated with the improvement of chronic multisite musculoskeletal pain, either as main effects or in interaction. This longitudinal study thus could not confirm that biological stress system dysfunction and adverse life events affect the course of chronic multisite musculoskeletal pain over 6 years of follow-up. Other determinants should be considered in future research to identify in which persons pain symptoms will improve. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.