A Proposal of a Color Music Notation System on a Single Melody for Music Beginners
ERIC Educational Resources Information Center
Kuo, Yi-Ting; Chuang, Ming-Chuen
2013-01-01
Music teachers often encounter obstacles in teaching music reading to beginners. Conventional notational symbols require beginners to spend a significant amount of time on memorization, which discourages learning at an early stage. This article proposes a newly developed color music notation system that may improve the recognition of the staff and the…
The role of line junctions in object recognition: The case of reading musical notation.
Wong, Yetta Kwailing; Wong, Alan C-N
2018-04-30
Previous work has shown that line junctions are informative features for visual perception of objects, letters, and words. However, the sources of such sensitivity and their generalizability to other object categories are largely unclear. We addressed these questions by studying perceptual expertise in reading musical notation, a domain in which individuals with different levels of expertise are readily available. We observed that removing line junctions created by the contact between musical notes and staff lines selectively impaired recognition performance in experts and intermediate readers, but not in novices. The degree of performance impairment was predicted by individual fluency in reading musical notation. Our findings suggest that line junctions provide diagnostic information about object identity across various categories, including musical notation. However, human sensitivity to line junctions does not readily transfer from familiar to unfamiliar object categories, and has to be acquired through perceptual experience with the specific objects.
Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.
Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel
2014-08-01
Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.
Exploring the association between visual perception abilities and reading of musical notation.
Lee, Horng-Yih
2012-06-01
In the reading of music, the acquisition of pitch information depends primarily upon the spatial position of notes as well as upon an individual's spatial processing ability. This study investigated the relationship between the ability to read single notes and visual-spatial ability. Participants with high and low single-note reading abilities were differentiated based upon differences in musical notation-reading ability; their spatial processing and object recognition abilities were then assessed. It was found that the group with lower note-reading abilities made more errors in the mental rotation task than did the group with higher note-reading abilities. In contrast, there was no significant difference between the two groups in the object recognition task. These results suggest that note-reading may be related to visual-spatial processing abilities, and not to an individual's ability with object recognition.
The Mental Representation of Music Notation: Notational Audiation
ERIC Educational Resources Information Center
Brodsky, Warren; Kessler, Yoav; Rubinstein, Bat-Sheva; Ginsborg, Jane; Henik, Avishai
2008-01-01
This study investigated the mental representation of music notation. Notational audiation is the ability to internally "hear" the music one is reading before physically hearing it performed on an instrument. In earlier studies, the authors claimed that this process engages music imagery contingent on subvocal silent singing. This study refines the…
ERIC Educational Resources Information Center
Skapski, George J.
As an innovative aid to the study of music, recordings were made of musical performances and later synchronized with musical notations. To make the structures of the music more readily visible, and after experimenting with the use of staff notation, the author used his own "Nota-Graph" notation system. In this notation, there are…
Music-Notation Searching and Digital Libraries.
ERIC Educational Resources Information Center
Byrd, Donald
Almost all work on music information retrieval to date has concentrated on music in the audio and event (normally MIDI) domains. However, music in the form of notation, especially Conventional Music Notation (CMN), is of much interest to musically trained persons, both amateurs and professionals, and searching CMN has great value for digital music…
Writing about Music: The Selection and Arrangement of Notation in Jazz Students' Written Texts
ERIC Educational Resources Information Center
Martin, Jodie L.
2018-01-01
Music notation is intrinsic in the composition and performance of Western art music and also in its analysis and research. The process of writing about music remains underexplored, in particular how excerpts of music notation are selected and arranged in a written text, and how that text describes and contextualises the excerpts. This article…
Li, Sara Tze Kwan; Hsiao, Janet Hui-Wen
2018-07-01
Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved.
Music-reading training alleviates crowding with musical notation.
Wong, Yetta Kwailing; Wong, Alan C-N
2016-06-01
Crowding refers to the disrupted recognition of an object by nearby distractors. Prior work has shown that real-world music-reading experts experience reduced crowding specifically for musical stimuli. However, it is unclear whether music-reading training reduced the magnitude of crowding or whether individuals showing less crowding are more likely to learn and excel in music reading later. To examine the first possibility, we tested whether crowding can be alleviated by music-reading training in the laboratory. Intermediate-level music readers completed 8 hr of music-reading training within 2 weeks. Their threshold duration for reading musical notes dropped by 44.1% after training to a level comparable with that of extant expert music readers. Importantly, crowding was reduced with musical stimuli but not with the nonmusical stimuli Landolt Cs. In sum, the reduced crowding for musical stimuli in expert music readers can be explained by music-reading training.
Linking Different Cultures by Computers: A Study of Computer-Assisted Music Notation Instruction.
ERIC Educational Resources Information Center
Chen, Steve Shihong; Dennis, J. Richard
1993-01-01
Describes a study that investigated the feasibility of using computers to teach music notation systems to Chinese students, as well as to help Western educators study Chinese music and its number notation system. Topics discussed include students' learning sequences; HyperCard software; hypermedia and graphic hypertext indexing; and the…
Using Graphical Notations to Assess Children's Experiencing of Simple and Complex Musical Fragments
ERIC Educational Resources Information Center
Verschaffel, Lieven; Reybrouck, Mark; Janssens, Marjan; Van Dooren, Wim
2010-01-01
The aim of this study was to analyze children's graphical notations as external representations of their experiencing when listening to simple sonic stimuli and complex musical fragments. More specifically, we assessed the impact of four factors on children's notations: age, musical background, complexity of the fragment, and most salient…
Teaching Movable "Du": Guidelines for Developing Enrhythmic Reading Skills
ERIC Educational Resources Information Center
Dalby, Bruce
2015-01-01
Reading music notation with fluency is a complex skill requiring well-founded instruction by the music teacher and diligent practice on the part of the learner. The task is complicated by the fact that there are multiple ways to notate a given rhythm. Beginning music students typically have their first encounter with enrhythmic notation when they…
Effect of Color-Coded Notation on Music Achievement of Elementary Instrumental Students.
ERIC Educational Resources Information Center
Rogers, George L.
1991-01-01
Presents results of a study of color-coded notation to teach music reading to instrumental students. Finds no clear evidence that color-coded notation enhances achievement on performing by memory, sight-reading, or note naming. Suggests that some students depended on the color-coding and were unable to read uncolored notation well. (DK)
The Moon System Adapted for Musical Notation.
ERIC Educational Resources Information Center
Jackson, Michael
1987-01-01
A means is presented for using William Moon's embossed symbols to represent musical notation for blind individuals, as an alternative to braille notation. The proposed system includes pitch symbols, octave indicators, duration symbols, accidentals, key signatures, rests, stress symbols, ornaments, and other symbols. (Author/JDD)
Cognitive Load Theory and Music Instruction
ERIC Educational Resources Information Center
Owens, Paul; Sweller, John
2008-01-01
In two experiments, the principles of cognitive load theory were applied to the design of alternatives to conventional music instruction hypothesised to facilitate learning. Experiment 1 demonstrated that spatial integration of visual text and musical notation, and dual-modal delivery of auditory text and musical notation, were superior to the…
African Oral Tradition Literacy.
ERIC Educational Resources Information Center
Green, Doris
1985-01-01
Presents the basic principles of two systems for notating African music and dance: Labanotation (created to record and analyze movements) and Greenotation (created to notate musical instruments of Africa and to parallel Labanotation whereby both music and dance are incorporated into one integrated score). (KH)
Using music[al] knowledge to represent expressions of emotions.
Alexander, Stewart C; Garner, David Kirkland; Somoroff, Matthew; Gramling, David J; Norton, Sally A; Gramling, Robert
2015-11-01
Being able to identify expressions of emotion is crucial to effective clinical communication research. However, traditional linguistic coding systems often cannot represent emotions that are expressed nonlexically or phonologically (i.e., not through words themselves but through vocal pitch, speed/rhythm/tempo, and volume). Using an audio recording of a palliative care consultation in the natural hospital setting, two experienced music scholars employed Western musical notation, as well as the graphic realization of a digital audio program (piano roll visualization), to visually represent the sonic features of conversation where a patient has an emotional "choke" moment. Western musical notation showed the ways that changes in pitch and rate correspond to the patient's emotion: rising sharply in intensity before slowly fading away. Piano roll visualization is a helpful supplement. Using musical notation to illustrate palliative care conversations in the hospital setting can render visible for analysis several aspects of emotional expression that researchers otherwise experience as intuitive or subjective. Various forms and formats of musical notation techniques and sonic visualization technologies should be considered as fruitful and complementary alternatives to traditional coding tools in clinical communications research. Musical notation offers opportunity for both researchers and learners to "see" how communication evolves in clinical encounters, particularly where the lexical and phonological features of interpersonal communication are concordant and discordant with one another.
Hallucinations of musical notation.
Sacks, Oliver
2013-07-01
Hallucinations of musical notation may occur in a variety of conditions, including Charles Bonnet syndrome, Parkinson's disease, fever, intoxications, hypnagogic and hypnopompic states. Eight cases are described here, and their possible cerebral mechanisms discussed.
Effects of Music Notation Reinforcement on Aural Memory for Melodies
ERIC Educational Resources Information Center
Buonviri, Nathan
2015-01-01
The purpose of this study was to investigate effects of music notation reinforcement on aural memory for melodies. Participants were 41 undergraduate and graduate music majors in a within-subjects design. Experimental trials tested melodic memory through a sequence of target melodies, distraction melodies, and matched and unmatched answer choices.…
A Multimodal Neural Network Recruited by Expertise with Musical Notation
ERIC Educational Resources Information Center
Wong, Yetta Kwailing; Gauthier, Isabel
2010-01-01
Prior neuroimaging work on visual perceptual expertise has focused on changes in the visual system, ignoring possible effects of acquiring expert visual skills in nonvisual areas. We investigated expertise for reading musical notation, a skill likely to be associated with multimodal abilities. We compared brain activity in music-reading experts…
Guitar Scales in Music Notation and Tablature Diagrams.
ERIC Educational Resources Information Center
Hammer, Petra
This study guide was designed to help high school students learn the basic skills in classical guitar playing, technique, fingerboard knowledge, and musicianship. The introduction describes how to read the music notation that is presented in traditional music form and also in tablature diagrams showing finger positioning on the guitar neck.…
ERIC Educational Resources Information Center
Elkoshi, Rivka
2007-01-01
Facing the ambiguous status of in-school music literacy, this follow-up eight-year study aims to touch on the effects of traditional staff notation (SN) learning on student's intuitive symbolizing behavior and musical perception. Subjects were 47 second-graders attending a religious Jewish school in Israel. One "pre-literate" meeting, in…
Preserving Musicality through Pictures: A Linguistic Pathway to Conventional Notation
ERIC Educational Resources Information Center
Nordquist, Alice L.
2016-01-01
The natural musicality so often present in children's singing can begin to fade as the focus of a lesson shifts to the process of reading and writing conventional notation symbols. Approaching the study of music from a linguistic perspective preserves the pace and flow that is inherent in spoken language and song. SongWorks teaching practices…
Position coding effects in a 2D scenario: the case of musical notation.
Perea, Manuel; García-Chamorro, Cristina; Centelles, Arnau; Jiménez, María
2013-07-01
How does the cognitive system encode the location of objects in a visual scene? In the past decade, this question has attracted much attention in the field of visual-word recognition (e.g., "jugde" is perceptually very close to "judge"). Letter transposition effects have been explained in terms of perceptual uncertainty or shared "open bigrams". In the present study, we focus on note position coding in music reading (i.e., a 2D scenario). The usual way to display music is the staff (i.e., a set of 5 horizontal lines and their resultant 4 spaces). When reading musical notation, it is critical to identify not only each note (temporal duration), but also its pitch (y-axis) and its temporal sequence (x-axis). To examine note position coding, we employed a same-different task in which two briefly and consecutively presented staves contained four notes. The experiment was conducted with experts (musicians) and non-experts (non-musicians). For the "different" trials, the critical conditions involved staves in which two internal notes were switched vertically, horizontally, or fully transposed, as well as the appropriate control conditions. Results revealed that note position coding was only approximate at the early stages of processing and that this encoding process was modulated by expertise. We examine the implications of these findings for models of object position encoding.
Alternatives to Traditional Notation.
ERIC Educational Resources Information Center
Gaare, Mark
1997-01-01
Provides an introduction to and overview of alternative music notation systems. Describes guitar tablature, accordion tablature, klavarskribo (a keyboard notational system developed by Cornelius Pot, a Dutch engineer), and the digital piano roll. Briefly discusses the history of notation reform and current efforts. Includes examples from scores. (MJP)
Music, Mechanism, and the “Sonic Turn” in Physical Diagnosis
Pesic, Peter
2016-01-01
The sonic diagnostic techniques of percussion and mediate auscultation advocated by Leopold von Auenbrugger and R. T. H. Laennec developed within larger musical contexts of practice, notation, and epistemology. Earlier, François-Nicolas Marquet proposed a musical notation of pulse that connected felt pulsation with heard music. Though contemporary vitalists rejected Marquet's work, mechanists such as Albrecht von Haller included it in the larger discourse about the physiological manifestations of bodily fluids and fibers. Educated in that mechanistic physiology, Auenbrugger used musical vocabulary to present his work on thoracic percussion; Laennec's musical experience shaped his exploration of the new timbres involved in mediate auscultation.
Perceptions of Schooling, Pedagogy and Notation in the Lives of Visually-Impaired Musicians
ERIC Educational Resources Information Center
Baker, David; Green, Lucy
2016-01-01
This article discusses findings on schooling, pedagogy and notation in the life-experiences of amateur and professional visually-impaired musicians/music teachers, and the professional experiences of sighted music teachers who work with visually-impaired learners. The study formed part of a broader UK Arts and Humanities Research Council funded…
Recombinative Generalization: An Exploratory Study in Musical Reading
Perez, William Ferreira; de Rose, Julio C
2010-01-01
The present study aimed to extend the findings of recombinative generalization research in alphabetical reading and spelling to the context of musical reading. One participant was taught to respond discriminatively to six two-note sequences, choosing the corresponding notation on the staff in the presence of each sequence. When novel three- and four-note sequences were presented, she selected the corresponding notation. These results suggest the generality of previous research to the context of musical teaching.
Children's Invented Notations and Verbal Responses to a Piano Work by Claude Debussy
ERIC Educational Resources Information Center
Elkoshi, Rivka
2015-01-01
This study considers the way children listen to classical music composed for them and the effect of age on their spontaneous invented notations and verbal responses. The musical selection is a piano piece for children by Claude Debussy: "Jimbo's Lullaby" from "Children's Corner". Two hundred and nine children aged 4-9.5 years…
How One Class with One Computer Composed Music
ERIC Educational Resources Information Center
Siegel, Jack
2004-01-01
Music composition is a rewarding activity for students. Through composition, teachers not only address National Standard 4 (composing and arranging music within specified guidelines), but also cover other areas of the music curriculum such as singing, notation, improvisation, form, style, tempo, dynamics, music vocabulary, and assessment. During…
Speechant: A Vowel Notation System to Teach English Pronunciation
ERIC Educational Resources Information Center
dos Reis, Jorge; Hazan, Valerie
2012-01-01
This paper introduces a new vowel notation system aimed at aiding the teaching of English pronunciation. This notation system, designed as an enhancement to orthographic text, uses concepts borrowed from the representation of musical notes and is linked to the acoustic characteristics of vowel sounds. Vowel timbre is…
Advances in Music-Reading Research
ERIC Educational Resources Information Center
Gudmundsdottir, Helga Rut
2010-01-01
The purpose of this paper is to construct a comprehensive review of the research literature in the reading of western staff notation. Studies in music perception, music cognition, music education and music neurology are cited. The aim is to establish current knowledge in music-reading acquisition and what is needed for further progress in this…
Music, Technology, and an Evolving Curriculum.
ERIC Educational Resources Information Center
Moore, Brian
1992-01-01
Mechanical examples of musical technology, like the Steinway piano, are well known and accepted. Use of computers and electronic technology is the next logical step in developing the art of music. MIDI (Musical Instrument Digital Interface) is explained, along with digital devices (such as synthesizers, sequencers, music notation software, multimedia,…
Publishing and Journalism Careers
ERIC Educational Resources Information Center
Reed, Alfred; And Others
1977-01-01
If you like to work with words and notational symbols--or with describing, selecting, managing, and distributing the words and music of other people--then journalism or publishing as a whole may be your bailiwick. Describes the positions of music editor, music publisher, magazine/book editor, music critic, and freelance music writer. (Editor/RK)
ERIC Educational Resources Information Center
Marshall, Herbert D.
2006-01-01
The article offers tips on introducing percussion activities in elementary music class. Percussion equipment should be treated as musical instruments and not toys, teaching correct names, playing techniques and notation for the instruments. Active listening experiences for students should be planned, including band music. Band music incorporates…
Interactive Courseware Standards
1992-07-01
…music industry standard provides data formats and transmission specifications for musical notation. Joint Photographic Experts Group (JPEG). This…has been used in the music industry for several years, especially for electronically programmable keyboards and instruments. The video compression…
Music Learning: Greater than the Sum of Its Parts.
ERIC Educational Resources Information Center
Zentz, Donald M.
1992-01-01
Discusses how Gestalt principles are especially well suited to teaching music. Identifies the laws of proximity, similarity, common direction, and simplicity in the notation system. Suggests that music teachers use these principles by following a logical progression to teach students to improve musical skills, solve problems, and think in…
Learning from Looking at Sound: Using Multimedia Spectrograms to Explore World Music
ERIC Educational Resources Information Center
Thibeault, Matthew D.
2011-01-01
This article details the use of multimedia spectrogram displays for visualizing and understanding music. A section on foundational considerations presents similarities and differences between Western musical scores and spectrograms, in particular the benefit in avoiding Western notation when using music from a culture where representation through…
Playing by Ear: Foundation or Frill?
ERIC Educational Resources Information Center
Woody, Robert H.
2012-01-01
Many people divide musicians into two types: those who can read music and those who play by ear. Formal music education tends to place great emphasis on producing musically literate performers but devotes much less attention to teaching students to make music without notation. Some would suggest that playing by ear is a specialized skill that is…
Junior High Instrumental Music: Wind-Percussion Strings. [Curriculum Guide.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton. Curriculum Design Branch.
This curriculum guide outlines a secondary music program for Alberta, Canada, that aims: (1) to develop skills in listening, performing, and using notation; (2) to encourage students to strive for musical excellence; (3) to enable students to appreciate music; (4) to foster self-expression and creativity; and (5) to make students aware of the…
Kodaly, Literacy, and the Brain: Preparing Young Music Students to Read Pitch on the Staff
ERIC Educational Resources Information Center
Jacobi, Bonnie S.
2012-01-01
The principles of Hungarian music educator Zoltan Kodaly can be particularly useful not only in teaching children how to read music notation but also in creating curiosity and enjoyment for reading music. Many of Kodaly's ideas pertaining to music literacy have been echoed by educators such as Jerome Bruner and Edwin Gordon, as well as current…
Representation of visual symbols in the visual word processing network.
Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S
2015-03-01
Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyri, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation.
Improving Music Skills of Elementary Students with Notation-Reading and Sight-Singing.
ERIC Educational Resources Information Center
Harding, Mary H.
A music educator designed a curriculum for musically unskilled elementary school students, based on the methods of Kodaly and Orff, the philosophy of Warrener, and traditional music education concepts. A heterogeneous group of 606 second- through sixth-grade students in 4 schools participated in implementation of the curriculum.…
Academic Music: Music Instruction to Engage Third-Grade Students in Learning Basic Fraction Concepts
ERIC Educational Resources Information Center
Courey, Susan Joan; Balogh, Endre; Siker, Jody Rebecca; Paik, Jae
2012-01-01
This study examined the effects of an academic music intervention on conceptual understanding of music notation, fraction symbols, fraction size, and equivalency of third graders from a multicultural, mixed socio-economic public school setting. Students (N = 67) were assigned by class to their general education mathematics program or to receive…
Music: A Bridge between Two Cultures
ERIC Educational Resources Information Center
Espinosa, Alma
2007-01-01
Certain aspects of European art music occupy a middle ground between the two cultures described by C. P. Snow almost fifty years ago. Analogies exist not only between mathematics and the ratios underlying musical notation and intervals (i.e., the distance between pitches) but also between computer science and counterpoint (simultaneous melodies):…
Understanding Charts and Graphs.
1987-07-28
…notational. English, then, is obviously not a notational system because ambiguous words or sentences are possible, whereas musical notation is notational…how lines and regions are detected and organized; these principles grow out of discoveries about human visual information processing. A syntactic…themselves name other colors (e.g., the word "red" is printed in blue ink; this is known as the "Stroop effect"). Similarly, if "left" and "right" are…
Visual processing of music notation: a study of event-related potentials.
Lee, Horng-Yih; Wang, Yu-Sin
2011-04-01
In reading music, the acquisition of pitch information depends mostly on the spatial position of notes, and hence on spatial processing, whereas the acquisition of temporal information depends mostly on the visual features of notes and on object recognition. This study used both electrophysiological and behavioral methods to compare the processing of pitch and duration in reading single musical notes. It was observed that in the early stage of note reading, identification of pitch elicited greater N1 and N2 amplitudes than identification of duration at the parietal lobe electrodes. In the later stages of note reading, identifying pitch elicited a greater negative slow wave at parietal electrodes than did identifying note duration. The sustained contribution of parietal processes for pitch suggests that the dorsal pathway is essential for pitch processing. However, the duration task did not elicit greater amplitude of any early ERP component than the pitch task at temporal electrodes. Accordingly, a double dissociation, with the dorsal visual stream involved in spatial pitch processing and the ventral visual stream in the processing of note durations, was not observed.
ERIC Educational Resources Information Center
Music Educators National Conference, Reston, VA.
This symposium focused principally on a transcultural approach to music teaching and learning. After an introductory chapter, contents (1) compare the music and dance of the Hawaiian and Hopi peoples; (2) explore the role of the music teacher in multi-cultural societies; (3) present a pictorial notation designed for the transmission of traditional…
Wong, Yetta Kwailing; Gauthier, Isabel
2010-12-01
Holistic processing (i.e., the tendency to process objects as wholes) is associated with face perception and also with expertise individuating novel objects. Surprisingly, recent work also reveals holistic effects in novice observers. It is unclear whether the same mechanisms support holistic effects in experts and in novices. In the present study, we measured holistic processing of music sequences using a selective attention task in participants who vary in music-reading expertise. We found that holistic effects were strategic in novices but were relatively automatic in experts. Correlational analyses revealed that individual holistic effects were predicted by both individual music-reading ability and neural responses for musical notation in the right fusiform face area (rFFA), but in opposite directions for experts and novices, suggesting that holistic effects in the two groups may be of different natures. To characterize expert perception, it is important not only to measure the tendency to process objects as wholes, but also to test whether this effect is dependent on task constraints.
Markov source model for printed music decoding
NASA Astrophysics Data System (ADS)
Kopec, Gary E.; Chou, Philip A.; Maltz, David A.
1995-03-01
This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.
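As a rough illustration of the finite-state idea behind such a decoder (a sketch only, not the Adobe Sonata-based model described above; symbols and probabilities here are invented), a Markov source assigns probabilities to sequences of notation symbols, which a decoder can then combine with image evidence:

```python
# Toy Markov source over music-notation symbols: scores how plausible
# a symbol sequence is, i.e. the language-model half of a decoder.
# The symbol set and transition probabilities are illustrative only.
import math

TRANSITIONS = {
    "clef":     {"time_sig": 0.9, "note": 0.1},
    "time_sig": {"note": 1.0},
    "note":     {"note": 0.7, "barline": 0.3},
    "barline":  {"note": 0.9, "clef": 0.1},
}

def log_prob(sequence):
    """Log-probability of a symbol sequence under the Markov source."""
    lp = 0.0
    for prev, cur in zip(sequence, sequence[1:]):
        lp += math.log(TRANSITIONS[prev].get(cur, 1e-12))
    return lp

print(log_prob(["clef", "time_sig", "note", "note", "barline"]))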
ERIC Educational Resources Information Center
McCusker, Joan
A qualitative study was conducted in the winter of 2000 with children enrolled in a Clef Club, the fourth level of an early childhood music program sponsored by the Eastman School's Community Education Division (Rochester, NY). Eleven participants, ages 4.7 to 6.6, enrolled in 3 sections of the 10-week program taught by the researcher. Classroom…
Using equivalence-based instruction to teach piano skills to college students.
Griffith, Kristin R; Ramos, Amber L; Hill, Kelli E; Miguel, Caio F
2018-04-01
The purpose of the current study was to evaluate the effects of equivalence-based instruction (EBI) on the emergence of basic music reading and piano playing skills. Six female college students learned to identify three musical chord notations given their respective dictated names. Participants also learned to play chords on the piano following the dictated name of the chord, and to play the chords to a song on a keyboard. Results are consistent with past research, in that stimuli became substitutable for each other and acquired a common behavioral function. Data suggest that EBI was an effective and efficient procedure to teach adults to read musical notation, as well as play chords and a song on a piano keyboard.
Outside the Framework of Thinkable Thought: The Modern Orchestration Project
ERIC Educational Resources Information Center
Gattegno, Eliot Aron
2010-01-01
In today's world of too much information, context--not content--is king. This proposal is for the development of an unparalleled sonic analysis tool that converts audio files into musical score notation and a Web site (API) to collect, manage, and preserve information about the musical sounds analyzed, as well as music scores, videos, and articles…
Navigating the Maze of Music Rights
ERIC Educational Resources Information Center
DuBoff, Leonard D.
2007-01-01
Music copyright is one of the most complex areas of intellectual property law. To begin with, there is a copyright in notated music and a copyright in accompanying lyrics. When the piece is performed, there is a copyright in the performance that is separate and apart from the copyright in the underlying work. If a sound recording is used in…
Music behind Scores: Case Study of Learning Improvisation with "Playback Orchestra" Method
ERIC Educational Resources Information Center
Juntunen, P.; Ruokonen, I.; Ruismäki, H.
2015-01-01
For music students in the early stages of learning, the music may seem to be hidden behind the scores. To support home practising, Juntunen has created the "Playback Orchestra" method with which the students can practise with the support of the notation program playback of the full orchestra. The results of testing the method with…
Analysis of musical expression in audio signals
NASA Astrophysics Data System (ADS)
Dixon, Simon
2003-01-01
In western art music, composers communicate their work to performers via a standard notation which specifies the musical pitches and relative timings of notes. This notation may also include some higher level information such as variations in the dynamics, tempo and timing. Famous performers are characterised by their expressive interpretation, the ability to convey structural and emotive information within the given framework. The majority of work on audio content analysis focusses on retrieving score-level information; this paper reports on the extraction of parameters describing the performance, a task which requires a much higher degree of accuracy. Two systems are presented: BeatRoot, an off-line beat tracking system which finds the times of musical beats and tracks changes in tempo throughout a performance, and the Performance Worm, a system which provides a real-time visualisation of the two most important expressive dimensions, tempo and dynamics. Both of these systems are being used to process data for a large-scale study of musical expression in classical and romantic piano performance, which uses artificial intelligence (machine learning) techniques to discover fundamental patterns or principles governing expressive performance.
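The tempo-tracking side of such systems ultimately rests on inferring the beat period from note onset times. The sketch below estimates a single global tempo from inter-onset intervals; it is a simplified illustration under the assumption that most onsets fall on the beat, not the actual BeatRoot algorithm, which clusters intervals and tracks tempo changes over time:

```python
import numpy as np

def estimate_tempo(onsets_sec: np.ndarray) -> float:
    """Estimate tempo (BPM) from the median inter-onset interval.

    Assumes most onsets fall on the beat; a real beat tracker must
    also handle sub-beat onsets, syncopation, and tempo drift.
    """
    ioi = np.diff(np.sort(onsets_sec))
    beat_period = float(np.median(ioi))
    return 60.0 / beat_period

onsets = np.array([0.00, 0.50, 1.01, 1.49, 2.00, 2.50])  # ~0.5 s apart
print(round(estimate_tempo(onsets)))  # 120 BPM
```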
ERIC Educational Resources Information Center
Higgins, William R.
1987-01-01
Reviews a dissertation in which the problems of real-time pitch detection by computer were studied in an attempt to develop a learning tool for sightsinging students. Specialized hardware and software were developed to discriminate aural pitches and to display them in real-time using standard notation. (BSR)
“Il flauto magico” still works: Mozart’s secret of ventilation
2013-01-01
Background: Synchronisation/coupling between respiratory patterns and musical structure. Methods: Healthy professional musicians and members of the audience were studied during a performance of W.A. Mozart's Piano Concerto KV 449. Electrocardiogram (ECG)/Heart Rate Variability (HRV) data recording (Schiller: Medilog®AR12, ECG channels: 3, sampling rate: 4096 Hz, 16 bit) was carried out, and a simultaneous synchronized high-definition video/audio recording was made. The breathing-specific data were subsequently extracted from the HRV data using electrocardiogram-derived respiration (EDR; software: Schiller medilog®DARWIN) and overlaid onto the musical score using FINALE 2011 notation software and the GIMP 2.0 graphics programme. The musical score was graphically modified so that the time code of the breathing signals coincided exactly with the notated musical elements. Thus a direct relationship could be produced between the musicians' breathing activity and the musical texture. In parallel with the medical/technical analysis, a music analysis of the score was conducted with regard to the style and formal shaping of the composition. Results: It was found that there are two archetypes of ideally typical breathing behaviour in professional musicians that either drive the musical creation, performance and experience or are driven by the musical structure itself. These archetypes also give rise to various states of synchronisation and regulation between performers, audience and the musical structure. Conclusions: There are two archetypes of musically-induced breathing which not only represent the identity of music and human physiology but also offer new approaches for multidisciplinary respiratory medicine.
Theory I: A Comprehensive Approach to Theory Through Ear Training, Music: 5636.18.
ERIC Educational Resources Information Center
Blum, Jesse
This Quinmester, 9-week course of study, is an aural approach to theory through sight singing and dictation techniques employing the moveable "Do" and number systems, interspersed with an outline approach to general music history. It is designed for students acquainted with staff notation and exposed to keyboard or to instrumental or…
Data sonification and sound visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.; Wiebel, E.
1999-07-01
Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.
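Additive synthesis of the kind Diass performs builds a sound by summing sinusoidal partials. A minimal sketch of the principle follows; the partial list, envelope, and function name are illustrative assumptions, not Diass's actual design:

```python
import numpy as np

def additive_tone(f0: float, partials, sr: int = 44100, dur: float = 1.0) -> np.ndarray:
    """Sum sinusoidal partials (harmonic_number, amplitude) over a fundamental f0."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    wave = sum(a * np.sin(2 * np.pi * f0 * h * t) for h, a in partials)
    env = np.minimum(1.0, 10.0 * (dur - t))  # simple linear fade-out over the last 0.1 s
    return (wave * env / max(1e-9, np.max(np.abs(wave)))).astype(np.float32)

# A 440 Hz tone with three harmonics of decreasing amplitude.
tone = additive_tone(440.0, partials=[(1, 1.0), (2, 0.5), (3, 0.25)])
```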
Barnes-Burroughs, Kathryn; Anderson, Edward E; Hughes, Thomas; Lan, William Y; Dent, Karl; Arnold, Sue; Dolter, Gerald; McNeil, Kathy
2007-11-01
The purpose of this investigation was to ascertain the pedagogical viability of computer-generated melodic contour mapping systems in the classical singing studio, as reflected in their effect (if any) on vocal timbre when a singer's head and neck remained in a normal singing posture. The evaluation of data gathered during the course of the study indicates that the development of consistent vocal timbre produced by the classical singing student may be enhanced through visual/kinesthetic response to melodic contour inversion mapping, as it balances the singer's perception of melodic intervals in standard musical notation. Unexpectedly, it was discovered that the system, in its natural melodic contour mode, may also be useful for teaching a student to sing a consistent legato line. The results of the study also suggest that the continued development of this new technology for the general teaching studio, designed to address standard musical notation and a singer's visual/kinesthetic response to it, may indeed be useful.
A Musical Approach to Speech Melody
Chow, Ivan; Brown, Steven
2018-01-01
We present here a musical approach to speech melody, one that takes advantage of the intervallic precision made possible with musical notation. Current phonetic and phonological approaches to speech melody either assign localized pitch targets that impoverish the acoustic details of the pitch contours and/or merely highlight a few salient points of pitch change, ignoring all the rest of the syllables. We present here an alternative model using musical notation, which has the advantage of representing the pitch of all syllables in a sentence as well as permitting a specification of the intervallic excursions among syllables and the potential for group averaging of pitch use across speakers. We tested the validity of this approach by recording native speakers of Canadian English reading unfamiliar test items aloud, spanning from single words to full sentences containing multiple intonational phrases. The fundamental-frequency trajectories of the recorded items were converted from hertz into semitones, averaged across speakers, and transcribed into musical scores of relative pitch. Doing so allowed us to quantify local and global pitch-changes associated with declarative, imperative, and interrogative sentences, and to explore the melodic dynamics of these sentence types. Our basic observation is that speech is atonal. The use of a musical score ultimately has the potential to combine speech rhythm and melody into a unified representation of speech prosody, an important analytical feature that is not found in any current linguistic approach to prosody.
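The hertz-to-semitone conversion mentioned above is the standard logarithmic mapping (12 semitones per octave). A minimal sketch follows, assuming the reference frequency is chosen per speaker; the abstract does not specify the authors' exact reference:

```python
import math

def hz_to_semitones(f_hz: float, f_ref: float) -> float:
    """Map a frequency in Hz to semitones relative to a reference frequency.

    One semitone is 1/12 of an octave, so the distance in semitones
    is 12 * log2(f / f_ref).
    """
    return 12.0 * math.log2(f_hz / f_ref)

# Example: a syllable at 220 Hz relative to a 196 Hz reference
# sits about two semitones above it.
print(round(hz_to_semitones(220.0, 196.0), 2))  # ~2.0
```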
Vos, P G; van Dijk, A; Schomaker, L
1994-01-01
A method of time-series analysis and a time-beating experiment were used to test the structural and perceptual validity of notated metre. Autocorrelation applied to the flow of melodic intervals between notes from thirty fragments of compositions for solo instruments by J S Bach strongly supported the validity of bar length specifications. Time-beating data, obtained with four stimuli from the same set, played in an expressionless mode, and presented under categorically distinct tempos to different subgroups of musically trained subjects, were rather inconsistent with respect to tapped bar lengths. However, taps were most frequently given to the events in the stimuli that corresponded with the first beats according to the score notations. No significant effects of tempo on tapping patterns were observed. The findings are discussed in comparison with other examinations of metre inference from musical compositions.
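The autocorrelation step described here, finding a periodicity in the melodic-interval series that matches the notated bar length, can be sketched as follows; the toy series and the peak-picking rule are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def best_period(series: np.ndarray, max_lag: int) -> int:
    """Return the lag (in events) with the strongest autocorrelation.

    A peak at lag k suggests the series repeats with period k,
    e.g. a bar length of k events in a melodic-interval series.
    """
    x = series - series.mean()
    denom = np.dot(x, x)
    # Normalized autocorrelation for lags 1..max_lag.
    ac = [np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)]
    return int(np.argmax(ac)) + 1

# Toy melodic-interval series with a period of 4 events plus noise.
rng = np.random.default_rng(0)
pattern = np.tile([2.0, -1.0, 2.0, -3.0], 16)
series = pattern + 0.1 * rng.standard_normal(pattern.size)
print(best_period(series, max_lag=8))  # expected: 4
```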
The psychoacoustics of musical articulation
NASA Astrophysics Data System (ADS)
Spiegelberg, Scott Charles
This dissertation develops psychoacoustical definitions of notated articulations, the necessary first step in articulation research. This research can be useful to theorists interested in timbre analysis, the psychology of performance, analysis and performance, the psychology of style differentiation, and performance pedagogy. An explanation of wavelet transforms precedes the development of new techniques for analyzing transient sounds. A history of timbre perception research reveals the inadequacies of current sound segmentation models, resulting in the creation of a new model, the Pitch/Amplitude/Centroid Trajectory (PACT) model of sound segmentation. The new analysis techniques and PACT model are used to analyze recordings of performers playing a melodic fragment in a series of notated articulations. Statistical tests showed that the performers generally agreed on the interpretation of five different articulation groups. A cognitive test of articulation similarity, using musicians and non-musicians as participants, revealed a close correlation between similarity judgments and physical attributes, though additional unknown factors are clearly present. A second psychological test explored the perceptual salience of articulation notation, by asking musically-trained participants to match stimuli to the same notations the performers used. The participants also marked verbal descriptors for each articulation, such as short/long, sharp/dull, loud/soft, harsh/gentle, and normal/extreme. These results were matched against the results of Chapters Five and Six, providing an overall interpretation of the psychoacoustics of articulation.
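The trajectories that the PACT model segments (pitch, amplitude, and spectral centroid over time) can be computed frame by frame. The following is a generic illustration of amplitude and centroid trajectories, not the dissertation's wavelet-based implementation:

```python
import numpy as np

def centroid_trajectory(x: np.ndarray, sr: int, frame: int = 1024, hop: int = 512):
    """Per-frame RMS amplitude and spectral centroid of a mono signal."""
    amps, cents = [], []
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * np.hanning(frame)
        mag = np.abs(np.fft.rfft(seg))
        amps.append(float(np.sqrt(np.mean(seg ** 2))))          # RMS amplitude
        cents.append(float(np.sum(freqs * mag) / max(np.sum(mag), 1e-12)))
    return np.array(amps), np.array(cents)

sr = 22050
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)        # steady sine: centroid stays near 440 Hz
amps, cents = centroid_trajectory(x, sr)
print(round(float(cents.mean())))      # ~440
```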
A Physicist's view on Chopin's Études
NASA Astrophysics Data System (ADS)
Blasone, Massimo
2017-07-01
We propose the use of specific dynamical processes, and more generally of ideas from Physics, to model the evolution in time of musical structures. We apply this approach to two Études by F. Chopin, namely Op.10 n.3 and Op.25 n.1, proposing an original description based on concepts of symmetry breaking/restoration and quantum coherence that could be useful for interpretation. In this analysis, we take advantage of colored musical scores, obtained by applying Scriabin's color code for sounds to musical notation.
DISCO: An object-oriented system for music composition and sound design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.; Wright, J. M.
2000-09-05
This paper describes an object-oriented approach to music composition and sound design. The approach unifies the processes of music making and instrument building by using similar logic, objects, and procedures. The composition modules use an abstract representation of musical data, which can be easily mapped onto different synthesis languages or a traditionally notated score. An abstract base class is used to derive classes on different time scales. Objects can be related to act across time scales, as well as across an entire piece, and relationships between similar objects can replicate traditional music operations or introduce new ones. The DISCO (Digital Instrument for Sonification and Composition) system is an open-ended work in progress.
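A minimal sketch of the architecture described above, an abstract base class from which objects on different time scales derive, might look like the following; all class and method names are hypothetical, not DISCO's actual API:

```python
# Hypothetical sketch of a temporal-object hierarchy in the spirit of
# the description above; names are invented, not DISCO's API.
from abc import ABC, abstractmethod

class TemporalObject(ABC):
    """Base class for musical objects on any time scale."""
    def __init__(self, start: float, duration: float):
        self.start, self.duration = start, duration

    @abstractmethod
    def render(self) -> list:
        """Map abstract musical data to concrete events."""

class Note(TemporalObject):
    def __init__(self, start, duration, pitch):
        super().__init__(start, duration)
        self.pitch = pitch

    def render(self):
        return [(self.start, self.duration, self.pitch)]

class Phrase(TemporalObject):
    """A larger time scale: a container that renders its children."""
    def __init__(self, start, children):
        super().__init__(start, sum(c.duration for c in children))
        self.children = children

    def render(self):
        return [e for c in self.children for e in c.render()]

phrase = Phrase(0.0, [Note(0.0, 1.0, "C4"), Note(1.0, 1.0, "E4")])
print(phrase.render())
```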
Auditory Imagery: Empirical Findings
ERIC Educational Resources Information Center
Hubbard, Timothy L.
2010-01-01
The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…
Chansirinukor, Wunpen; Khemthong, Supalak
2014-07-01
The aim was to compare psychomotor function between a group of music students who had received music education and a group of non-music students who participated in music training. Consecutive sampling was used for completing questionnaires, testing reaction times (visual, auditory, and tactile systems), measuring electromyography of the upper trapezius muscles on both sides, and photographing the craniovertebral (CV) angle in the sitting position. Data were collected twice for each student group: for the music students at one-hour intervals of resting and conducting non-music activities; for the non-music students at two-day intervals, 20 minutes/session, performing music training (with a manual of keyboard notation). The non-music students (n = 65) improved reaction times but responded more slowly than the music students except for the tactile system. The music students (n = 28) showed faster reaction times and higher activity of the trapezius muscle than the non-music students at post-test. In addition, the CV angle of the non-music students was significantly improved. The level of musical ability may influence psychomotor function. Significant improvement was observed in visual, auditory and tactile reaction time, and in CV angle, in the non-music students. However, upper trapezius muscle activity in both student groups was unchanged.
The Education, Training, and Development of Dance Educators in Higher Education.
ERIC Educational Resources Information Center
Hayes, Elizabeth R.
1980-01-01
Standards should be established for professional dance curricula in higher education. Courses in dance history, dance philosophy, dance notation, music for dance, kinesiology as applied to dance, and dance theater design and production should be taught by a core of experts. (CJ)
Music and words in the visual cortex: The impact of musical expertise.
Mongelli, Valeria; Dehaene, Stanislas; Vinckier, Fabien; Peretz, Isabelle; Bartolomeo, Paolo; Cohen, Laurent
2017-01-01
How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space.
Melodic sound enhances visual awareness of congruent musical notes, but only if you can read music.
Lee, Minyoung; Blake, Randolph; Kim, Sujin; Kim, Chai-Youn
2015-07-07
Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry.
Impaired recognition of scary music following unilateral temporal lobe excision.
Gosselin, Nathalie; Peretz, Isabelle; Noulhiane, Marion; Hasboun, Dominique; Beckett, Christine; Baulac, Michel; Samson, Séverine
2005-03-01
Music constitutes an ideal means to create a sense of suspense in films. However, there has been minimal investigation into the underlying cerebral organization for perceiving danger created by music. In comparison, the amygdala's role in recognition of fear in non-musical contexts has been well established. The present study sought to fill this gap in exploring how patients with amygdala resection recognize emotional expression in music. To this aim, we tested 16 patients with left (LTR; n = 8) or right (RTR; n = 8) medial temporal resection (including amygdala) for the relief of medically intractable seizures and 16 matched controls in an emotion recognition task involving instrumental music. The musical selections were purposely created to induce fear, peacefulness, happiness and sadness. Participants were asked to rate to what extent each musical passage expressed these four emotions on 10-point scales. In order to check for the presence of a perceptual problem, the same musical selections were presented to the participants in an error detection task. None of the patients was found to perform below controls in the perceptual task. In contrast, both LTR and RTR patients were found to be impaired in the recognition of scary music. Recognition of happy and sad music was normal. These findings suggest that the anteromedial temporal lobe (including the amygdala) plays a role in the recognition of danger in a musical context.
Impaired Emotion Recognition in Music in Parkinson's Disease
ERIC Educational Resources Information Center
van Tricht, Mirjam J.; Smeding, Harriet M. M.; Speelman, Johannes D.; Schmand, Ben A.
2010-01-01
Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction…
Emotional memory for musical excerpts in young and older adults
Alonso, Irene; Dellacherie, Delphine; Samson, Séverine
2015-01-01
The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in terms of arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted highly arousing excerpts to be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate for consolidation deficits in aging, thus showing a more prominent benefit of high over low arousing stimuli in older than younger adults on delayed recognition. We also hypothesized worse retention of negative excerpts for the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low arousing excerpts can be affected by valence, with better memory for positive relative to negative low arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion in memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implications of the present results for potential music interventions in aging and dementia. PMID:25814950
ERIC Educational Resources Information Center
Falter, H. Ellie
2011-01-01
How do teachers teach students to count rhythms? Teachers can choose from various techniques. Younger students may learn themed words (such as "pea," "carrot," or "avocado"), specific rhythm syllables (such as "ta" and "ti-ti"), or some other counting method to learn notation and internalize rhythms. As students grow musically, and especially when…
Using Baroque Techniques to Teach Improvisation in Your Classroom
ERIC Educational Resources Information Center
Yoo, Hyesoo
2015-01-01
Before our current notation system was widely adopted by musicians, improvisation was a key component of music throughout the Western world. One of the fundamental elements of the baroque style, namely the use of improvised embellishment, offered musicians great artistic liberty. During the baroque period, improvisation spread across Europe and beyond.…
John Curwen: Teaching the Tonic Sol-Fa Method 1816-1880.
ERIC Educational Resources Information Center
Zinar, Ruth
1983-01-01
John Curwen made many contributions to music education. He taught singing through the sound of tones before students learned notation, originated a widely used system of hand signals for the tones of the scale, and emphasized a feeling for the basic beat underlying the durations of tones. (CS)
Music Learning in Your School Computer Lab.
ERIC Educational Resources Information Center
Reese, Sam
1998-01-01
States that a growing number of schools are installing general computer labs equipped to use notation, accompaniment, and sequencing software independent of MIDI keyboards. Discusses (1) how to configure the software without MIDI keyboards or external sound modules, (2) using the actual MIDI software, (3) inexpensive enhancements, and (4) the…
Allgood, Rebecca; Heaton, Pamela
2015-09-01
Although the configurations of psychoacoustic cues signalling emotions in human vocalizations and instrumental music are very similar, cross-domain links in recognition performance have yet to be studied developmentally. Two hundred and twenty 5- to 10-year-old children were asked to identify musical excerpts and vocalizations as happy, sad, or fearful. The results revealed age-related increases in overall recognition performance with significant correlations across vocal and musical conditions at all developmental stages. Recognition scores were greater for musical than vocal stimuli and were superior in females compared with males. These results confirm that recognition of emotions in vocal and musical stimuli is linked by 5 years and that sensitivity to emotions in auditory stimuli is influenced by age and gender. © 2015 The British Psychological Society.
Linking melodic expectation to expressive performance timing and perceived musical tension.
Gingras, Bruno; Pearce, Marcus T; Goodchild, Meghan; Dean, Roger T; Wiggins, Geraint; McAdams, Stephen
2016-04-01
This research explored the relations between the predictability of musical structure, expressive timing in performance, and listeners' perceived musical tension. Studies analyzing the influence of expressive timing on listeners' affective responses have been constrained by the fact that, in most pieces, the notated durations limit performers' interpretive freedom. To circumvent this issue, we focused on the unmeasured prelude, a semi-improvisatory genre without notated durations. In Experiment 1, 12 professional harpsichordists recorded an unmeasured prelude on a harpsichord equipped with a MIDI console. Melodic expectation was assessed using a probabilistic model (IDyOM [Information Dynamics of Music]) whose expectations have been previously shown to match closely those of human listeners. Performance timing information was extracted from the MIDI data using a score-performance matching algorithm. Time-series analyses showed that, in a piece with unspecified note durations, the predictability of melodic structure measurably influenced tempo fluctuations in performance. In Experiment 2, another 10 harpsichordists, 20 nonharpsichordist musicians, and 20 nonmusicians listened to the recordings from Experiment 1 and rated the perceived tension continuously. Granger causality analyses were conducted to investigate predictive relations among melodic expectation, expressive timing, and perceived tension. Although melodic expectation, as modeled by IDyOM, modestly predicted perceived tension for all participant groups, neither of its components, information content or entropy, was Granger causal. In contrast, expressive timing was a strong predictor and was Granger causal. However, because melodic expectation was also predictive of expressive timing, our results outline a complete chain of influence from predictability of melodic structure via expressive performance timing to perceived musical tension. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
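The Granger-causality analyses described above can be made concrete with a small sketch. The snippet below is illustrative only: it builds two synthetic per-note series standing in for expressive timing and continuous tension ratings, then runs statsmodels' grangercausalitytests; the array names and the choice of maxlag are assumptions, not the study's actual pipeline.

```python
# A sketch of a Granger-causality check between two per-note series; the
# data here are synthetic, and maxlag=4 is an arbitrary illustrative choice.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
timing = rng.normal(size=200)                      # stand-in expressive-timing series
tension = np.roll(timing, 2) + rng.normal(scale=0.5, size=200)  # lags timing by 2 notes

# Column order matters: the test asks whether column 2 Granger-causes column 1.
data = np.column_stack([tension, timing])
for lag, (tests, _) in grangercausalitytests(data, maxlag=4, verbose=False).items():
    f_stat, p_value = tests["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F at some lag would indicate, as in the study's timing-to-tension result, that past values of the second series improve prediction of the first beyond its own history.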
Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob J; Driscoll, Virginia; Olszewski, Carol; Knutson, John F; Turner, Christopher; Gantz, Bruce
2012-01-01
Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to transmitting key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. The purpose of this study was to examine how accurately adults who use CIs (n = 87) and those with normal hearing (NH) (n = 17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. Participants were tested on melody recognition of complex melodies (pop, country, and classical styles). Results were analyzed as a function of hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and self-reported listening acuity and enjoyment. CI recipients were significantly less accurate than NH listeners on recognition of real-world music with and, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly on items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues, and age at time of testing was negatively correlated with recognition performance. These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, and listening for enjoyment).
IA-Regional-Radio - Social Network for Radio Recommendation
NASA Astrophysics Data System (ADS)
Dziczkowski, Grzegorz; Bougueroua, Lamine; Wegrzyn-Wolska, Katarzyna
This chapter describes the functions of a proposed system for music hit recommendation based on social network data. The system automatically collects, evaluates, and rates music reviews, lets listeners rate musical hits, and delivers recommendations deduced from listeners' profiles in the form of regional Internet radio. First, the system searches and retrieves probable music reviews from the Internet. Subsequently, it evaluates and rates those reviews. From the resulting list of music hits, the system also accepts ratings directly through our application. Finally, it automatically creates the playlist broadcast each day depending on the region, the season, the hour of the day, and the age of the listeners. Our system uses linguistic and statistical methods for classifying music opinions, and data mining techniques for the recommendation component needed to create the playlist. The principal task is the creation of a popular intelligent radio that adapts to listeners' age and region - IA-Regional-Radio.
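As a rough illustration of the pipeline stages named above (collect reviews, score them, aggregate per region into a daily playlist), here is a minimal sketch. Every name, data structure, and the keyword-based scorer is a placeholder of our own devising; the chapter does not publish its implementation, and the real system uses far richer linguistic and statistical classifiers.

```python
# Schematic sketch of the review-to-playlist pipeline; all names and the
# keyword scorer are placeholders, not the system's actual methods.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Review:
    track: str
    text: str
    region: str

def score_opinion(text: str) -> int:
    """Toy sentiment score; the real system uses linguistic/statistical classifiers."""
    positive = {"great", "love", "hit", "catchy"}
    negative = {"bad", "boring", "flop"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def daily_playlist(reviews, region: str, top_n: int = 10):
    """Aggregate opinion scores per track for one region and keep the top N."""
    totals = defaultdict(int)
    for r in reviews:
        if r.region == region:
            totals[r.track] += score_opinion(r.text)
    return [t for t, _ in sorted(totals.items(), key=lambda kv: -kv[1])[:top_n]]

reviews = [Review("Song A", "a great catchy hit", "north"),
           Review("Song B", "boring flop", "north")]
print(daily_playlist(reviews, "north"))   # -> ['Song A', 'Song B']
```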
Demonstration and Research Program for Teaching Young String Players. Final Report.
ERIC Educational Resources Information Center
Yarborough, William
This report explains a system for rapidly training beginning students in the technical aspects of playing a stringed instrument. The program also affords them a well-rounded, basic knowledge of music. A "numerical" method of notation and concentrated muscular exercises greatly speeded the technical learning process. The daily coordination of ear…
Comparison of emotion recognition from facial expression and music.
Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija
2011-01-01
The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro-expressions) and the recognition itself does not have to be a conscious process. We assumed that recognition of emotions from facial expressions is favored over recognition of emotions communicated through music. In order to compare the success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for because of the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive resources such as attention, memory and motivation. Music pieces are processed differently in the brain than facial expressions and are consequently probably evaluated differently as relevant emotional cues.
Apply lightweight recognition algorithms in optical music recognition
NASA Astrophysics Data System (ADS)
Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen-Khac, Tung-Anh; Tran, Minh-Triet
2015-02-01
The digitization of musical scores into machine-readable form is a problem worth solving: it helps people enjoy and learn music, conserves music sheets, and can even assist composers. However, the results of existing methods still require improvements for higher accuracy. The authors therefore propose lightweight algorithms for Optical Music Recognition to help people recognize and automatically play musical scores. In the proposal, after removing staff lines and extracting symbols, each music symbol is represented as a grid of identical M × N cells, and the features are extracted and classified with multiple lightweight SVM classifiers. Through experiments, the authors find that a grid of 10 × 12 cells yields the highest precision. Experimental results on a dataset of 4929 music symbols taken from 18 modern music sheets in the Synthetic Score Database show that the proposed method classifies printed musical scores with accuracy up to 99.56%.
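A minimal sketch of the grid-and-SVM classification stage described above, assuming staff removal and symbol segmentation have already produced isolated symbol images. The feature scheme (mean cell intensity on a 10 × 12 grid) follows the abstract; the specific libraries and the single rbf-kernel SVM are simplifications of the paper's multiple lightweight classifiers.

```python
# Grid features + SVM for isolated symbol images; staff removal and symbol
# segmentation are assumed to have been done upstream.
import numpy as np
from skimage.transform import resize
from sklearn.svm import SVC

def grid_features(symbol_img, rows=10, cols=12):
    """Downsample a symbol image to a rows x cols grid of mean intensities."""
    return resize(symbol_img.astype(float), (rows, cols), anti_aliasing=True).ravel()

def train_classifier(symbols, labels):
    """symbols: list of 2D arrays; labels: symbol class names (e.g. 'quarter-note')."""
    X = np.stack([grid_features(s) for s in symbols])
    clf = SVC(kernel="rbf")          # stand-in for the paper's multiple light SVMs
    return clf.fit(X, labels)
```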
Brain correlates of musical and facial emotion recognition: evidence from the dementias.
Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R
2012-07-01
The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.
Music Education Intervention Improves Vocal Emotion Recognition
ERIC Educational Resources Information Center
Mualem, Orit; Lavidor, Michal
2015-01-01
The current study is an interdisciplinary examination of the interplay among music, language, and emotions. It consisted of two experiments designed to investigate the relationship between musical abilities and vocal emotional recognition. In experiment 1 (N = 24), we compared the influence of two short-term intervention programs--music and…
A Study of the Relationship between the Perception of Musical Processes and the Enjoyment of Music.
ERIC Educational Resources Information Center
Duerksen, George L.
Student recognition of themes in music that were repeated or altered throughout 14 musical items was measured by use of an audiovisual testing device. Affective response to the themes was indicated using a seven-point scale of like-dislike. Associations between the measured recognition and such items as musical experience, academic aptitude, and…
Music as therapy in early history.
Thaut, Michael H
2015-01-01
The notion of music as therapy is based on ancient cross-cultural beliefs that music can have a "healing" effect on mind and body. Explanations for the therapeutic mechanisms in music have almost always included cultural and social science-based causalities about the uses and functions of music in society. However, it is also important to note that the view of music as "therapy" was also always strongly influenced by the view and understanding of the concepts and causes of disease. Magical/mystical concepts of illness and "rational" medicine probably lived side by side for thousands of years. Not until the late-nineteenth and early-twentieth centuries were the scientific foundations of medicine established, which allowed the foundations of music in therapy to progress from no science to soft science and most recently to actual brain science. Evidence for "early music therapy" will be discussed in four broad historical-cultural divisions: preliterate cultures; early civilizations in Mesopotamia, Egypt, Israel; Greek Antiquity; Middle Ages, Renaissance, and Baroque. In reviewing "early music therapy" practice, from mostly unknown periods of early history (using preliterate cultures as a window) to increasingly better documented times, including preserved notation samples of actual "healing" music, five theories and applications of early music therapy can be differentiated. © 2015 Elsevier B.V. All rights reserved.
Music and language: musical alexia and agraphia.
Brust, J C
1980-06-01
Two aphasic right-handed professional musicians with left hemispheric lesions had disturbed musical function, especially musical alexia and agraphia. In Case 1 aphasia was of transcortical sensory type, with severe agraphia and decreased comprehension of written words, although she could match them with pictures. Except for reading and writing, musical ability was normal; she could sing in five languages. Musical alexia and agraphia affected pitch symbols more than rhythm. Case 2 had conduction aphasia and severe expressive amusia, especially for rhythm. Although his language alexia and agraphia were milder than Case 1's, his musical alexia and agraphia were more severe, affecting rhythm as much as pitch. In neither patient were those aspects of musical notation either closest to verbal language or most dependent upon temporal (sequential) processing maximally impaired. These cases are consistent with the literature in suggesting that the presence or absence of aphasia or of right or left hemispheric damage fails to predict the presence, type, or severity of amusia, including musical alexia and agraphia. The popular notion that receptive amusia follows lesions of the language-dominant temporal lobe, whereas expressive amusia follows non-dominant frontal lobe damage, is an over-simplification, as is the view that increasing musical sophistication causes a shift of musical processing from the right hemisphere to the left.
Omar, Rohani; Henley, Susie M.D.; Bartlett, Jonathan W.; Hailstone, Julia C.; Gordon, Elizabeth; Sauter, Disa A.; Frost, Chris; Scott, Sophie K.; Warren, Jason D.
2011-01-01
Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions. PMID:21385617
Omar, Rohani; Henley, Susie M D; Bartlett, Jonathan W; Hailstone, Julia C; Gordon, Elizabeth; Sauter, Disa A; Frost, Chris; Scott, Sophie K; Warren, Jason D
2011-06-01
Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions. Copyright © 2011 Elsevier Inc. All rights reserved.
[Influence of music different in volume and style on human recognition activity].
Pavlygina, R A; Sakharov, D S; Davydov, V I; Avdonkin, A V
2009-01-01
The efficiency of recognition of masked visual images (Arabic numerals) increased under conditions of listening to classical music (62 dB) or rock music (25 dB). The coherence of potentials in the frontal cortical regions that is characteristic of masked-image recognition increased under conditions of listening to music. The changes in intercenter EEG relations were correlated with the formation of "the recognition dominant" at the behavioral level. Such behavioral and EEG changes were not observed during listening to louder music (85 dB) or to music of other styles; however, the coherence between potentials of the temporal and motor areas of the right hemisphere increased, and the latency of hand motor reactions decreased. The results suggest that the "recognition dominant" is formed under conditions of establishment of certain relations between the levels of excitation in the corresponding centers. These findings should be taken into consideration when it is necessary to increase the efficiency of recognition.
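The coherence measure referred to in this study (and in its second report below) can be illustrated with a short sketch: magnitude-squared coherence between two channels, computed with SciPy. The sampling rate, duration, and frequency band are illustrative assumptions, not values from the study.

```python
# Magnitude-squared coherence between two synthetic "EEG channels" that share
# a common driving signal; fs, duration, and the band are illustrative.
import numpy as np
from scipy.signal import coherence

fs = 250                                     # Hz, assumed sampling rate
rng = np.random.default_rng(2)
shared = rng.normal(size=10 * fs)            # common activity
ch1 = shared + 0.5 * rng.normal(size=10 * fs)
ch2 = shared + 0.5 * rng.normal(size=10 * fs)

f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=fs)   # Welch-averaged coherence
band = (f >= 8) & (f <= 13)                       # e.g., the alpha band
print(f"mean alpha-band coherence: {Cxy[band].mean():.2f}")
```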
PATRON: Using a Multimedia Digital Library for Learning and Teaching in the Performing Arts.
ERIC Educational Resources Information Center
Lyon, Elizabeth
The creation and application of a multimedia digital library to support learning and teaching in the performing arts is described. PATRON (Performing Arts Teaching Resources Online) delivers audio, video, music scores, dance notation, and theater scripts to the desktop via an innovative Web-based interface. Digital objects are linked subjectively…
Influence of music with different volumes and styles on recognition activity in humans.
Pavlygina, R A; Sakharov, D S; Davydov, V I; Avdonkin, A V
2010-10-01
The efficiency of the recognition of masked visual images (Arabic numerals) increased when accompanied by classical (62 dB) and rock music (25 dB). These changes were accompanied by increases in the coherence of potentials in the frontal areas seen on recognition without music. Changes in intercenter EEG relationships correlated with the formation of a dominant at the behavioral level. When loud music (85 dB) and music of other styles were used, these changes in behavior and the EEG were not seen; however, the coherence of potentials in the temporal and motor cortex of the right hemisphere increased and the latent periods of motor reactions of the hands decreased. These results provide evidence that the "recognition" dominant is formed when there are particular ratios of the levels of excitation in the corresponding centers, which should be considered when there is a need to increase the efficiency of recognition activity in humans.
Music Recognition in Frontotemporal Lobar Degeneration and Alzheimer Disease
Johnson, Julene K; Chang, Chiung-Chih; Brambati, Simona M; Migliaccio, Raffaella; Gorno-Tempini, Maria Luisa; Miller, Bruce L; Janata, Petr
2013-01-01
Objective To compare music recognition in patients with frontotemporal dementia, semantic dementia, Alzheimer disease, and controls and to evaluate the relationship between music recognition and brain volume. Background Recognition of familiar music depends on several levels of processing. There are few studies about how patients with dementia recognize familiar music. Methods Subjects were administered tasks that assess pitch and melody discrimination, detection of pitch errors in familiar melodies, and naming of familiar melodies. Results There were no group differences on pitch and melody discrimination tasks. However, patients with semantic dementia had considerable difficulty naming familiar melodies and also scored the lowest when asked to identify pitch errors in the same melodies. Naming familiar melodies, but not other music tasks, was strongly related to measures of semantic memory. Voxel-based morphometry analysis of brain MRI showed that difficulty in naming songs was associated with atrophy in the bilateral temporal lobes and inferior frontal gyrus, whereas difficulty in identifying pitch errors in familiar melodies correlated primarily with the right temporal lobe. Conclusions The results support a view that the anterior temporal lobes play a role in familiar melody recognition, and that musical functions are affected differentially across forms of dementia. PMID:21617528
Multivariate predictors of music perception and appraisal by adult cochlear implant users.
Gfeller, Kate; Oleson, Jacob; Knutson, John F; Breheny, Patrick; Driscoll, Virginia; Olszewski, Carol
2008-02-01
The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the utility of speech perception in predicting musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music.
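The variable-selection approach described above can be sketched in a few lines. The snippet shows a generic AIC-guided forward selection with ordinary least squares via statsmodels; it stands in for, and greatly simplifies, the multiple-regression and generalized-linear-mixed-model procedures the study actually used, and all variable names and data are hypothetical.

```python
# Greedy forward selection of predictors by AIC; a simplification of the
# study's regression/GLMM procedures, with entirely synthetic data.
import numpy as np
import statsmodels.api as sm

def forward_select(X, y, names, max_vars=4):
    """Repeatedly add the predictor that most lowers AIC; stop when none helps."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < max_vars:
        best = min(remaining,
                   key=lambda j: sm.OLS(y, sm.add_constant(X[:, chosen + [j]])).fit().aic)
        if chosen:
            current = sm.OLS(y, sm.add_constant(X[:, chosen])).fit().aic
            candidate = sm.OLS(y, sm.add_constant(X[:, chosen + [best]])).fit().aic
            if candidate >= current:
                break                      # no AIC improvement; stop
        chosen.append(best)
        remaining.remove(best)
    return [names[j] for j in chosen]

# Toy demo: listening hours and residual hearing truly predict the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(209, 3))              # hypothetical predictor columns
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=209)
print(forward_select(X, y, ["listening_hours", "age", "residual_hearing"]))
```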
Cheng, Xiaoting; Liu, Yangwenyi; Shu, Yilai; Tao, Duo-Duo; Wang, Bing; Yuan, Yasheng; Galvin, John J; Fu, Qian-Jie; Chen, Bing
2018-01-01
Due to limited spectral resolution, cochlear implants (CIs) do not convey pitch information very well. Pitch cues are important for perception of music and tonal language; it is possible that music training may improve performance in both listening tasks. In this study, we investigated music training outcomes in terms of perception of music, lexical tones, and sentences in 22 young (4.8 to 9.3 years old), prelingually deaf Mandarin-speaking CI users. Music perception was measured using a melodic contour identification (MCI) task. Speech perception was measured for lexical tones and sentences presented in quiet. Subjects received 8 weeks of MCI training using pitch ranges not used for testing. Music and speech perception were measured at 2, 4, and 8 weeks after training was begun; follow-up measures were made 4 weeks after training was stopped. Mean baseline performance was 33.2%, 76.9%, and 45.8% correct for MCI, lexical tone recognition, and sentence recognition, respectively. After 8 weeks of MCI training, mean performance significantly improved by 22.9, 14.4, and 14.5 percentage points for MCI, lexical tone recognition, and sentence recognition, respectively (p < .05 in all cases). Four weeks after training was stopped, there was no significant change in posttraining music and speech performance. The results suggest that music training can significantly improve pediatric Mandarin-speaking CI users' music and speech perception.
Lima, César F; Garrett, Carolina; Castro, São Luís
2013-01-01
Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected differently musical and prosodic emotions. This dissociation indicates that the mechanisms underlying the two domains are partly independent.
Measure for measure: curriculum requirements and children's achievement in music education.
Bond, Trevor; Bond, Marie
2010-01-01
Children in all public primary schools in Queensland, Australia have weekly music lessons designed to develop key musical concepts such as reading, writing, singing and playing simple music notation. Their understanding of basic musical concepts is developed through a blend of kinaesthetic, visual and auditory experiences. In keeping with the pedagogical principles outlined by the Hungarian composer, Zoltan Kodaly, early musical experiences are based in singing well-known children's chants - usually restricted to notes of the pentatonic scale. In order to determine the extent to which primary school children's musical understandings developed in response to these carefully structured developmental learning experiences, the Queensland Primary Music Curriculum was examined to yield a set of over 70 indicators of musical understanding in the areas of rhythm, melody and part-work, the essential skills for choral singing. Data were collected from more than 400 children's attempts at elicited musical performances. Quantitative data analysis procedures derived from the Rasch model for measurement were used to establish the sequence of children's mastery of key musical concepts. Results suggested that while the music curriculum did reflect the general development of musical concepts, the grade allocation for a few concepts needed to be revised. Subsequently, children's performances over several years were also analysed to track the musical achievements of students over time. The empirical evidence confirmed that children's musical development was enhanced by school learning and that indicators can be used to identify both outstanding and atypical development of musical understanding. It was concluded that modest adjustments to the music curriculum might enhance children's learning opportunities in music.
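For readers unfamiliar with the measurement model mentioned above: the dichotomous Rasch model gives the probability that a child of ability θ masters an indicator of difficulty b as P = exp(θ − b) / (1 + exp(θ − b)), and ordering indicators by their estimated difficulties yields the mastery sequence the study reports. The sketch below is a minimal illustration with made-up numbers, not the study's estimation procedure.

```python
# Dichotomous Rasch model: P(mastery) given child ability theta and
# indicator difficulty b. Numbers below are illustrative, not study estimates.
import math

def rasch_p(theta: float, b: float) -> float:
    """P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p(theta=0.0, b=-1.5))   # easy indicator: ~0.82 chance of mastery
print(rasch_p(theta=0.0, b=1.5))    # hard indicator: ~0.18 chance of mastery
```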
Sikka, Ritu; Cuddy, Lola L.; Johnsrude, Ingrid S.; Vanstone, Ashley D.
2015-01-01
Several studies of semantic memory in non-musical domains involving recognition of items from long-term memory have shown an age-related shift from the medial temporal lobe structures to the frontal lobe. However, the effects of aging on musical semantic memory remain unexamined. We compared activation associated with recognition of familiar melodies in younger and older adults. Recognition follows successful retrieval from the musical lexicon that comprises a lifetime of learned musical phrases. We used the sparse-sampling technique in fMRI to determine the neural correlates of melody recognition by comparing activation when listening to familiar vs. unfamiliar melodies, and to identify age differences. Recognition-related cortical activation was detected in the right superior temporal, bilateral inferior and superior frontal, left middle orbitofrontal, bilateral precentral, and left supramarginal gyri. Region-of-interest analysis showed greater activation for younger adults in the left superior temporal gyrus and for older adults in the left superior frontal, left angular, and bilateral superior parietal regions. Our study provides powerful evidence for these musical memory networks due to a large sample (N = 40) that includes older adults. This study is the first to investigate the neural basis of melody recognition in older adults and to compare the findings to younger adults. PMID:26500480
Midorikawa, Akira
2007-08-01
This report reviewed recent cases of amusia and drew the following conclusions. First, amusia is an ill-defined condition. The classical definition restricted amusia to musical disorders caused by brain lesions; by the end of the last century, however, some researchers included developmental or innate musical disorders under amusia. Second, although recent case reports were based on the classical schema of amusia, there have been an increasing number of case studies describing more restricted and specific symptoms, such as receptive amusia for harmony or musical alexia for rhythm notation. Third, although we can now obtain more accurate information about brain lesions, we have not taken advantage of this information. Traditionally, the pitch element of vocal performance has been attributed to the right frontal or temporal lobe. Lastly, the relationship between musical function and degenerative disease deserves attention. Degenerative diseases can cause a musical deficit or, paradoxically, improve musical function; for example, the musical competence of some patients improved after selective atrophy of the left hemisphere. In conclusion, recent ideas concerning the relationship between music and the brain have been derived from patients with brain damage, developmental disorders, and degenerative diseases. However, there is a missing link with respect to amusia: we know a lot about the cognitive aspects of music, but the 'true' function of music from an evolutionary perspective, something that is lacking in amusia, is not known.
Kang, Robert; Nimmons, Grace Liu; Drennan, Ward; Longnion, Jeff; Ruffin, Chad; Nie, Kaibao; Won, Jong Ho; Worman, Tina; Yueh, Bevan; Rubinstein, Jay
2009-08-01
Assessment of cochlear implant outcomes centers around speech discrimination. Despite dramatic improvements in speech perception, music perception remains a challenge for most cochlear implant users. No standardized test exists to quantify music perception in a clinically practical manner. This study presents the University of Washington Clinical Assessment of Music Perception (CAMP) test as a reliable and valid music perception test for English-speaking, adult cochlear implant users. Forty-two cochlear implant subjects were recruited from the University of Washington Medical Center cochlear implant program and referred by two implant manufacturers. Ten normal-hearing volunteers were drawn from the University of Washington Medical Center and associated campuses. A computer-driven, self-administered test was developed to examine three specific aspects of music perception: pitch direction discrimination, melody recognition, and timbre recognition. The pitch subtest used an adaptive procedure to determine just-noticeable differences for complex tone pitch direction discrimination within the range of 1 to 12 semitones. The melody and timbre subtests assessed recognition of 12 commonly known melodies played with complex tones in an isochronous manner and eight musical instruments playing an identical five-note sequence, respectively. Testing was repeated for cochlear implant subjects to evaluate test-retest reliability. Normal-hearing volunteers were also tested to demonstrate differences in performance in the two populations. For cochlear implant subjects, pitch direction discrimination just-noticeable differences ranged from 1 to 8.0 semitones (Mean = 3.0, SD = 2.3). Melody and timbre recognition ranged from 0 to 94.4% correct (mean = 25.1, SD = 22.2) and 20.8 to 87.5% (mean = 45.3, SD = 16.2), respectively. Each subtest significantly correlated at least moderately with both Consonant-Nucleus-Consonant (CNC) word recognition scores and spondee recognition thresholds in steady state noise and two-talker babble. Intraclass coefficients demonstrating test-retest correlations for pitch, melody, and timbre were 0.85, 0.92, and 0.69, respectively. Normal-hearing volunteers had a mean pitch direction discrimination threshold of 1.0 semitone, the smallest interval tested, and mean melody and timbre recognition scores of 87.5 and 94.2%, respectively. The CAMP test discriminates a wide range of music perceptual ability in cochlear implant users. Moderate correlations were seen between music test results and both Consonant-Nucleus-Consonant word recognition scores and spondee recognition thresholds in background noise. Test-retest reliability was moderate to strong. The CAMP test provides a reliable and valid metric for a clinically practical, standardized evaluation of music perception in adult cochlear implant users.
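The abstract says only that the pitch subtest used "an adaptive procedure" over a 1 to 12 semitone range. As an illustration of how such a procedure converges on a just-noticeable difference, here is a generic 2-down/1-up staircase (which converges near the 70.7%-correct point) run against a simulated listener; the rule, step size, and psychometric function are all assumptions, not the CAMP test's actual parameters.

```python
# Generic 2-down/1-up staircase for a pitch-direction task; the listener
# model, step size, and stopping rule are illustrative assumptions.
import random

def listener_correct(interval_semitones, jnd=3.0):
    """Toy listener: probability correct grows with interval size (2AFC floor at 50%)."""
    p = 1.0 / (1.0 + 2.0 ** (-(interval_semitones - jnd)))
    return random.random() < 0.5 + 0.5 * p

def staircase(start=8.0, step=1.0, floor=1.0, ceiling=12.0, reversals_wanted=8):
    interval, streak, direction, reversals = start, 0, -1, []
    while len(reversals) < reversals_wanted:
        if listener_correct(interval):
            streak += 1
            if streak < 2:
                continue                    # need 2 correct in a row to go harder
            streak, new_dir = 0, -1
        else:
            streak, new_dir = 0, +1         # any error makes the task easier
        if new_dir != direction:
            reversals.append(interval)      # track turnaround points
        direction = new_dir
        interval = min(ceiling, max(floor, interval + new_dir * step))
    return sum(reversals[2:]) / len(reversals[2:])  # mean of later reversals

random.seed(1)
print(f"estimated JND ~ {staircase():.1f} semitones")
```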
Slevc, L Robert; Rosenberg, Jason C; Patel, Aniruddh D
2009-04-01
Linguistic processing, especially syntactic processing, is often considered a hallmark of human cognition; thus, the domain specificity or domain generality of syntactic processing has attracted considerable debate. The present experiments address this issue by simultaneously manipulating syntactic processing demands in language and music. Participants performed self-paced reading of garden path sentences, in which structurally unexpected words cause temporary syntactic processing difficulty. A musical chord accompanied each sentence segment, with the resulting sequence forming a coherent chord progression. When structurally unexpected words were paired with harmonically unexpected chords, participants showed substantially enhanced garden path effects. No such interaction was observed when the critical words violated semantic expectancy or when the critical chords violated timbral expectancy. These results support a prediction of the shared syntactic integration resource hypothesis (Patel, 2003), which suggests that music and language draw on a common pool of limited processing resources for integrating incoming elements into syntactic structures. Notations of the stimuli from this study may be downloaded from pbr.psychonomic-journals.org/content/supplemental.
Music Making as a Tool for Promoting Brain Plasticity across the Life Span
Wan, Catherine Y.; Schlaug, Gottfried
2010-01-01
Playing a musical instrument is an intense, multisensory, and motor experience that usually commences at an early age and requires the acquisition and maintenance of a range of skills over the course of a musician's lifetime. Thus, musicians offer an excellent human model for studying the brain effects of acquiring specialized sensorimotor skills. For example, musicians learn and repeatedly practice the association of motor actions with specific sound and visual patterns (musical notation) while receiving continuous multisensory feedback. This association learning can strengthen connections between auditory and motor regions (e.g., arcuate fasciculus) while activating multimodal integration regions (e.g., around the intraparietal sulcus). We argue that training of this neural network may produce cross-modal effects on other behavioral or cognitive operations that draw on this network. Plasticity in this network may explain some of the sensorimotor and cognitive enhancements that have been associated with music training. These enhancements suggest the potential for music making as an interactive treatment or intervention for neurological and developmental disorders, as well as those associated with normal aging. PMID:20889966
Musicians and music making as a model for the study of brain plasticity
Schlaug, Gottfried
2015-01-01
Playing a musical instrument is an intense, multisensory, and motor experience that usually commences at an early age and requires the acquisition and maintenance of a range of sensory and motor skills over the course of a musician's lifetime. Thus, musicians offer an excellent human model for studying behavioral-cognitive as well as brain effects of acquiring, practicing, and maintaining these specialized skills. Research has shown that repeatedly practicing the association of motor actions with specific sound and visual patterns (musical notation), while receiving continuous multisensory feedback, will strengthen connections between auditory and motor regions (e.g., arcuate fasciculus) as well as multimodal integration regions. Plasticity in this network may explain some of the sensorimotor and cognitive enhancements that have been associated with music training. Furthermore, the plasticity of this system as a result of long-term and intense interventions suggests the potential for music-making activities (e.g., forms of singing) as an intervention for neurological and developmental disorders to learn and relearn associations between auditory and motor functions, such as vocal motor functions. PMID:25725909
Music as a memory enhancer in patients with Alzheimer's disease.
Simmons-Stern, Nicholas R; Budson, Andrew E; Ally, Brandon A
2010-08-01
Musical mnemonics have a long and diverse history of popular use. In addition, music processing in general is often considered spared by the neurodegenerative effects of Alzheimer's disease (AD). Research examining these two phenomena is limited, and no work to our knowledge has explored the effectiveness of musical mnemonics in AD. The present study sought to investigate the effect of music at encoding on the subsequent recognition of associated verbal information. Lyrics of unfamiliar children's songs were presented bimodally at encoding, and visual stimuli were accompanied by either a sung or a spoken recording. Patients with AD demonstrated better recognition accuracy for the sung lyrics than the spoken lyrics, while healthy older adults showed no significant difference between the two conditions. We propose two possible explanations for these findings: first, that the brain areas subserving music processing may be preferentially spared by AD, allowing a more holistic encoding that facilitates recognition, and second, that music heightens arousal in patients with AD, allowing better attention and improved memory. Published by Elsevier Ltd.
Clinical evaluation of music perception, appraisal and experience in cochlear implant users.
Drennan, Ward R; Oleson, Jacob J; Gfeller, Kate; Crosson, Jillian; Driscoll, Virginia D; Won, Jong Ho; Anderson, Elizabeth S; Rubinstein, Jay T
2015-02-01
The objectives were to evaluate the relationships among music perception, appraisal, and experience in cochlear implant users in multiple clinical settings, and to examine the viability of two assessments designed for clinical use. Background questionnaires (IMBQ) were administered by audiologists in 14 clinics in the United States and Canada; the CAMP included tests of pitch-direction discrimination and melody and timbre recognition. The IMBQ queried users on prior musical involvement, music listening habits pre- and post-implant, and music appraisals. Participants were 145 users of Advanced Bionics and Cochlear Ltd cochlear implants. Performance on pitch-direction discrimination, melody recognition, and timbre recognition tests was consistent with previous studies with smaller cohorts, as well as with more extensive protocols conducted in other centers. Relationships between perceptual accuracy and music enjoyment were weak, suggesting that perception and appraisal are relatively independent for CI users. Perceptual abilities as measured by the CAMP had little to no relationship with music appraisals and little relationship with musical experience. The CAMP and IMBQ are feasible for routine clinical use, providing results consistent with previous thorough laboratory-based investigations.
Listeners remember music they like.
Stalinski, Stephanie M; Schellenberg, E Glenn
2013-05-01
Emotions have important and powerful effects on cognitive processes. Although it is well established that memory influences liking, we sought to document whether liking influences memory. A series of 6 experiments examined whether liking is related to recognition memory for novel music excerpts. In the general method, participants listened to a set of music excerpts and rated how much they liked each one. After a delay, they heard the same excerpts plus an equal number of novel excerpts and made recognition judgments, which were then examined in conjunction with liking ratings. Higher liking ratings were associated with improved recognition performance after a 10-min (Experiment 1) or 24-hr (Experiment 2) delay between the exposure and test phases. The findings were similar when participants made liking ratings after recognition judgments (Experiments 3 and 6), when possible confounding effects of similarity and familiarity were held constant (Experiment 4), and when a deeper level of processing was encouraged for all the excerpts (Experiment 5). Recognition did not vary as a function of liking for previously unheard excerpts (Experiment 6). The results implicate a direct association between liking and recognition. Considered jointly with previous findings, it is now clear that listeners tend to like music that they remember and to remember music that they like.
Recognition of facial and musical emotions in Parkinson's disease.
Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N
2013-03-01
Patients with amygdala lesions were found to be impaired in recognizing the emotion of fear both from faces and from music. In patients with Parkinson's disease (PD), impairment in the recognition of emotions from facial expressions has been reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment scales for anxiety and depression. Results showed that the PD group was significantly impaired in the recognition of both fear and sadness from facial expressions, whereas their performance in recognizing emotions from musical excerpts did not differ from that of the control group. The scores for fear and sadness recognition from faces were correlated neither with scores on tests of executive and cognitive functions nor with scores on the self-assessment scales. We attribute the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.
Musical rhythm spectra from Bach to Joplin obey a 1/f power law.
Levitin, Daniel J; Chordia, Parag; Menon, Vinod
2012-03-06
Much of our enjoyment of music comes from its balance of predictability and surprise. Musical pitch fluctuations follow a 1/f power law that precisely achieves this balance. Musical rhythms, especially those of Western classical music, are considered highly regular and predictable, and this predictability has been hypothesized to underlie rhythm's contribution to our enjoyment of music. Are musical rhythms indeed entirely predictable and how do they vary with genre and composer? To answer this question, we analyzed the rhythm spectra of 1,788 movements from 558 compositions of Western classical music. We found that an overwhelming majority of rhythms obeyed a 1/f^β power law across 16 subgenres and 40 composers, with β ranging from ∼0.5 to 1. Notably, classical composers, whose compositions are known to exhibit nearly identical 1/f pitch spectra, demonstrated distinctive 1/f rhythm spectra: Beethoven's rhythms were among the most predictable, and Mozart's among the least. Our finding of the ubiquity of 1/f rhythm spectra in compositions spanning nearly four centuries demonstrates that, as with musical pitch, musical rhythms also exhibit a balance of predictability and surprise that could contribute in a fundamental way to our aesthetic experience of music. Although music compositions are intended to be performed, the fact that the notated rhythms follow a 1/f spectrum indicates that such structure is no mere artifact of performance or perception, but rather, exists within the written composition before the music is performed. Furthermore, composers systematically manipulate (consciously or otherwise) the predictability in 1/f rhythms to give their compositions unique identities.
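The central quantity here is the spectral exponent β, obtained by fitting a line to the power spectrum on log-log axes (power ∝ 1/f^β implies log P = -β log f + c). Purely as an illustration of that fit, and not the authors' analysis pipeline (the function name and toy signal below are invented), a minimal Python sketch:

    import numpy as np

    def spectral_exponent(series, fs=1.0):
        """Estimate beta for a 1/f**beta spectrum via a log-log linear fit."""
        power = np.abs(np.fft.rfft(series - np.mean(series))) ** 2
        freqs = np.fft.rfftfreq(len(series), d=1.0 / fs)
        # Skip the DC bin, then fit log(power) against log(frequency).
        slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
        return -slope  # power ~ f**(-beta), so beta is the negated slope

    # Sanity check: an integrated random walk gives beta near 2, white noise near 0.
    rng = np.random.default_rng(0)
    print(spectral_exponent(np.cumsum(rng.standard_normal(4096))))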
Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.
Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina
2017-01-01
Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 event-related potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 components were more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.
Speech-recognition interfaces for music information retrieval
NASA Astrophysics Data System (ADS)
Goto, Masataka
2005-09-01
This paper describes two hands-free music information retrieval (MIR) systems that enable a user to retrieve and play back a musical piece by saying its title or the artist's name. Although various interfaces for MIR have been proposed, speech-recognition interfaces suitable for retrieving musical pieces have not been studied. Our MIR-based jukebox systems employ two different speech-recognition interfaces for MIR, speech completion and speech spotter, which exploit intentionally controlled nonverbal speech information in original ways. The first is a music retrieval system with the speech-completion interface that is suitable for music stores and car-driving situations. When a user only remembers part of the name of a musical piece or an artist and utters only a remembered fragment, the system helps the user recall and enter the name by completing the fragment. The second is a background-music playback system with the speech-spotter interface that can enrich human-human conversation. When a user is talking to another person, the system allows the user to enter voice commands for music playback control by spotting a special voice-command utterance in face-to-face or telephone conversations. Experimental results from use of these systems have demonstrated the effectiveness of the speech-completion and speech-spotter interfaces. (Video clips: http://staff.aist.go.jp/m.goto/MIR/speech-if.html)
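The speech-completion step can be thought of as fragment-to-title matching against the music database. Purely as a toy illustration of that lookup (the actual system operates on recognized speech with intentionally controlled nonverbal cues; every name below is invented), a Python sketch:

    def complete_fragment(fragment, titles):
        """Return candidate titles containing the uttered fragment."""
        needle = fragment.lower()
        return [t for t in titles if needle in t.lower()]

    print(complete_fragment("moon", ["Moonlight Sonata", "Blue Moon", "Take Five"]))
    # ['Moonlight Sonata', 'Blue Moon']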
Khatchatourov, Armen; Pachet, François; Rowe, Victoria
2016-01-01
The generation of musical material in a given style has been the subject of many studies with the increased sophistication of artificial intelligence models of musical style. In this paper we address a question of primary importance for artificial intelligence and music psychology: can such systems generate music that users indeed consider as corresponding to their own style? We address this question through an experiment involving both performance and recognition tasks with musically naïve school-age children. We asked 56 children to perform a free-form improvisation, from which two kinds of music excerpt were created. One was a mere recording of the original performances. The other was created by a software program designed to simulate the participants' style, based on their original performances. Two hours after the performance task, the children completed the recognition task in two conditions, one with the original excerpts and one with machine-generated music. Results indicate that the success rate was practically equivalent in the two conditions: children tended to make correct attributions of the excerpts to themselves or to others whether the music was human-produced or machine-generated (mean accuracy = 0.75 and 0.71, respectively). We discuss this equivalence in accuracy for machine-generated and human-produced music in the light of the literature on memory effects and action identity, which addresses the recognition of one's own production.
Audio-based deep music emotion recognition
NASA Astrophysics Data System (ADS)
Liu, Tong; Han, Li; Ma, Liangkai; Guo, Dongwei
2018-05-01
With the rapid development of multimedia networking, more and more songs are issued through the Internet and stored in large digital music libraries. However, music information retrieval over these libraries can be difficult, and the recognition of musical emotion is especially challenging. In this paper, we report a strategy to recognize the emotion contained in songs by classifying their spectrograms, which contain both time and frequency information, with a convolutional neural network (CNN). Experiments conducted on the 1000-song dataset indicate that the proposed model outperforms traditional machine learning methods.
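The pipeline described is spectrogram in, emotion class out. As a hedged sketch of that idea only (the layer sizes, the four-class output, and all preprocessing choices below are assumptions, not the authors' architecture), in Python with PyTorch:

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.signal import spectrogram

    class EmotionCNN(nn.Module):
        """Tiny CNN classifying a (1, 128, 128) log spectrogram into 4 classes."""
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            )
            self.classifier = nn.Linear(32 * 8 * 8, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Toy usage: one second of audio -> log spectrogram patch -> class logits.
    audio = np.random.randn(22050)
    _, _, sxx = spectrogram(audio, fs=22050, nperseg=256)
    patch = np.log(sxx + 1e-9)[:128, :128]
    patch = np.pad(patch, ((0, 128 - patch.shape[0]), (0, 128 - patch.shape[1])))
    logits = EmotionCNN()(torch.tensor(patch, dtype=torch.float32)[None, None])
    print(logits.shape)  # torch.Size([1, 4])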
Accuracy of cochlear implant recipients in speech reception in the presence of background music.
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-12-01
This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.
NASA Astrophysics Data System (ADS)
Pérez Rosas, Osvaldo G.; Rivera Martínez, José L.; Maldonado Cano, Luis A.; López Rodríguez, Mario; Amaya Reyes, Laura M.; Cano Martínez, Elizabeth; García Vázquez, Mireya S.; Ramírez Acosta, Alejandro A.
2017-09-01
The automatic identification and classification of musical genres based on the sound similarities that form musical textures is a very active research area. In this context, musical genre recognition systems have been built from time-frequency feature extraction methods combined with classification methods, and the choice of these methods is important for a well-performing recognition system. In this article we propose the Mel-Frequency Cepstral Coefficients (MFCC) method as the feature extractor and Support Vector Machines (SVM) as the classifier for our system. The MFCC parameters established in the system through our time-frequency analysis represent the range of musical genres of Mexican culture considered in this article. For a musical genre classification system to be precise, the descriptors must represent the correct spectrum of each genre; achieving this requires a correct parametrization of the MFCC, such as the one we present here. The developed system yields satisfactory detection results: the lowest identification percentage among the musical genres was 66.67% and the highest was 100%.
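The described pipeline (MFCC features feeding an SVM) is straightforward to prototype. A minimal sketch assuming librosa and scikit-learn; the summary statistics, kernel choice, and path names are illustrative assumptions, since the article's exact MFCC parametrization is not reproduced here:

    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def mfcc_features(path, n_mfcc=13):
        """Summarize a track as the mean and std of its MFCC coefficients."""
        y, sr = librosa.load(path, mono=True)
        m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([m.mean(axis=1), m.std(axis=1)])

    # Hypothetical usage with labeled training tracks:
    # X = np.stack([mfcc_features(p) for p in train_paths])
    # clf = SVC(kernel="rbf").fit(X, train_genres)
    # print(clf.predict([mfcc_features("unknown_track.wav")]))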
Music to my ears: Age-related decline in musical and facial emotion recognition.
Sutcliffe, Ryan; Rendell, Peter G; Henry, Julie D; Bailey, Phoebe E; Ruffman, Ted
2017-12-01
We investigated young-old differences in emotion recognition using music and face stimuli and tested explanatory hypotheses regarding older adults' typically worse emotion recognition. In Experiment 1, young and older adults labeled emotions in an established set of faces, and in classical piano stimuli that we pilot-tested on other young and older adults. Older adults were worse at detecting anger, sadness, fear, and happiness in music. Performance on the music and face emotion tasks was not correlated for either age group. Because musical expressions of fear were not equated for age groups in the pilot study of Experiment 1, we conducted a second experiment in which we created a novel set of music stimuli that included more accessible musical styles, and which we again pilot-tested on young and older adults. In this pilot study, all musical emotions were identified similarly by young and older adults. In Experiment 2, participants also made age estimations in another set of faces to examine whether potential relations between the face and music emotion tasks would be shared with the age estimation task. Older adults did worse in each of the tasks, and had specific difficulty recognizing happy, sad, peaceful, angry, and fearful music clips. Older adults' difficulties in each of the three tasks (music emotion, face emotion, and face age) were not correlated with each other. General cognitive decline did not appear to explain our results, as increasing age predicted emotion performance even after fluid IQ was controlled for within the older adult group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Mathematical formula recognition using graph grammar
NASA Astrophysics Data System (ADS)
Lavirotte, Stephane; Pottier, Loic
1998-04-01
This paper describes current results of Ofr, a system for extracting and understanding mathematical expressions in documents. Such a tool could be very useful for reusing the knowledge in scientific books that are not available in electronic form. We are also currently studying the use of this system for direct input of formulas with a graphics tablet into computer algebra software. Existing solutions for mathematical recognition have problems analyzing 2D expressions like vectors and matrices. This is because they often try to use extended classical grammars to analyze formulas relative to the baseline; many mathematical notations do not respect the rules required for such parsing, which is why these extensions of text-parsing techniques fail. We investigate graph grammars and graph rewriting as a solution for recognizing 2D mathematical notation. Graph grammars provide a powerful formalism for describing structural manipulations of multi-dimensional data. The two main problems to solve are ambiguities between grammar rules and the construction of the graph.
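To make the graph-rewriting idea concrete: symbols become attributed nodes, and a grammar rule matches a spatial configuration of nodes and replaces it with a single structured node. A toy Python sketch (not the Ofr grammar; the rule and its thresholds are invented) fusing a raised right neighbor into a superscript node:

    # Toy graph: node id -> (label, x, y); spatial relations computed on the fly.
    symbols = {1: ("x", 0.0, 0.0), 2: ("2", 0.6, 0.8)}

    def superscript_rule(symbols):
        """Apply one rewrite: fuse a raised right neighbor into a power node."""
        for a, (la, xa, ya) in list(symbols.items()):
            for b, (lb, xb, yb) in list(symbols.items()):
                if a != b and 0 < xb - xa < 1.0 and yb - ya > 0.5:
                    symbols[a] = (f"{la}^{lb}", xa, ya)  # merged node replaces the pair
                    del symbols[b]
                    return True
        return False

    while superscript_rule(symbols):
        pass
    print(symbols)  # {1: ('x^2', 0.0, 0.0)}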
Moore, Kimberly Sena; Peterson, David A; O'Shea, Geoffrey; McIntosh, Gerald C; Thaut, Michael H
2008-01-01
Research shows that people with multiple sclerosis exhibit learning and memory difficulties and that music can be used successfully as a mnemonic device to aid learning and memory. However, there is currently no research investigating the effectiveness of music mnemonics as a compensatory learning strategy for people with multiple sclerosis. Participants with clinically definite multiple sclerosis (N = 38) were given a verbal learning and memory test. Results from a recognition memory task were analyzed that compared learning through music (n = 20) versus learning through speech (n = 18). Preliminary baseline neuropsychological data were collected that measured executive functioning skills, learning and memory abilities, sustained attention, and level of disability. An independent samples t test showed no significant difference between groups on baseline neuropsychological functioning or on recognition task measures. Correlation analyses suggest that music mnemonics may facilitate learning for people who are less impaired by the disease. Implications for future research are discussed.
Signoret, J L; van Eeckhout, P; Poncet, M; Castaigne, P
1987-01-01
A 77-year-old right-handed male had been blind since the age of 2. He presented with an infarction in the territory of the left middle cerebral artery involving the temporal and inferior parietal lobes. He had learned to read and write language, as well as to read and write music, in braille, ultimately becoming a famous organist and composer. There were no motor or sensory deficits. Wernicke's aphasia with jargonaphasia, major difficulty in repetition, anomia and a significant comprehension deficit without word deafness was present; verbal alexia and agraphia in braille were also present. There was no evidence of amusia. He could execute in an exemplary fashion pieces of music for the organ in his repertory, as well as improvise. All his musical capabilities (transposition, modulation, harmony, rhythm) were preserved. His command of musical notation in braille remained intact: he could read by touch and play unfamiliar scores, he could read and sing the musical notes, and he could copy and write a score. Nine months after the stroke his aphasia remained unchanged; nevertheless, he composed pieces for the organ which were published. These data strongly suggest the independence of linguistic and musical competences, the latter defined as the analysis and organization of sounds according to the rules of music. This independence in an extremely talented musician leads to a discussion of the role of the right hemisphere in the anatomical-functional processes at the origin of musical competence. The use of braille, in which the same constellations of dots correspond either to letters of the alphabet or to musical notes, supports the independence between language and music.
Neural Correlates of Music Recognition in Down Syndrome
ERIC Educational Resources Information Center
Virji-Babul, N.; Moiseev, A.; Sun, W.; Feng, T.; Moiseeva, N.; Watt, K. J.; Huotilainen, M.
2013-01-01
The brain mechanisms that subserve music recognition remain unclear despite increasing interest in this process. Here we report the results of a magnetoencephalography experiment to determine the temporal dynamics and spatial distribution of brain regions activated during listening to a familiar and unfamiliar instrumental melody in control adults…
EDP Applications to Musical Bibliography: Input Considerations
ERIC Educational Resources Information Center
Robbins, Donald C.
1972-01-01
The application of Electronic Data Processing (EDP) has been a boon in the analysis and bibliographic control of music. However, an extra step of encoding must be undertaken for input of music. The best hope to facilitate musical input is the development of an Optical Character Recognition (OCR) music-reading machine. (29 references) (Author/NH)
Music-based memory enhancement in Alzheimer's disease: promise and limitations.
Simmons-Stern, Nicholas R; Deason, Rebecca G; Brandler, Brian J; Frustace, Bruno S; O'Connor, Maureen K; Ally, Brandon A; Budson, Andrew E
2012-12-01
In a previous study (Simmons-Stern, Budson & Ally, 2010), we found that patients with Alzheimer's disease (AD) better recognized visually presented lyrics when the lyrics were also sung rather than spoken at encoding. The present study sought to further investigate the effects of music on memory in patients with AD by making the content of the song lyrics relevant for the daily life of an older adult and by examining how musical encoding alters several different aspects of episodic memory. Patients with AD and healthy older adults studied visually presented novel song lyrics related to instrumental activities of daily living (IADL) that were accompanied by either a sung or a spoken recording. Overall, participants performed better on a memory test of general lyric content for lyrics that were studied sung as compared to spoken. However, on a memory test of specific lyric content, participants performed equally well for sung and spoken lyrics. We interpret these results in terms of a dual-process model of recognition memory such that the general content questions represent a familiarity-based representation that is preferentially sensitive to enhancement via music, while the specific content questions represent a recollection-based representation unaided by musical encoding. Additionally, in a test of basic recognition memory for the audio stimuli, patients with AD demonstrated equal discrimination for sung and spoken stimuli. We propose that the perceptual distinctiveness of musical stimuli enhanced metamemorial awareness in AD patients via a non-selective distinctiveness heuristic, thereby reducing false recognition while at the same time reducing true recognition and eliminating the mnemonic benefit of music. These results are discussed in the context of potential music-based memory enhancement interventions for the care of patients with AD. Published by Elsevier Ltd.
Harris, Robert; de Jong, Bauke M
2015-10-22
Using fMRI, cerebral activations were studied in 24 classically trained keyboard performers and 12 musically unskilled control subjects. Two groups of musicians were recruited: improvising (n=12) and score-dependent (non-improvising) musicians (n=12). While listening to both familiar and unfamiliar music, subjects either (covertly) appraised the presented music performance or imagined they were playing the music themselves. We hypothesized that improvising musicians would exhibit enhanced efficiency of audiomotor transformation, reflected by stronger ventral premotor activation. Statistical Parametric Mapping revealed that, while virtually 'playing along' with the music, improvising musicians exhibited activation of a right-hemisphere distribution of cerebral areas including posterior-superior parietal and dorsal premotor cortex. Involvement of these right-hemisphere dorsal stream areas suggests that improvising musicians recruited an amodal spatial processing system subserving pitch-to-space transformations to facilitate their virtual motor performance. Score-dependent musicians recruited a primarily left-hemisphere pattern of motor areas together with the posterior part of the right superior temporal sulcus, suggesting a relationship between aural discrimination and symbolic representation. Activations in bilateral auditory cortex were significantly larger for improvising musicians than for score-dependent musicians, suggesting enhanced top-down effects on aural perception. Our results suggest that learning to play a musical instrument primarily from notation predisposes musicians toward aural identification and discrimination, while learning by improvisation involves audio-spatial-motor transformations, not only during performance but also during perception. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Cuddy, Lola L; Duffin, Jacalyn
2005-01-01
Despite intriguing and suggestive clinical observations, no formal research has assessed the possible sparing of musical recognition and memory in Alzheimer's dementia (AD). A case study is presented of an 84-year-old woman with severe cognitive impairment implicating AD, but for whom music recognition and memory, according to her caregivers, appeared to be spared. The hypotheses addressed were, first, that memory for familiar music may be spared in dementia, and second, that musical recognition and memory may be reliably assessed with existing tests if behavioral observation is employed to overcome the problem of verbal or written communication. Our hypotheses were stimulated by the patient EN, for whom a diagnosis of AD became probable in 2000. With severe problems in memory, language, and cognition, she now has a mini-mental status score of 8 (out of 30) and is unable to understand or recall standard instructions. In order to assess her music recognition abilities, three tests from the previous literature were adapted for behavioral observation. Two tests involved the discrimination of familiar melodies from unfamiliar melodies. The third involved the detection of distortions ("wrong" notes) in familiar melodies and discrimination of distorted melodies from melodies correctly reproduced. Test melodies were presented to EN on a CD player and her responses were observed by two test administrators. EN responded to familiar melodies by singing along, usually with the words, and often continuing to sing after the stimulus had stopped. She never responded to the unfamiliar melodies. She responded to distorted melodies with facial expressions - surprise, laughter, a frown, or an exclamation, "Oh, dear!"; she never responded in this way to the undistorted melodies. Allowing these responses as indicators of detection, the results for EN were in the normal or near normal range of scores for elderly controls. As well, lyrics to familiar melodies, spoken in a conversational voice without rhythmic or pitch clues, often prompted EN to sing the tune that correctly accompanied the lyrics. EN's results provide encouraging support for our hypotheses that sparing of musical memory may be a feature of some forms of dementia and that it may be reliably and quantitatively assessed through behavioral observation. The contrast between EN's response to music and her mini-mental status is dramatic. The article concludes with several considerations why music may be preserved in dementia and suggestions to guide future research.
Phonological Processing in Adults with Deficits in Musical Pitch Recognition
ERIC Educational Resources Information Center
Jones, Jennifer L.; Lucker, Jay; Zalewski, Christopher; Brewer, Carmen; Drayna, Dennis
2009-01-01
We identified individuals with deficits in musical pitch recognition by screening a large random population using the Distorted Tunes Test (DTT), and enrolled individuals who had DTT scores in the lowest 10th percentile, classified as tune deaf. We examined phonological processing abilities in 35 tune deaf and 34 normal control individuals. Eight…
Classical Music as Popular Music: Adolescents' Recognition of Western Art Music
ERIC Educational Resources Information Center
VanWeelden, Kimberly
2012-01-01
The purpose of this study was to determine which "popular" classical repertoire is familiar and predictable to adolescents. Specifically, the study sought to examine (1) if students had heard the music before, (2) where they had heard the music before, and (3) if they could "name that tune". Participants (N = 668) for this…
Gebauer, Line; Skewes, Joshua; Westphael, Gitte; Heaton, Pamela; Vuust, Peter
2014-01-01
Music is a potent source for eliciting emotions, but not everybody experiences emotions in the same way. Individuals with autism spectrum disorder (ASD) show difficulties with social and emotional cognition. Impairments in emotion recognition are widely studied in ASD and have been associated with atypical brain activation in response to emotional expressions in faces and speech. Whether these impairments and atypical brain responses generalize to other domains, such as emotional processing of music, is less clear. Using functional magnetic resonance imaging, we investigated neural correlates of emotion recognition in music in high-functioning adults with ASD and neurotypical adults. Both groups engaged similar neural networks during processing of emotional music, and individuals with ASD rated emotional music comparably to the group of neurotypical individuals. However, in the ASD group, increased activity in response to happy compared to sad music was observed in dorsolateral prefrontal regions and in the rolandic operculum/insula, and we propose that this reflects increased cognitive processing and physiological arousal in response to emotional musical stimuli in this group.
Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle
2017-01-01
The aims were to assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities and to identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental tasks (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in the moderate-severe and complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower GCS scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from the auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving comprehension of emotional recognition following TBI. Future studies of larger samples could usefully include measures of the functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotional recognition following a brain injury.
Max Roach's Adventures in Higher Music Education.
ERIC Educational Resources Information Center
Hentoff, Nat
1980-01-01
Max Roach and the author discuss Roach's efforts to gain recognition of the complexity and importance of American musical forms, particularly jazz, by American university music departments. In addition, Roach describes his approach to marketing his music, an approach which avoids the economic exploitation often suffered by American jazz musicians.…
Musical Knowledge, Musical Identity, and the Generalist Teacher: Vicki's Story.
ERIC Educational Resources Information Center
Russell, Joan
1996-01-01
Utilizes excerpts from an undergraduate elementary education student's journal to examine generalist teachers' attitudes towards musical competency and the necessary qualifications for teachers. Traces one teacher's recognition of her own musicality and the corresponding influence on her feelings of competency to teach this subject in an…
Music recognition by Japanese children with cochlear implants.
Nakata, Takayuki; Trehub, Sandra E; Mitani, Chisato; Kanda, Yukihiko; Shibasaki, Atsuko; Schellenberg, E Glenn
2005-01-01
Congenitally deaf Japanese children with cochlear implants were tested on their recognition of theme songs from television programs that they watched regularly. The children, who were 4-9 years of age, attempted to identify each song from a closed set of alternatives. Their song identification ability was examined in the context of the original commercial recordings (vocal plus instrumental), the original versions without the words (i.e., karaoke versions), and flute versions of the melody. The children succeeded in identifying the music only from the original versions, and their performance was related to their music listening habits. Children gave favorable appraisals of the music even when they were unable to recognize it. Further research is needed to find means of enhancing cochlear implants users' perception and appreciation of music.
Contribution of hearing aids to music perception by cochlear implant users.
Peterson, Nathaniel; Bergeson, Tonya R
2015-09-01
Modern cochlear implant (CI) encoding strategies represent the temporal envelope of sounds well but provide limited spectral information. This deficit in spectral information has been implicated as a contributing factor to difficulty with speech perception in noisy conditions, discrimination between talkers, and melody recognition. One way to supplement spectral information for CI users is to fit a hearing aid (HA) to the non-implanted ear. In this study, 14 postlingually deaf adults (half with a unilateral CI and the other half with a CI and an HA (CI + HA)) were tested on measures of music perception and familiar melody recognition. CI + HA listeners performed significantly better than CI-only listeners on all pitch-based music perception tasks. The CI + HA group did not perform significantly better than the CI-only group on the two tasks that relied on duration cues. Recognition of familiar melodies was significantly enhanced for the group wearing an HA in addition to their CI. This advantage in melody recognition increased when melodic sequences were presented with the addition of harmony. These results show that, for CI recipients with aidable hearing in the non-implanted ear, using an HA in addition to their implant improves perception of musical pitch and recognition of real-world melodies.
[The role of temporal fine structure in tone recognition and music perception].
Zhou, Q; Gu, X; Liu, B
2017-11-07
A sound signal can be decomposed into temporal envelope and temporal fine structure information. The temporal envelope information is crucial for speech perception in quiet environments, and the temporal fine structure information plays an important role in speech perception in noise, Mandarin tone recognition and music perception, especially pitch and melody perception.
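The standard way to make this decomposition concrete is the Hilbert transform: the magnitude of the analytic signal gives the envelope, and the cosine of its instantaneous phase gives the fine structure. A minimal sketch (the amplitude-modulated tone is an illustrative example, not a stimulus from the cited work):

    import numpy as np
    from scipy.signal import hilbert

    def envelope_and_tfs(x):
        """Split a signal into its temporal envelope and temporal fine structure."""
        analytic = hilbert(x)
        envelope = np.abs(analytic)                   # slow amplitude modulation
        fine_structure = np.cos(np.angle(analytic))   # rapid carrier variation
        return envelope, fine_structure

    # Toy usage: an amplitude-modulated tone separates cleanly.
    t = np.linspace(0, 1, 16000, endpoint=False)
    x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
    env, tfs = envelope_and_tfs(x)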
Music practice is associated with development of working memory during childhood and adolescence.
Bergman Nutley, Sissela; Darki, Fahimeh; Klingberg, Torkel
2014-01-07
Practicing a musical instrument is associated with cognitive benefits and structural brain changes in correlational and interventional trials; however, the effect of musical training on cognition during childhood is still unclear. In this longitudinal study of child development we analyzed the association between musical practice and performance on reasoning, processing speed and working memory (WM) during development. Subjects (n = 352) between the ages of 6 and 25 years participated in neuropsychological assessments and neuroimaging investigations (n = 64) on two or three occasions, 2 years apart. Mixed model regression showed that musical practice had an overall positive association with WM capacity (visuo-spatial WM: F = 4.59, p = 0.033; verbal WM: F = 9.69, p = 0.002), processing speed (F = 4.91, p = 0.027) and reasoning (Raven's progressive matrices: F = 28.34, p < 0.001) across all three time points, after correcting for the effect of parental education and other after-school activities. Music players also had larger gray matter volume in the temporo-occipital and insular cortex (p = 0.008), areas previously reported to be related to musical notation reading. The change in WM between the time points was proportional to the weekly hours spent on music practice for both WM tests (visuo-spatial WM: β = 0.351, p = 0.003; verbal WM: β = 0.261, p = 0.006), but this was not significant for reasoning ability (β = 0.021, p = 0.090). These effects remained when controlling for parental education and other after-school activities. In conclusion, these results indicate that music practice positively affects WM development and support the importance of practice for the development of WM during childhood and adolescence.
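For readers unfamiliar with the analysis, a mixed model regression of this kind (repeated measures nested within subjects, with a random intercept per subject) can be sketched as follows; the toy data, effect sizes, and column names are fabricated for illustration and bear no relation to the study's data:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: three waves of WM scores per subject.
    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(100), 3),
        "practice_hours": np.repeat(rng.uniform(0, 10, 100), 3),
        "wave": np.tile([0, 1, 2], 100),
    })
    df["wm_score"] = (50 + 2.0 * df.practice_hours + 3.0 * df.wave
                      + rng.normal(0, 5, len(df)))

    # Random intercept per subject; fixed effects for practice and wave.
    model = smf.mixedlm("wm_score ~ practice_hours + wave", df,
                        groups=df["subject"]).fit()
    print(model.summary())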
Vieillard, Sandrine; Gilet, Anne-Laure
2013-01-01
There is mounting evidence that aging is associated with the maintenance of positive affect and the decrease of negative affect to ensure emotion regulation goals. Previous empirical studies have primarily focused on a visual or autobiographical form of emotion communication. To date, little investigation has been done on musical emotions. The few studies that have addressed aging and emotions in music were mainly interested in emotion recognition, thus leaving unexplored the question of how aging may influence emotional responses to and memory for emotions conveyed by music. In the present study, eighteen older (60–84 years) and eighteen younger (19–24 years) listeners were asked to evaluate the strength of their experienced emotion on happy, peaceful, sad, and scary musical excerpts (Vieillard et al., 2008) while facial muscle activity was recorded. Participants then performed an incidental recognition task followed by a task in which they judged to what extent they experienced happiness, peacefulness, sadness, and fear when listening to music. Compared to younger adults, older adults (a) reported a stronger emotional reactivity for happiness than other emotion categories, (b) showed an increased zygomatic activity for scary stimuli, (c) were more likely to falsely recognize happy music, and (d) showed a decrease in their responsiveness to sad and scary music. These results are in line with previous findings and extend them to emotion experience and memory recognition, corroborating the view of age-related changes in emotional responses to music in a positive direction away from negativity. PMID:24137141
Falcón-González, Juan C; Borkoski-Barreiro, Silvia; Limiñana-Cañal, José María; Ramos-Macías, Angel
2014-01-01
Music is a universal, cross-cultural phenomenon, yet perception and enjoyment of music are still not achieved with the current technology of cochlear implants. The objective of this article was to advance the development and validation of a programming method for cochlear implants that implements a frequency allocation strategy. We compared standard programming with frequency programming in every subject. We studied a total of 40 patients with cochlear implants. Each patient was programmed with an optimal version of the standard program, using the Custom Sound Suite 3.2 cochlear platform. Speech tests in quiet were performed using syllable word lists from the protocol for the assessment of hearing in the Spanish language. Patients implanted bilaterally were tested in both ears at the same time. To assess music listening habits, we used the Munich Music Questionnaire and the «MACarena» (minimum auditory capability) software. All patients achieved better results on the recognition, instrument and tonal scale tests with frequency programming (P<.005). Likewise, frequency programming gave better results on the harmonic recognition and pitch tests (P<.005). Frequency programming achieves better perception and recognition results than standard programming. Patients with bilateral stimulation have better perception of musical patterns and better performance in recognizing tonal scales, harmonics and musical instruments than patients with unilateral stimulation. Modifying the frequency allocation during programming allows decreased current intensity levels and an increased dynamic range, which allows each audio band to be mapped less obtrusively and improves the quality of representation of the signal. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
Improved Techniques for Automatic Chord Recognition from Music Audio Signals
ERIC Educational Resources Information Center
Cho, Taemin
2014-01-01
This thesis is concerned with the development of techniques that facilitate the effective implementation of capable automatic chord transcription from music audio signals. Since chord transcriptions can capture many important aspects of music, they are useful for a wide variety of music applications and also useful for people who learn and perform…
Masking effects of speech and music: does the masker's hierarchical structure matter?
Shi, Lu-Feng; Law, Yvonne
2010-04-01
Speech and music are time-varying signals organized by parallel hierarchical rules. Through a series of four experiments, this study compared the masking effects of single-talker speech and instrumental music on speech perception while manipulating the complexity of hierarchical and temporal structures of the maskers. Listeners' word recognition was found to be similar between hierarchically intact and disrupted speech or classical music maskers (Experiment 1). When sentences served as the signal, significantly greater masking effects were observed with disrupted than intact speech or classical music maskers (Experiment 2), although not with jazz or serial music maskers, which differed from the classical music masker in their hierarchical structures (Experiment 3). Removing the classical music masker's temporal dynamics or partially restoring it affected listeners' sentence recognition; yet, differences in performance between intact and disrupted maskers remained robust (Experiment 4). Hence, the effect of structural expectancy was largely present across maskers when comparing them before and after their hierarchical structure was purposefully disrupted. This effect seemed to lend support to the auditory stream segregation theory.
Hutter, E; Grapp, M; Argstatter, H
2016-12-01
People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study, which included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI), and musical tests for pitch discrimination, melody recognition and timbre identification were applied. As a control, 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects, CI users showed deficits in the perception of pitch, melody and timbre. Specific effects of therapy were observed in the subjective sound quality of the CI, in pitch discrimination in the high and low pitch ranges and in timbre identification, while general learning effects were found in melody recognition. Music perception is deficient in CI users compared with normally hearing persons; after individual music therapy during rehabilitation, improvements in this delicate area could be achieved.
Persistence, resistance, resonance
NASA Astrophysics Data System (ADS)
Tsadka, Maayan
Sound cannot travel in a vacuum, physically or socially. The ways in which sound operates are a result of acoustic properties, and the ways by which it is considered to be music are a result of social constructions. Therefore, music is always political, regardless of its content: the way it is performed and composed; the choice of instrumentation, notation, tuning; the medium of its distribution; its inherent hierarchy and power dynamics, and more. My compositional praxis makes me less interested in defining a relationship between music and politics than I am in erasing---or at least blurring---the borders between them. In this paper I discuss the aesthetics of resonance and echo in their metaphorical, physical, social, and musical manifestations. Also discussed is a political aesthetic of resonance, manifested through protest chants. I transcribe and analyze common protest chants from around the world, categorizing and unifying them as universal crowd-mobilizing rhythms. These ideas are explored musically in three pieces. Sumud: Rhetoric of Resistance in Three Movements, for two pianos and two percussion players, is a musical interpretation of the political/social concept of sumud, an Arabic word that literally means "steadfastness" and represents Palestinian non-violent resistance. The piece is based on common protest rhythms and uses the acoustic properties inherent to the instruments. The second piece, Three Piano Studies, extends some of the musical ideas and techniques used in Sumud, and explores the acoustic properties and resonance of the piano. The final set of pieces is part of my Critical Mess Music Project. These are site-specific musical works that attempt to blur the boundaries between audience, performers and composer, in part by including people without traditional musical training in the process of music making. These pieces use the natural structure and resonance of an environment, in this case, locations on the UCSC campus, and offer an active form of musical consumption and experience. The three pieces draw lines connecting different aspects of persistence, resistance, and resonance.
Cheng, Xiaoting; Liu, Yangwenyi; Wang, Bing; Yuan, Yasheng; Galvin, John J; Fu, Qian-Jie; Shu, Yilai; Chen, Bing
2018-01-01
The aim of this study was to investigate the benefits of residual hair cell function for speech and music perception in bimodal pediatric Mandarin-speaking cochlear implant (CI) listeners. Speech and music performance was measured in 35 Mandarin-speaking pediatric CI users for unilateral (CI-only) and bimodal listening. Mandarin speech perception was measured for vowels, consonants, lexical tones, and sentences in quiet. Music perception was measured for melodic contour identification (MCI). Combined electric and acoustic hearing significantly improved MCI and Mandarin tone recognition performance, relative to CI-only performance. For MCI, performance was significantly better with bimodal listening for all semitone spacing conditions (p < 0.05 in all cases). For tone recognition, bimodal performance was significantly better only for tone 2 (rising; p < 0.05). There were no significant differences between CI-only and CI + HA for vowel, consonant, or sentence recognition. The results suggest that combined electric and acoustic hearing can significantly improve perception of music and Mandarin tones in pediatric Mandarin-speaking CI patients. Music and lexical tone perception depend strongly on pitch perception, and the contralateral acoustic hearing coming from residual hair cell function provided pitch cues that are generally not well preserved in electric hearing.
Improved perception of music with a harmonic based algorithm for cochlear implants.
Li, Xing; Nie, Kaibao; Imennov, Nikita S; Rubinstein, Jay T; Atlas, Les E
2013-07-01
The lack of fine structure information in conventional cochlear implant (CI) encoding strategies presumably contributes to the generally poor music perception with CIs. To improve CI users' music perception, a harmonic-single-sideband-encoder (HSSE) strategy was developed, which explicitly tracks the harmonics of a single musical source and transforms them into modulators conveying both amplitude and temporal fine structure cues to electrodes. To investigate its effectiveness, vocoder simulations of HSSE and the conventional continuous-interleaved-sampling (CIS) strategy were implemented. Using these vocoders, five normal-hearing subjects' melody and timbre recognition performance were evaluated: a significant benefit of HSSE to both melody (p < 0.002) and timbre (p < 0.026) recognition was found. Additionally, HSSE was acutely tested in eight CI subjects. On timbre recognition, a significant advantage of HSSE over the subjects' clinical strategy was demonstrated: the largest improvement was 35% and the mean 17% (p < 0.013). On melody recognition, two subjects showed 20% improvement with HSSE; however, the mean improvement of 7% across subjects was not significant (p > 0.090). To quantify the temporal cues delivered to the auditory nerve, the neural spike patterns evoked by HSSE and CIS for one melody stimulus were simulated using an auditory nerve model. Quantitative analysis demonstrated that HSSE can convey temporal pitch cues better than CIS. The results suggest that HSSE is a promising strategy to enhance music perception with CIs.
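For contrast with HSSE, the CIS baseline it is compared against can be sketched compactly: bandpass analysis, envelope extraction, and envelope-modulated carriers, with the fine structure discarded. A minimal Python sketch of a CIS-style vocoder (band edges, filter order, and the sine carriers are illustrative assumptions, not the clinical implementation and not HSSE itself):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def cis_vocoder(x, fs, edges=(300, 700, 1500, 3000, 6000)):
        """Minimal CIS-style vocoder: per-band envelopes modulate sine carriers."""
        out = np.zeros_like(x, dtype=float)
        t = np.arange(len(x)) / fs
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            env = np.abs(hilbert(sosfiltfilt(sos, x)))  # envelope only, no fine structure
            out += env * np.sin(2 * np.pi * np.sqrt(lo * hi) * t)  # band-center carrier
        return out

    fs = 16000
    x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
    y = cis_vocoder(x, fs)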
Macoir, Joël; Berubé-Lalancette, Sarah; Wilson, Maximiliano A; Laforce, Robert; Hudon, Carol; Gravel, Pierre; Potvin, Olivier; Duchesne, Simon; Monetta, Laura
2016-12-01
Music can induce particular emotions and activate semantic knowledge. In the semantic variant of primary progressive aphasia (svPPA), semantic memory is impaired as a result of anterior temporal lobe (ATL) atrophy. Semantic memory is responsible for the encoding and retrieval of factual knowledge about music, including associative and emotional attributes. In the present study, we report the performance of two individuals with svPPA in three experiments: NG, with bilateral ATL atrophy, and ND, with atrophy largely restricted to the left ATL. Experiment 1 assessed the recognition of musical excerpts, and both patients were unimpaired. Experiment 2 studied the emotions conveyed by music, and only NG showed impaired performance. Experiment 3 tested the association of semantic concepts with musical excerpts, and both patients were impaired. These results suggest that the right ATL seems essential for the recognition of emotions conveyed by music and that the left ATL is involved in binding music to semantics. They are in line with the notion that the ATLs are devoted to the binding of different modality-specific properties and suggest that they are also differentially involved in the processing of factual and emotional knowledge associated with music.
Development of a written music-recognition system using Java and open source technologies
NASA Astrophysics Data System (ADS)
Loibner, Gernot; Schwarzl, Andreas; Kovač, Matthias; Paulus, Dietmar; Pölzleitner, Wolfgang
2005-10-01
We report on the development of a software system to recognize and interpret printed music. The overall goal is to scan printed music sheets; analyze and recognize the notes, timing, and written text; and derive all the information necessary to use the computer's MIDI sound system to play the music. This function is primarily useful for musicians who want to digitize printed music for editing purposes. A number of commercial systems offer such functionality. However, on testing these systems, we were surprised by how weak their pattern recognition components are. Although we submitted very clean and nearly flawless scanned input, none of these systems was able to, for example, recognize all notes, staff lines, and systems. They all require a high degree of interaction, post-processing, and editing to produce a decent digital version of the hard-copy material. In this paper we focus on the pattern recognition area. In a first approach, we tested more or less standard methods of adaptive thresholding, blob detection, line detection, and corner detection to find the notes, staff lines, and candidate objects subject to OCR. Many of the objects in this type of material can be learned in a training phase. None of the commercial systems we examined offers the option to train special characters or unusual signatures. A second goal of this project was to use a modern software engineering platform: we were interested in how well Java and open source technologies are suited to pattern recognition and machine vision. The scanning of music served as a case study.
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been a rapidly growing field. Yet how to achieve acceptable accuracy in a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features based on the differential power asymmetry of symmetric electrode pairs [1], with emphasis on its robustness to subject variability in the music-induced emotion classification problem. The results of this study validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotions induced by music excerpts beyond the music database that was used to derive the emotion-specific features.
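For concreteness, the differential power asymmetry feature referenced above can be sketched as the difference in log band power between mirrored electrode pairs. The electrode pair names, frequency bands, and use of Welch's method below are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch, under assumed bands/pairs, of differential power asymmetry features:
# log band power at a left-hemisphere electrode minus that of its mirror.
import numpy as np
from scipy.signal import welch

def band_power(x, sr, lo, hi):
    f, pxx = welch(x, fs=sr, nperseg=int(sr))      # ~1-second segments
    mask = (f >= lo) & (f < hi)
    return np.trapz(pxx[mask], f[mask])

def asymmetry_features(eeg, sr, pairs, bands=((4, 8), (8, 13), (13, 30))):
    """eeg: dict channel name -> 1-D array; pairs: e.g. [("F3", "F4"), ...]."""
    feats = []
    for left, right in pairs:
        for lo, hi in bands:
            feats.append(np.log(band_power(eeg[left], sr, lo, hi)) -
                         np.log(band_power(eeg[right], sr, lo, hi)))
    return np.asarray(feats)  # one feature per (pair, band)
```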
Music therapy students' recognition of popular song repertoire for geriatric clients.
Vanweelden, Kimberly; Juchniewicz, Jay; Cevasco, Andrea M
2008-01-01
Previous research has found that music therapists who work with geriatric clients in singing activities report knowing and using three times more popular or popular-style music (songs from musicals) than folk songs. The purposes of the current study were to determine music therapy majors' recognition of popular songs and songs from musicals by asking whether they: (a) had heard the songs before, (b) could "name the tune" of each song, and (c) could list the decade in which each song was composed. Results showed that students had previously heard many of the songs; however, this was not an indication of whether they could name the song title or the decade in which it was composed. Additionally, percentage data indicated that My Favorite Things and You Are My Sunshine were the most heard/recognized songs, Over the Rainbow was the most correctly named song title, and Five Foot Two, Eyes of Blue was the song most correctly identified by decade. Further results and discussion are included.
Modularity of music: evidence from a case of pure amusia.
Piccirilli, M; Sciarma, T; Luzzi, S
2000-10-01
A case of pure amusia in a 20-year-old left-handed non-professional musician is reported. The patient showed an impairment of music abilities in the presence of normal processing of speech and environmental sounds. Furthermore, whereas recognition and production of melodic sequences were grossly disturbed, both the recognition and production of rhythm patterns were preserved. This selective breakdown pattern was produced by a focal lesion in the left superior temporal gyrus. This case thus suggests that not only linguistic and musical skills, but also melodic and rhythmic processing, are independent of each other. This functional dissociation in the musical domain supports the hypothesis that music components have a modular organisation. Furthermore, there is the suggestion that amusia may be produced by a lesion located strictly in one hemisphere and that the superior temporal gyrus plays a crucial part in melodic processing.
Melodic Contour Identification and Music Perception by Cochlear Implant Users
Galvin, John J.; Fu, Qian-Jie; Shannon, Robert V.
2013-01-01
Research and outcomes with cochlear implants (CIs) have revealed a dichotomy in the cues necessary for speech and music recognition. CI devices typically transmit 16–22 spectral channels, each modulated slowly in time. This coarse representation provides enough information to support speech understanding in quiet and rhythmic perception in music, but not enough to support speech understanding in noise or melody recognition. Melody recognition requires some capacity for complex pitch perception, which in turn depends strongly on access to spectral fine structure cues. Thus, temporal envelope cues are adequate for speech perception under optimal listening conditions, while spectral fine structure cues are needed for music perception. In this paper, we present recent experiments that directly measure CI users' melodic pitch perception using a melodic contour identification (MCI) task. While normal-hearing (NH) listeners' performance was consistently high across experiments, MCI performance was highly variable across CI users. CI users' MCI performance was significantly affected by instrument timbre, as well as by the presence of a competing instrument. In general, CI users had great difficulty extracting melodic pitch from complex stimuli. However, musically experienced CI users often performed as well as NH listeners, and MCI training in less experienced subjects greatly improved performance. With fixed constraints on spectral resolution, such as those imposed by hearing loss or an auditory prosthesis, training and experience can provide considerable improvements in music perception and appreciation. PMID:19673835
Facial Recognition of Happiness Is Impaired in Musicians with High Music Performance Anxiety.
Sabino, Alini Daniéli Viana; Camargo, Cristielli M; Chagas, Marcos Hortes N; Osório, Flávia L
2018-01-01
Music performance anxiety (MPA) can be defined as a lasting and intense apprehension connected with musical performance in public. Studies suggest that MPA can be regarded as a subtype of social anxiety. Since individuals with social anxiety have deficits in the recognition of facial emotion, we hypothesized that musicians with high levels of MPA would share similar impairments. The aim of this study was to compare parameters of facial emotion recognition (FER) between musicians with high and low MPA. A total of 150 amateur and professional musicians with different musical backgrounds were assessed with respect to their level of MPA and completed a dynamic FER task. The outcomes investigated were accuracy, response time, emotional intensity, and response bias. Musicians with high MPA were less accurate in the recognition of happiness (p = 0.04; d = 0.34), had an increased response bias toward fear (p = 0.03), and had increased response time to facial emotions as a whole (p = 0.02; d = 0.39). Musicians with high MPA displayed FER deficits that were independent of general anxiety levels and possibly of general cognitive capacity. These deficits may favor the maintenance and exacerbation of experiences of anxiety during public performance, since cues of approval, satisfaction, and encouragement are not adequately recognized.
Gu, Xin; Liu, Bo; Liu, Ziye; Qi, Beier; Wang, Shuo; Dong, Ruijuan; Chen, Xueqing; Zhou, Qian
2017-12-01
The aim was to evaluate the development of music and lexical tone perception in Mandarin-speaking adult cochlear implant (CI) users over a period of 1 year. Prospective patient series. Tertiary hospital and research institute. Twenty-five adult CI users, with ages ranging from 19 to 75 years, participated in a year-long follow-up evaluation. Forty normal-hearing adult subjects also participated as a control group to provide the normal value range. The Musical Sounds in Cochlear Implants (Mu.S.I.C.) test battery was administered to evaluate music perception ability. The Mandarin Tone Identification in Noise Test (M-TINT) was used to assess lexical tone recognition. The tests for CI users were completed at 1, 3, 6, and 12 months after CI switch-on. Quantitative and statistical analyses of the results from the music and tone perception tests were performed. Performance on music perception and tone recognition both demonstrated an overall improvement during the entire 1-year follow-up. The increasing trends were most obvious in the early period, especially in the first 6 months after switch-on. There was a significant improvement in melody discrimination (p < 0.01), timbre identification (p < 0.001), tone recognition in quiet (p < 0.0001), and tone recognition in noise (p < 0.0001). Adult Mandarin-speaking CI users show increasingly improved performance on music and tone perception during the 1-year follow-up. The improvement was most prominent in the first 6 months of CI use. It is essential to strengthen rehabilitation training within the first 6 months.
Separation of Singing Voice from Music Accompaniment for Monaural Recordings
Li, Yipeng
2005-09-01
Separating singing voice from music accompaniment is very useful in many applications, such as lyrics recognition and alignment, singer identification, and music information retrieval. Although speech separation has been extensively studied for decades, singing voice separation has been little…
Herff, Steffen A; Olsen, Kirk N; Dean, Roger T
2018-05-01
In many memory domains, a decrease in recognition performance between the first and second presentation of an object is observed as the number of intervening items increases. However, this effect is not universal. Within the auditory domain, this form of interference has been demonstrated in word and single-note recognition, but has yet to be substantiated using relatively complex musical material such as a melody. Indeed, it is becoming clear that music shows intriguing properties when it comes to memory. This study investigated how the number of intervening items influences memory for melodies. In Experiments 1, 2 and 3, one melody was presented per trial in a continuous recognition paradigm. After each melody, participants indicated whether they had heard the melody in the experiment before by responding "old" or "new." In Experiment 4, participants rated perceived familiarity for every melody without being told that melodies reoccur. In four experiments using two corpora of music, two different memory tasks, transposed and untransposed melodies and up to 195 intervening melodies, no sign of a disruptive effect from the number of intervening melodies beyond the first was observed. We propose a new "regenerative multiple representations" conjecture to explain why intervening items increase interference in recognition memory for most domains but not music. This conjecture makes several testable predictions and has the potential to strengthen our understanding of domain specificity in human memory, while moving one step closer to explaining the "paradox" that is memory for melody.
Cross, Kara; Flores, Roberto; Butterfield, Jacyln; Blackman, Melinda; Lee, Stephanie
2012-10-01
The study examined the effects of music therapy and dance/movement therapy on cognitively impaired and mild to moderately depressed older adults. Passive listening to music and active observation of dance accompanied by music were studied in relation to memory enhancement and relief of depressive symptoms in 100 elderly board-and-care residents. The Beck Depression Inventory and the Recognition Memory Test-Faces Inventory were administered to two groups (one group exposed to a live 30-minute session of musical dance observation, the other to 30 minutes of pre-recorded music alone) before the intervention and again 3 and 10 days after the intervention. Scores improved for both groups on both measures following the interventions, but the group exposed to dance therapy had significantly lower Beck Depression scores that lasted longer. These findings suggest that active observation of dance/movement therapy could play a role in temporarily alleviating moderate depressive symptoms and some cognitive deficits in older adults.
Learning and liking an artificial musical system: Effects of set size and repeated exposure
Loui, Psyche; Wessel, David
2009-01-01
We report an investigation of humans' musical learning ability using a novel musical system. We designed an artificial musical system based on the Bohlen-Pierce scale, a scale very different from Western music. Melodies were composed from chord progressions in the new scale by applying the rules of a finite-state grammar. After exposing participants to sets of melodies, we conducted listening tests to assess learning, including recognition tests, generalization tests, and subjective preference ratings. In Experiment 1, participants were presented with 15 melodies 27 times each. Forced choice results showed that participants were able to recognize previously encountered melodies and generalize their knowledge to new melodies, suggesting internalization of the musical grammar. Preference ratings showed no differentiation among familiar, new, and ungrammatical melodies. In Experiment 2, participants were given 10 melodies 40 times each. Results showed superior recognition but unsuccessful generalization. Additionally, preference ratings were significantly higher for familiar melodies. Results from the two experiments suggest that humans can internalize the grammatical structure of a new musical system following exposure to a sufficiently large set size of melodies, but musical preference results from repeated exposure to a small number of items. This dissociation between grammar learning and preference will be further discussed. PMID:20151034
The contribution of local features to familiarity judgments in music.
Bigand, Emmanuel; Gérard, Yannick; Molin, Paul
2009-07-01
The contributions of local and global features to object identification depend upon the context. For example, while local features play an essential role in identification of words and objects, the global features are more influential in face recognition. In order to evaluate the respective strengths of local and global features for face recognition, researchers usually ask participants to recognize human faces (famous or learned) in normal and scrambled pictures. In this paper, we address a similar issue in music. We present the results of an experiment in which musically untrained participants were asked to differentiate famous from unknown musical excerpts that were presented in normal or scrambled ways. Manipulating the size of the temporal window on which the scrambling procedure was applied allowed us to evaluate the minimal length of time necessary for participants to make a familiarity judgment. Quite surprisingly, the minimum duration for differentiation of famous from unknown pieces is extremely short. This finding highlights the contribution of very local features to music memory.
VanWeelden, Kimberly; Cevasco, Andrea M
2010-01-01
The purposes of the current study were to determine geriatric clients' recognition of 32 popular songs and songs from musicals by asking whether they: (a) had heard the songs before; (b) could "name the tune" of each song; and (c) could list the decade in which each song was composed. Additionally, comparisons were made between the geriatric clients' recognition of these songs and music therapy students' recognition of the same songs, based on data from an earlier study (VanWeelden, Juchniewicz, & Cevasco, 2008). Results showed that 90% or more of the geriatric clients had heard 28 of the 32 songs, 80% or more of the graduate students had heard 20 songs, and 80% of the undergraduates had heard 18 songs. The geriatric clients correctly identified 3 songs with 80% or greater accuracy, which the graduate students also correctly identified, while the undergraduates identified 2 of the same 3 songs. Geriatric clients identified the decades of 3 songs with 50% or greater accuracy. Neither the undergraduate nor the graduate students identified any songs by the correct decade with over 50% accuracy. Further results are discussed.
Gorin, Simon; Kowialiewski, Benjamin; Majerus, Steve
2016-01-01
Several models in the verbal domain of short-term memory (STM) consider a dissociation between item and order processing. This view is supported by data demonstrating that different types of time-based interference have a greater effect on memory for the order of to-be-remembered items than on memory for the items themselves. The present study investigated the domain-generality of the item versus serial order dissociation by comparing the differential effects of time-based interfering tasks, such as rhythmic interference and articulatory suppression, on item and order processing in verbal and musical STM domains. In Experiment 1, participants had to maintain sequences of verbal or musical information in STM, followed by a probe sequence, under different conditions of interference (no interference, rhythmic interference, articulatory suppression). They were required to decide whether all items of the probe list matched those of the memory list (item condition) or whether the order of the items in the probe sequence matched the order in the memory list (order condition). In Experiment 2, participants performed a serial order probe recognition task for verbal and musical sequences ensuring sequential maintenance processes, under no-interference or rhythmic interference conditions. In Experiment 1, serial order recognition was not significantly more affected by the interfering tasks than was item recognition, in both verbal and musical domains. In Experiment 2, we observed selective interference of the rhythmic interference condition on both musical and verbal order STM tasks. Overall, the results suggest a similar and selective sensitivity to time-based interference for serial order STM in verbal and musical domains, but only when the STM tasks ensure sequential maintenance processes.
Brown, Laura S
2017-03-01
Children with autism spectrum disorder (ASD) often struggle with social skills, including the ability to perceive emotions based on facial expressions. Research evidence suggests that many individuals with ASD can perceive emotion in music. Examining whether music can be used to enhance recognition of facial emotion by children with ASD would inform the development of music therapy interventions. The purpose of this study was to investigate the influence of music with a strong emotional valence (happy; sad) on the ability of children with ASD to label emotions depicted in facial photographs, and on their response time. Thirty neurotypical children and 20 children with high-functioning ASD rated expressions of happy, neutral, and sad in 30 photographs under two music listening conditions (sad music; happy music). During each music listening condition, participants rated the 30 images using a 7-point scale that ranged from very sad to very happy. Response time data were also collected across both conditions. A significant two-way interaction revealed that participants' ratings of happy and neutral faces were unaffected by music conditions, but sad faces were perceived to be sadder with sad music than with happy music. Across both conditions, neurotypical children rated the happy faces as happier and the sad faces as sadder than did participants with ASD. Response times of the neurotypical children were consistently shorter than response times of the children with ASD; both groups took longer to rate sad faces than happy faces. Response times of neurotypical children were generally unaffected by the valence of the music condition; however, children with ASD took longer to respond when listening to sad music. Music appears to affect perceptions of emotion in children with ASD, and perceptions of sad facial expressions seem to be more affected by emotionally congruent background music than are perceptions of happy or neutral faces.
Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients.
Gfeller, K; Christ, A; Knutson, J F; Witt, S; Murray, K T; Tyler, R S
2000-01-01
This paper describes the listening habits and musical enjoyment of postlingually deafened adults who use cochlear implants. Sixty-five implant recipients (35 females, 30 males) participated in a survey containing questions about musical background, prior involvement in music, and audiologic success with the implant in various listening circumstances. Responses were correlated with measures of cognition and speech recognition. Sixty-seven implant recipients completed daily diaries (7 consecutive days) in which they reported hours spent in specific music activities. Results indicate a wide range of success with music. In general, people enjoy music less postimplantation than prior to hearing loss. Musical enjoyment is influenced by the listening environment (e.g., a quiet room) and features of the music.
Music causes deterioration of source memory: evidence from normal ageing.
El Haj, Mohamad; Omigie, Diana; Clément, Sylvain
2014-01-01
Previous research has shown that music exposure can impair performance on a wide variety of cognitive and behavioural tasks. We investigated whether this is the case for source memory. Forty-one younger adults and 35 healthy older adults were required to retain the location in which pictures of coloured objects were displayed. On a subsequent recognition test they were required to decide whether or not the objects were displayed in the same location as before. Encoding took place (a) in silence, (b) while listening to street noise, or (c) while listening to Vivaldi's "Four Seasons". Recognition always took place in silence. A significant reduction in source memory was observed following music exposure, a reduction that was more pronounced for older adults than for younger adults. This pattern was significantly correlated with performance on an executive binding task. Exposure to music appeared to interfere with binding in working memory, worsening source recall.
Using singing to nurture children's hearing? A pilot study.
Welch, Graham F; Saunders, Jo; Edwards, Sian; Palmer, Zoe; Himonides, Evangelos; Knight, Julian; Mahon, Merle; Griffin, Susanna; Vickers, Deborah A
2015-09-01
This article reports a pilot study of the potential benefits of a sustained programme of singing activities on the musical behaviours and hearing acuity of young children with hearing impairment (HI). Twenty-nine children (n=12 HI and n=17 normal hearing, NH) aged between 5 and 7 years from an inner-city primary school in London participated, following appropriate ethical approval. The predominantly classroom-based programme was designed by colleagues from the UCL Institute of Education and UCL Ear Institute in collaboration with the multi-arts charity Creative Futures and was delivered by an experienced early years music specialist weekly across two school terms. There was a particular emphasis on building a repertoire of simple songs with actions and allied vocal exploration. Musical learning was also supported by activities that drew on visual imagery for sound and that included simple notation and physical gesture. An overall impact assessment of the pilot programme embraced pre- and post-intervention measures of pitch discrimination, speech perception in noise, and singing competency. Subsequent statistical analyses suggest that the programme had a positive impact on participant children's singing range, particularly (but not only) for HI children with hearing aids, and also on their singing skills. HI children's pitch perception also improved measurably over time. The findings imply that all children, including those with HI, can benefit from regular and sustained access to age-appropriate musical activities.
Impaired perception of harmonic complexity in congenital amusia: a case study.
Reed, Catherine L; Cahn, Steven J; Cory, Christopher; Szaflarski, Jerzy P
2011-07-01
This study investigates whether congenital amusia (an inability to perceive music from birth) also impairs the perception of musical qualities that do not rely on fine-grained pitch discrimination. We established that G.G. (64-year-old male, age-typical hearing) met the criteria of congenital amusia and demonstrated music-specific deficits (e.g., language processing, intonation, prosody, fine-grained pitch processing, pitch discrimination, identification of discrepant tones and direction of pitch for tones in a series, pitch discrimination within scale segments, predictability of tone sequences, recognition versus knowing memory for melodies, and short-term memory for melodies). Next, we conducted tests of tonal fusion, harmonic complexity, and affect perception: recognizing timbre, assessing consonance and dissonance, and recognizing musical affect from harmony. G.G. displayed relatively unimpaired perception and production of environmental sounds, prosody, and emotion conveyed by speech compared with impaired fine-grained pitch perception, tonal sequence discrimination, and melody recognition. Importantly, G.G. could not perform tests of tonal fusion that do not rely on pitch discrimination: He could not distinguish concurrent notes, timbre, consonance/dissonance, simultaneous notes, and musical affect. Results indicate at least three distinct problems-one with pitch discrimination, one with harmonic simultaneity, and one with musical affect-and each has distinct consequences for music perception.
Aucouturier, Jean-Julien; Defreville, Boris; Pachet, François
2007-08-01
The "bag-of-frames" approach (BOF) to audio pattern recognition represents signals as the long-term statistical distribution of their local spectral features. This approach has proved nearly optimal for simulating the auditory perception of natural and human environments (or soundscapes), and is also the most predominent paradigm to extract high-level descriptions from music signals. However, recent studies show that, contrary to its application to soundscape signals, BOF only provides limited performance when applied to polyphonic music signals. This paper proposes to explicitly examine the difference between urban soundscapes and polyphonic music with respect to their modeling with the BOF approach. First, the application of the same measure of acoustic similarity on both soundscape and music data sets confirms that the BOF approach can model soundscapes to near-perfect precision, and exhibits none of the limitations observed in the music data set. Second, the modification of this measure by two custom homogeneity transforms reveals critical differences in the temporal and statistical structure of the typical frame distribution of each type of signal. Such differences may explain the uneven performance of BOF algorithms on soundscapes and music signals, and suggest that their human perception rely on cognitive processes of a different nature.
Musical and Verbal Memory in Alzheimer's Disease: A Study of Long-Term and Short-Term Memory
ERIC Educational Resources Information Center
Menard, Marie-Claude; Belleville, Sylvie
2009-01-01
Musical memory was tested in Alzheimer patients and in healthy older adults using long-term and short-term memory tasks. Long-term memory (LTM) was tested with a recognition procedure using unfamiliar melodies. Short-term memory (STM) was evaluated with same/different judgment tasks on short series of notes. Musical memory was compared to verbal…
ERIC Educational Resources Information Center
Stephenson, K. G.; Quintin, E. M.; South, M.
2016-01-01
While research regarding emotion recognition in ASD has focused primarily on social cues, musical stimuli also elicit strong emotional responses. This study extends and expands the few previous studies of response to music in ASD, measuring both psychophysiological and behavioral responses in younger children (ages 8-11) as well as older…
Neural correlates of audiovisual integration in music reading.
Nichols, Emily S; Grahn, Jessica A
2016-10-01
Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity, MMN) as well as at later stages (the P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration in music appears similar to that in reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 responses to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration.
How does the brain process music?
Warren, Jason
2008-02-01
The organisation of the musical brain is a major focus of interest in contemporary neuroscience. This reflects the increasing sophistication of tools (especially imaging techniques) to examine brain anatomy and function in health and disease, and the recognition that music provides unique insights into a number of aspects of nonverbal brain function. The emerging picture is complex but coherent, and moves beyond older ideas of music as the province of a single brain area or hemisphere to the concept of music as a 'whole-brain' phenomenon. Music engages a distributed set of cortical modules that process different perceptual, cognitive and emotional components with varying selectivity. 'Why' rather than 'how' the brain processes music is a key challenge for the future.
Weber, K
1977-01-01
Music is a structure ('Gestalt') in time. Recognizing disturbances in the perception of music enhances our knowledge of disorders of time perception. Disturbances of the perception of music and time in experimental psychoses (psilocybin) are discussed in relation to Piaget's studies on the development of the notion of time in childhood. The results allow a new interpretation of the disturbances of time perception in diencephalic disorders described in the literature.
Memory for music in Alzheimer's disease: unforgettable?
Baird, Amee; Samson, Séverine
2009-03-01
The notion that memory for music can be preserved in patients with Alzheimer's Disease (AD) has been raised by a number of case studies. In this paper, we review the current research examining musical memory in patients with AD. In keeping with models of memory described in the non-musical domain, we propose that various forms of musical memory exist, and may be differentially impaired in AD, reflecting the pattern of neuropathological changes associated with the condition. Our synthesis of this literature reveals a dissociation between explicit and implicit musical memory functions. Implicit, specifically procedural musical memory, or the ability to play a musical instrument, can be spared in musicians with AD. In contrast, explicit musical memory, or the recognition of familiar or unfamiliar melodies, is typically impaired. Thus, the notion that music is unforgettable in AD is not wholly supported. Rather, it appears that the ability to play a musical instrument may be unforgettable in some musicians with AD.
Investigation on the music perception skills of Italian children with cochlear implants.
Scorpecci, Alessandro; Zagari, Felicia; Mari, Giorgia; Giannantonio, Sara; D'Alatri, Lucia; Di Nardo, Walter; Paludetti, Gaetano
2012-10-01
To compare the music perception skills of a group of Italian-speaking children with cochlear implants to those of a group of normal-hearing children, and to analyze possible correlations between implanted children's musical skills and their demographics, clinical characteristics, phonological perception, and speech recognition and production abilities. Eighteen implanted children aged 5-12 years and a reference group of 23 normal-hearing subjects with typical language development were enrolled. Both groups received a melody identification test and a song (i.e., original version) identification test. The implanted children also received a test battery aimed at assessing speech recognition, speech production, and phoneme discrimination. The implanted children scored significantly worse than the normal-hearing subjects on both musical tests. In the cochlear implant group, phoneme discrimination abilities were significantly correlated with both melody and song identification skills, and length of device use was significantly correlated with song identification skills. Experience with device use and phonological perception had a moderate-to-strong correlation with implanted children's music perception abilities. In light of these findings, it is reasonable to assume that a rehabilitation program specifically aimed at improving phonological perception could help pediatric cochlear implant recipients better understand the basic elements of music; moreover, training aimed at improving comprehension of the spectral elements of music could enhance implanted children's phonological skills.
Golden, Hannah L; Clark, Camilla N; Nicholas, Jennifer M; Cohen, Miriam H; Slattery, Catherine F; Paterson, Ross W; Foulkes, Alexander J M; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D
2017-01-01
Despite much recent interest in music and dementia, music perception has not been widely studied across dementia syndromes using an information processing approach. Here we addressed this issue in a cohort of 30 patients representing major dementia syndromes of typical Alzheimer's disease (AD, n = 16), logopenic aphasia (LPA, an Alzheimer variant syndrome; n = 5), and progressive nonfluent aphasia (PNFA; n = 9) in relation to 19 healthy age-matched individuals. We designed a novel neuropsychological battery to assess perception of musical patterns in the dimensions of pitch and temporal information (requiring detection of notes that deviated from the established pattern based on local or global sequence features) and musical scene analysis (requiring detection of a familiar tune within polyphonic harmony). Performance on these tests was referenced to generic auditory (timbral) deviance detection and recognition of familiar tunes and adjusted for general auditory working memory performance. Relative to healthy controls, patients with AD and LPA had group-level deficits of global pitch (melody contour) processing while patients with PNFA as a group had deficits of local (interval) as well as global pitch processing. There was substantial individual variation within syndromic groups. Taking working memory performance into account, no specific deficits of musical temporal processing, timbre processing, musical scene analysis, or tune recognition were identified. The findings suggest that particular aspects of music perception such as pitch pattern analysis may open a window on the processing of information streams in major dementia syndromes. The potential selectivity of musical deficits for particular dementia syndromes and particular dimensions of processing warrants further systematic investigation.
Humans Rapidly Learn Grammatical Structure in a New Musical Scale
Loui, Psyche; Wessel, David L.; Hudson Kam, Carla L.
2010-01-01
Knowledge of musical rules and structures has been reliably demonstrated in humans of different ages, cultures, and levels of music training, and has been linked to our musical preferences. However, how humans acquire knowledge of and develop preferences for music remains unknown. The present study shows that humans rapidly develop knowledge and preferences when given limited exposure to a new musical system. Using a non-traditional, unfamiliar musical scale (Bohlen-Pierce scale), we created finite-state musical grammars from which we composed sets of melodies. After 25–30 min of passive exposure to the melodies, participants showed extensive learning as characterized by recognition, generalization, and sensitivity to the event frequencies in their given grammar, as well as increased preference for repeated melodies in the new musical system. Results provide evidence that a domain-general statistical learning mechanism may account for much of the human appreciation for music. PMID:20740059
Instrument-independent analysis of music by means of the continuous wavelet transform
NASA Astrophysics Data System (ADS)
Olmo, Gabriella; Dovis, Fabio; Benotto, Paolo; Calosso, Claudio; Passaro, Pierluigi
1999-10-01
This paper deals with the problem of automatic recognition of music. Segments of digitized music are processed by means of a continuous wavelet transform, properly chosen so as to match the spectral characteristics of the signal. In order to achieve a good time-scale representation of the signal components, a novel wavelet suited to the features of musical signals has been designed. Particular care has been devoted to an efficient implementation, which operates in the frequency domain and includes proper segmentation and aliasing reduction techniques to make the analysis of long signals feasible. The method achieves very good performance in terms of both time and frequency selectivity, and can yield the estimate and the localization in time of both the fundamental frequency and the main harmonics of each tone. The analysis is used as a preprocessing step for a recognition algorithm, which we show to be almost independent of the instrument reproducing the sounds. Simulations are provided to demonstrate the effectiveness of the proposed method.
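As a rough sketch of the time-scale analysis described, the code below runs a standard Morlet CWT (via PyWavelets) over a synthetic tone and reads off the dominant frequency over time; the authors' custom wavelet and frequency-domain implementation are not reproduced here.

```python
# CWT-based fundamental-frequency sketch, assuming the stock Morlet wavelet
# in place of the custom wavelet designed in the paper.
import numpy as np
import pywt

sr = 8000
t = np.arange(0, 1.0, 1.0 / sr)
tone = np.sin(2 * np.pi * 440.0 * t)                # A4 test tone

# Choose scales so the Morlet wavelet covers roughly 100 Hz to 1 kHz
# (for this wavelet, frequency = center_freq * sr / scale).
freqs_target = np.geomspace(100.0, 1000.0, 64)
scales = pywt.central_frequency('morl') * sr / freqs_target

coefs, freqs = pywt.cwt(tone, scales, 'morl', sampling_period=1.0 / sr)
energy = np.abs(coefs) ** 2                         # (n_scales, n_samples)
f0_track = freqs[np.argmax(energy, axis=0)]         # dominant frequency/sample
print(f"median f0 estimate: {np.median(f0_track):.1f} Hz")  # expect ~440
```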
Optical music recognition on the International Music Score Library Project
NASA Astrophysics Data System (ADS)
Raphael, Christopher; Jin, Rong
2013-12-01
A system is presented for optical recognition of music scores. The system processes a document page in three main phases. First it performs a hierarchical decomposition of the page, identifying systems, staves and measures. The second phase, which forms the heart of the system, interprets each measure found in the previous phase as a collection of non-overlapping symbols including both primitive symbols (clefs, rests, etc.) with fixed templates, and composite symbols (chords, beamed groups, etc.) constructed through grammatical composition of primitives (note heads, ledger lines, beams, etc.). This phase proceeds by first building separate top-down recognizers for the symbols of interest. Then, it resolves the inevitable overlap between the recognized symbols by exploring the possible assignment of overlapping regions, seeking globally optimal and grammatically consistent explanations. The third phase interprets the recognized symbols in terms of pitch and rhythm, focusing on the main challenge of rhythm. We present results that compare our system to the leading commercial OMR system using MIDI ground truth for piano music.
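The first phase described above (locating systems, staves, and measures) can be illustrated with a classic OMR building block: finding staff-line rows in a binarized page via a horizontal projection profile. This is only a sketch of the idea; the paper's hierarchical decomposition is far more elaborate.

```python
# Sketch of staff-line localization by horizontal projection: staff lines
# show up as pixel rows with an unusually high fraction of black ink.
import numpy as np

def find_staff_line_rows(binary_page, threshold=0.5):
    """binary_page: 2-D array with 1 = black ink, 0 = background."""
    row_ink = binary_page.mean(axis=1)              # ink fraction per pixel row
    candidates = np.flatnonzero(row_ink > threshold * row_ink.max())
    lines, run = [], []
    for r in candidates:                            # collapse consecutive rows
        if run and r != run[-1] + 1:
            lines.append(int(np.mean(run)))
            run = []
        run.append(r)
    if run:
        lines.append(int(np.mean(run)))
    return lines                                    # row indices of staff lines
```

Groups of five roughly evenly spaced detected rows would then be interpreted as staves, from which measures and symbols are located in the later phases.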
Unforgettable film music: The role of emotion in episodic long-term memory for music
Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O
2008-01-01
Background: Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Results: Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Conclusion: Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval. PMID:18505596
ERIC Educational Resources Information Center
Spahr, Anthony J.; Litvak, Leonid M.; Dorman, Michael F.; Bohanan, Ashley R.; Mishra, Lakshmi N.
2008-01-01
Purpose: To determine why, in a pilot study, only 1 of 11 cochlear implant listeners was able to reliably identify a frequency-to-electrode map where the intervals of a familiar melody were played on the correct musical scale. The authors sought to validate their method and to assess the effect of pitch strength on musical scale recognition in…
Data-Driven Process Discovery: A Discrete Time Algebra for Relational Signal Analysis
1996-12-01
Burnout: How to Spot It, How to Avoid It.
ERIC Educational Resources Information Center
Hamann, Donald L.
1990-01-01
Observes that master music teachers' intensity and commitment make them good candidates for burnout. Reports on contributing factors mentioned by music teachers: lack of recognition and support, unclear goals, poor curricular coordination, and poor working conditions. Offers suggestions for combating burnout including exercise, networking with…
Melodic contour identification by cochlear implant listeners.
Galvin, John J; Fu, Qian-Jie; Nogaki, Geraldine
2007-06-01
While the cochlear implant provides many deaf patients with good speech understanding in quiet, music perception and appreciation with the cochlear implant remain a major challenge for most cochlear implant users. The present study investigated whether a closed-set melodic contour identification (MCI) task could be used to quantify cochlear implant users' ability to recognize musical melodies and whether MCI performance could be improved with moderate auditory training. The present study also compared MCI performance with familiar melody identification (FMI) performance, with and without MCI training. For the MCI task, test stimuli were melodic contours composed of 5 notes of equal duration whose frequencies corresponded to musical intervals. The interval between successive notes in each contour was varied between 1 and 5 semitones; the "root note" of the contours was also varied (A3, A4, and A5). Nine distinct musical patterns were generated for each interval and root note condition, resulting in a total of 135 musical contours. The identification of these melodic contours was measured in 11 cochlear implant users. FMI was also evaluated in the same subjects; recognition of 12 familiar melodies was tested with and without rhythm cues. MCI was also trained in 6 subjects, using custom software and melodic contours presented in a different frequency range from that used for testing. Results showed that MCI performance was highly variable among cochlear implant users, ranging from 14% to 91% correct. For most subjects, MCI performance improved as the number of semitones between successive notes was increased; performance was slightly lower for the A3 root note condition. Mean FMI performance was 58% correct when rhythm cues were preserved and 29% correct when rhythm cues were removed. Statistical analyses revealed no significant correlation between MCI performance and FMI performance (with or without rhythmic cues). However, MCI performance was significantly correlated with vowel recognition performance; FMI performance was not correlated with cochlear implant subjects' phoneme recognition performance. Preliminary results also showed that MCI training improved all subjects' MCI performance; the improved MCI performance also generalized to improved FMI performance. Preliminary data indicate that the closed-set MCI task is a viable approach to quantifying an important component of cochlear implant users' music perception. The improvement in MCI performance and its generalization to FMI performance with training suggest that MCI training may be useful for improving cochlear implant users' music perception and appreciation; such training may be necessary to properly evaluate patient performance, as acute measures may underestimate the amount of musical information transmitted by the cochlear implant device and received by cochlear implant listeners.
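The stimulus design above lends itself to a small worked example: 9 contour patterns × 5 semitone spacings × 3 root notes = 135 contours. In the sketch below, the step sequences are illustrative reconstructions of typical MCI contour shapes (rising, falling, flat, and combinations), not the authors' exact definitions.

```python
# Worked example of the MCI stimulus space: 9 patterns x 5 spacings x 3 roots.
# Pattern step sequences are illustrative, not the authors' exact shapes.
import itertools

PATTERNS = {                     # relative steps, in units of the spacing
    "rising":         [0, 1, 2, 3, 4],
    "falling":        [4, 3, 2, 1, 0],
    "flat":           [0, 0, 0, 0, 0],
    "rising-flat":    [0, 1, 2, 2, 2],
    "falling-flat":   [4, 3, 2, 2, 2],
    "rising-falling": [0, 1, 2, 1, 0],
    "falling-rising": [2, 1, 0, 1, 2],
    "flat-rising":    [0, 0, 0, 1, 2],
    "flat-falling":   [2, 2, 2, 1, 0],
}
ROOTS_HZ = {"A3": 220.0, "A4": 440.0, "A5": 880.0}

def contour_freqs(pattern, spacing, root_hz):
    """Frequencies of the 5 notes; each step is `spacing` semitones."""
    return [root_hz * 2 ** (step * spacing / 12.0) for step in PATTERNS[pattern]]

stimuli = [(p, s, r) for p, s, r in
           itertools.product(PATTERNS, range(1, 6), ROOTS_HZ)]
assert len(stimuli) == 135       # 9 patterns x 5 spacings x 3 roots
```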
Selective preservation of the beat in apperceptive music agnosia: a case study.
Baird, Amee D; Walker, David G; Biggs, Vivien; Robinson, Gail A
2014-04-01
Music perception involves processing of melodic, temporal, and emotional dimensions that have been found to dissociate in healthy individuals and after brain injury. Two components of the temporal dimension have been distinguished, namely rhythm and metre. We describe an 18-year-old male musician, 'JM', who showed apperceptive music agnosia with selectively preserved metre perception, and impaired recognition of sad and peaceful music relative to age- and music-experience-matched controls, after resection of a right temporoparietal tumour. Two months post-surgery JM underwent a comprehensive neuropsychological evaluation including assessment of his music perception abilities using the Montreal Battery for Evaluation of Amusia (MBEA; Peretz, Champod, & Hyde, 2003). He also completed several experimental tasks to explore his ability to recognise famous songs and melodies, emotions portrayed by music, and a broader range of environmental sounds. Five age-, gender-, education-, and musical-experience-matched controls were administered the same experimental tasks. JM showed selective preservation of metre perception, with impaired performances compared to controls and scores below the 5% cut-off on all MBEA subtests except the metric condition. He could identify his favourite songs and environmental sounds. He showed impaired recognition of sad and peaceful emotions portrayed in music relative to controls, but an intact ability to identify happy and scary music. This case study contributes to the scarce literature documenting a dissociation between rhythmic and metric processing, and the rare observation of selectively preserved metric interpretation in the context of apperceptive music agnosia. It supports the notion that the anterior portion of the superior temporal gyrus (STG) plays a role in metric processing and provides the novel observation that selectively preserved metre is sufficient to identify happy and scary, but not sad or peaceful, emotions portrayed in music.
Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie
2014-01-01
Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087
Auditory-motor learning influences auditory memory for music.
Brown, Rachel M; Palmer, Caroline
2012-05-01
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Sex Differences in Music: A Female Advantage at Recognizing Familiar Melodies.
Miles, Scott A; Miranda, Robbin A; Ullman, Michael T
2016-01-01
Although sex differences have been observed in various cognitive domains, there has been little work examining sex differences in the cognition of music. We tested the prediction that women would be better than men at recognizing familiar melodies, since memories of specific melodies are likely to be learned (at least in part) by declarative memory, which shows female advantages. Participants were 24 men and 24 women, with half musicians and half non-musicians in each group. The two groups were matched on age, education, and various measures of musical training. Participants were presented with well-known and novel melodies, and were asked to indicate their recognition of familiar melodies as rapidly as possible. The women were significantly faster than the men in responding, with a large effect size. The female advantage held across musicians and non-musicians, and across melodies with and without commonly associated lyrics, as evidenced by an absence of interactions between sex and these factors. Additionally, the results did not seem to be explained by sex differences in response biases, or in basic motor processes as tested in a control task. Though caution is warranted given that this is the first study to examine sex differences in familiar melody recognition, the results are consistent with the hypothesis motivating our prediction, namely that declarative memory underlies knowledge about music (particularly about familiar melodies), and that the female advantage at declarative memory may thus lead to female advantages in music cognition (particularly at familiar melody recognition). Additionally, the findings argue against the view that female advantages at tasks involving verbal (or verbalizable) material are due solely to a sex difference specific to the verbal domain. Further, the results may help explain previously reported cognitive commonalities between music and language: since declarative memory also underlies language, such commonalities may be partly due to a common dependence on this memory system. More generally, because declarative memory is well studied at many levels, evidence that music cognition depends on this system may lead to a powerful research program generating a wide range of novel predictions for the neurocognition of music, potentially advancing the field.
Instrument classification in polyphonic music based on timbre analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong
2001-07-01
While most previous work on musical instrument recognition has focused on the classification of single notes in monophonic music, a scheme is proposed in this paper for distinguishing instruments in continuous music pieces which may contain one or more kinds of instruments. Highlights of the system include music segmentation into notes, harmonic partial estimation in polyphonic sound, note feature calculation and normalization, note classification using a set of neural networks, and music piece categorization with fuzzy logic principles. Example outputs of the system are "the music piece is 100% guitar (with 90% likelihood)" and "the music piece is 60% violin and 40% piano, thus a violin/piano duet". The system has been tested with twelve kinds of musical instruments, and very promising experimental results have been obtained. An accuracy of about 80% is achieved, and the number can be raised to 90% if misindexings within the same instrument family are tolerated (e.g. cello, viola and violin). A demonstration system for musical instrument classification and music timbre retrieval is also presented.
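The final note-to-piece aggregation step lends itself to a compact illustration. The Python sketch below shows only that categorization idea: per-note classifier labels are pooled into piece-level instrument proportions of the kind quoted in the example outputs. The function and data are hypothetical; the paper's neural networks and fuzzy-logic machinery are not reproduced.

```python
# Minimal sketch of piece-level categorization from per-note labels.
# The note-level classifications are assumed inputs (hypothetical data).
from collections import Counter

def categorize_piece(note_labels):
    """Pool per-note instrument labels into piece-level proportions,
    e.g. {'violin': 0.6, 'piano': 0.4} for a violin/piano duet."""
    counts = Counter(note_labels)
    total = sum(counts.values())
    return {instr: round(n / total, 2) for instr, n in counts.items()}

notes = ['violin'] * 12 + ['piano'] * 8   # hypothetical note classifications
print(categorize_piece(notes))            # {'violin': 0.6, 'piano': 0.4}
```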
Musical anhedonia: selective loss of emotional experience in listening to music.
Satoh, Masayuki; Nakase, Taizen; Nagata, Ken; Tomimoto, Hidekazu
2011-10-01
Recent case studies have suggested that emotion perception and emotional experience of music involve independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is, musical anhedonia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even music to which he had listened with pleasure before his illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, nor in the expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music can be selectively impaired without any disturbance of other musical or neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.
Listeners Remember Music They Like
ERIC Educational Resources Information Center
Stalinski, Stephanie M.; Schellenberg, E. Glenn
2013-01-01
Emotions have important and powerful effects on cognitive processes. Although it is well established that memory influences liking, we sought to document whether liking influences memory. A series of 6 experiments examined whether liking is related to recognition memory for novel music excerpts. In the general method, participants listened to a…
Perceptually Salient Regions of the Modulation Power Spectrum for Musical Instrument Identification.
Thoret, Etienne; Depalle, Philippe; McAdams, Stephen
2017-01-01
The ability of a listener to recognize sound sources, and in particular musical instruments from the sounds they produce, raises the question of determining the acoustical information used to achieve such a task. It is now well known that the shapes of the temporal and spectral envelopes are crucial to the recognition of a musical instrument. More recently, Modulation Power Spectra (MPS) have been shown to be a representation that potentially explains the perception of musical instrument sounds. Nevertheless, the question of which specific regions of this representation characterize a musical instrument is still open. An identification task was applied to two subsets of musical instruments: tuba, trombone, cello, saxophone, and clarinet on the one hand, and marimba, vibraphone, guitar, harp, and viola pizzicato on the other. The sounds were processed with filtered spectrotemporal modulations using 2D Gaussian windows, and the most relevant regions of the MPS for identification were determined for each instrument. The method used here is based on a "molecular approach," the so-called bubbles method. Globally, the instruments were correctly identified, and the lower values of spectrotemporal modulations proved to be the most important regions of the MPS for recognizing instruments. Interestingly, instruments that were confused with each other led to non-overlapping regions and were confused when they were filtered in the most salient region of the other instrument. These results suggest that musical instrument timbres are characterized by specific spectrotemporal modulations, information which could contribute to music information retrieval tasks such as automatic source recognition.
A spiral model of musical decision-making.
Bangert, Daniel; Schubert, Emery; Fabian, Dorottya
2014-01-01
This paper describes a model of how musicians make decisions about performing notated music. The model builds on psychological theories of decision-making and was developed from empirical studies of Western art music performance that aimed to identify intuitive and deliberate processes of decision-making, a distinction consistent with dual-process theories of cognition. The model proposes that the proportion of intuitive (Type 1) and deliberate (Type 2) decision-making processes changes with increasing expertise and conceptualizes this change as movement along a continually narrowing upward spiral, where the primary axis signifies principal decision-making type and the vertical axis marks level of expertise. The model is intended to have implications for the development of expertise as described in two main phases. The first is movement from a primarily intuitive approach in the early stages of learning toward greater deliberation as analytical techniques are applied during practice. The second phase occurs as deliberate decisions gradually become automatic (procedural), increasing the role of intuitive processes. As a performer examines more issues or reconsiders decisions, the spiral motion toward the deliberate side and back to the intuitive is repeated indefinitely. With increasing expertise, the spiral tightens to signify greater control over decision type selection. The model draws on existing theories, particularly Evans' (2011) Intervention Model of dual-process theories, Cognitive Continuum Theory (Hammond et al., 1987; Hammond, 2007), and Baylor's (2001) U-shaped model for the development of intuition by level of expertise. By theorizing how musical decision-making operates over time and with increasing expertise, this model could be used as a framework for future research in music performance studies and performance science more generally.
Struggling Musicians: Implications of the (Hegelian) Philosophy of Recognition for Music Education
ERIC Educational Resources Information Center
Väkevä, Lauri
2016-01-01
Charles Taylor has argued that recognition is a vital human need. This essay discusses recognition as a philosophical concept, following a line of argumentation that can be traced back to Hegel's early philosophy. An important premise of this tradition is that because a subject's freedom is conditioned by other subjects, individual agency cannot…
Emotion recognition based on physiological changes in music listening.
Kim, Jonghwa; André, Elisabeth
2008-12-01
Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to a feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.
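The EMDC scheme can be pictured as a cascade of binary classifiers over the 2D arousal/valence plane: arousal is decided first, then valence within each arousal branch. The sketch below is a minimal Python illustration of that structure, with scikit-learn's standard LDA standing in for the authors' extended pLDA; `X`, `arousal` and `valence` are hypothetical NumPy arrays of features and 0/1 labels, not the study's data.

```python
# Hedged sketch of emotion-specific multilevel dichotomous classification
# (EMDC): stage 1 classifies arousal, stage 2 classifies valence within
# each arousal branch. Standard LDA stands in for the authors' pLDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

def train_emdc(X, arousal, valence):
    stage1 = LDA().fit(X, arousal)                      # high vs. low arousal
    stage2 = {a: LDA().fit(X[arousal == a], valence[arousal == a])
              for a in (0, 1)}                          # valence per branch
    return stage1, stage2

def predict_emdc(stage1, stage2, X):
    a_hat = stage1.predict(X)
    v_hat = np.array([stage2[a].predict(x[None, :])[0]
                      for a, x in zip(a_hat, X)])
    return a_hat, v_hat                                 # quadrant = (arousal, valence)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # hypothetical physiological features
arousal = rng.integers(0, 2, 200)
valence = rng.integers(0, 2, 200)
s1, s2 = train_emdc(X, arousal, valence)
print(predict_emdc(s1, s2, X[:5]))
```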
Cascading reminiscence bumps in popular music.
Krumhansl, Carol Lynne; Zupnick, Justin Adam
2013-10-01
Autobiographical memories are disproportionately recalled for events in late adolescence and early adulthood, a phenomenon called the reminiscence bump. Previous studies on music have found autobiographical memories and life-long preferences for music from this period. In the present study, we probed young adults' personal memories associated with top hits over 5-and-a-half decades, as well as the context of their memories and their recognition of, preference for, quality judgments of, and emotional reactions to that music. All these measures showed the typical increase for music released during the two decades of their lives. Unexpectedly, we found that the same measures peaked for the music of participants' parents' generation. This finding points to the impact of music in childhood and suggests that these results reflect the prevalence of music in the home environment. An earlier peak occurred for 1960s music, which may be explained by its quality or by its transmission through two generations. We refer to this pattern of musical cultural transmission over generations as cascading reminiscence bumps.
Conveying the concept of movement in music: An event-related brain potential study.
Zhou, Linshu; Jiang, Cunmei; Wu, Yingying; Yang, Yufang
2015-10-01
This study on event-related brain potential investigated whether music can convey the concept of movement. Using a semantic priming paradigm, natural musical excerpts were presented to non-musicians, followed by semantically congruent or incongruent pictures that depicted objects either in motion or at rest. The priming effects were tested in object decision and implicit recognition tasks to distinguish the effects of automatic conceptual activation from response competition. Results showed that in both tasks, pictures that were incongruent to preceding musical excerpts elicited larger N400 than congruent pictures, suggesting that music can prime the representations of movement concepts. Results of the multiple regression analysis showed that movement expression could be well predicted by specific acoustic and musical features, indicating the associations between music per se and the processing of iconic musical meaning.
Memory for tonal pitches: a music-length effect hypothesis.
Akiva-Kabiri, Lilach; Vecchi, Tomaso; Granot, Roni; Basso, Demis; Schön, Daniele
2009-07-01
One of the most studied effects of verbal working memory (WM) is the influence of the length of the words that compose the list to be remembered. This work aims to investigate the nature of musical WM by replicating the word length effect in the musical domain. Length and rate of presentation were manipulated in a recognition task of tone sequences. Results showed significant effects for both factors (length and presentation rate) as well as their interaction, suggesting the existence of different strategies (e.g., chunking and rehearsal) for the immediate memory of musical information, depending upon the length of the sequences.
Image-algebraic design of multispectral target recognition algorithms
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
1994-06-01
In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.
Daykin, Norma; de Viggiani, Nick; Pilkington, Paul; Moriarty, Yvonne
2013-06-01
Youth justice is an important public health issue. There is growing recognition of the need to adopt effective, evidence-based strategies for working with young offenders. Music interventions may be particularly well suited to addressing risk factors in young people and reducing juvenile crime. This systematic review of international research seeks to contribute to the evidence base on the impact of music making on the health, well-being and behaviour of young offenders and those considered at risk of offending. It examines outcomes of music making identified in quantitative research and discusses theories from qualitative research that might help to understand the impact of music making in youth justice settings.
Music Listening--The Classical Period (1720-1815), Music: 5635.793.
ERIC Educational Resources Information Center
Pearl, Jesse; Carter, Raymond
This 9-week, Quinmester course of study is designed to teach the principal types of vocal, instrumental, and operatic compositions of the classical period through listening to the styles of different composers and acquiring recognition of their works, as well as through developing fastidious listening habits. The course is intended for those…
Fuzzy recognition of noncompact musical objects
NASA Astrophysics Data System (ADS)
Cristobal Salas, Alfredo; Tchernykh, Andrei
1997-03-01
This article describes and compares some techniques to extract attributes from black and white images which contain musical objects. The inertia moment, the central moments and the wavelet transform methods are used to describe the images. Two supervised neural networks are applied to classify the images: backpropagation and fuzzy backpropagation. The results are compared.
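Of the attribute-extraction techniques named above, the central moments are the simplest to illustrate. The following Python sketch computes them for a binary image; it is a generic implementation of the standard definition, not the authors' code, and the note-like blob is a hypothetical placeholder.

```python
# A minimal sketch of moment-based shape descriptors for a binary
# (black-and-white) image. The wavelet features and the (fuzzy)
# backpropagation classifiers from the paper are not reproduced.
import numpy as np

def central_moment(img, p, q):
    """mu_pq = sum over "on" pixels of (x - xbar)^p * (y - ybar)^q."""
    ys, xs = np.nonzero(img)            # coordinates of "on" pixels
    xbar, ybar = xs.mean(), ys.mean()   # centroid of the object
    return ((xs - xbar) ** p * (ys - ybar) ** q).sum()

img = np.zeros((32, 32), dtype=int)
img[10:20, 8:24] = 1                    # hypothetical note-like blob
features = [central_moment(img, p, q) for p, q in [(2, 0), (0, 2), (1, 1)]]
print(features)                         # second-order moments describe spread
```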
Music Listening--Romantic Period (1815-1914), Music: 5635.794.
ERIC Educational Resources Information Center
Pearl, Jesse; Carter, Raymond
This secondary level Quinmester course is designed to teach the principal types of vocal, instrumental, and operatic compositions of the Romantic period through listening to the styles of different composers and acquiring recognition of their works. The course is intended for students who have participated in fine or performing arts and for pupils…
Learning high-level features for chord recognition using Autoencoder
NASA Astrophysics Data System (ADS)
Phongthongloa, Vilailukkana; Kamonsantiroj, Suwatchai; Pipanmaekaporn, Luepol
2016-07-01
Chord transcription is valuable in its own right, but manual transcription of chords is tiresome and time-consuming, and it requires musical knowledge. Automatic chord recognition has therefore attracted a number of researchers in the Music Information Retrieval field. The pitch class profile (PCP) is the most common signal representation for musical harmonic analysis. However, the PCP may contain additional non-harmonic noise such as harmonic overtones and transient noise, which can spread sound energy across frequencies beyond the actual notes of the respective chord. An autoencoder neural network can be trained to learn a mapping from low-level features to one or more higher-level representations. These high-level representations can capture dependencies in the inputs and reduce the effect of non-harmonic noise. The improved features are then fed into a neural network classifier, as sketched below. The proposed high-level musical features reach 80.90% accuracy. The experimental results show that the proposed approach can achieve better performance in comparison with other baseline methods.
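A minimal sketch of this idea, assuming PyTorch and randomly generated stand-in PCP vectors (the paper's data, layer sizes and training settings are not specified here): an autoencoder is trained to reconstruct 12-dimensional PCP inputs, and its hidden code serves as the higher-level feature for a downstream chord classifier.

```python
# Hedged sketch: an autoencoder learns a higher-level representation of
# 12-dimensional pitch class profile (PCP) vectors. Sizes, data and
# hyperparameters are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class PCPAutoencoder(nn.Module):
    def __init__(self, n_in=12, n_hidden=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PCPAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
pcp = torch.rand(256, 12)               # stand-in PCP vectors from audio frames

for _ in range(200):                    # reconstruction training
    opt.zero_grad()
    loss = loss_fn(model(pcp), pcp)
    loss.backward()
    opt.step()

codes = model.encoder(pcp).detach()     # denoised high-level features for a
                                        # downstream chord classifier
```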
Kim, Sujin; Blake, Randolph; Lee, Minyoung; Kim, Chai-Youn
2017-01-01
Individuals possessing absolute pitch (AP) are able to identify a given musical tone or to reproduce it without reference to another tone. The present study sought to learn whether this exceptional auditory ability impacts visual perception under stimulus conditions that provoke visual competition in the form of binocular rivalry. Nineteen adult participants with 3-19 years of musical training were divided into two groups according to their performance on a task involving identification of the specific note associated with hearing a given musical pitch. During test trials lasting just over half a minute, participants dichoptically viewed a scrolling musical score presented to one eye and a drifting sinusoidal grating presented to the other eye; throughout the trial they pressed buttons to track the alternations in visual awareness produced by these dissimilar monocular stimuli. On "pitch-congruent" trials, participants heard an auditory melody that was congruent in pitch with the visual score, on "pitch-incongruent" trials they heard a transposed auditory melody that was congruent with the score in melody but not in pitch, and on "melody-incongruent" trials they heard an auditory melody completely different from the visual score. For both groups, the visual musical scores predominated over the gratings when the auditory melody was congruent compared to when it was incongruent. Moreover, the AP participants experienced greater predominance of the visual score when it was accompanied by the pitch-congruent melody compared to the same melody transposed in pitch; for non-AP musicians, pitch-congruent and pitch-incongruent trials yielded equivalent predominance. Analysis of individual durations of dominance revealed differential effects on dominance and suppression durations for AP and non-AP participants. These results reveal that AP is accompanied by a robust form of bisensory interaction between tonal frequencies and musical notation that boosts the salience of a visual score.
NASA Astrophysics Data System (ADS)
Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc
2012-12-01
In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher's models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms, based on a template matching method, can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
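As a rough illustration of the template matching idea, the sketch below scores a student's trajectory against a teacher's template with a normalized correlation measure. Real spatiotemporal motion templates are multidimensional and time-aligned; the 1-D signals here are illustrative assumptions, not the platform's actual method.

```python
# Hedged sketch of template matching on motion-capture time series: the
# student's joint trajectory is compared with the teacher's step template
# via normalized cross-correlation (1 = identical shape).
import numpy as np

def match_score(template, performance):
    """Normalized cross-correlation in [-1, 1]."""
    t = (template - template.mean()) / template.std()
    p = (performance - performance.mean()) / performance.std()
    return float((t * p).mean())

t = np.sin(np.linspace(0, 2 * np.pi, 100))       # teacher's step template
p = t + 0.1 * np.random.randn(100)               # noisy student performance
print(f"quality of student's step: {match_score(t, p):.2f}")
```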
Listen, learn, like! Dorsolateral prefrontal cortex involved in the mere exposure effect in music.
Green, Anders C; Bærentsen, Klaus B; Stødkilde-Jørgensen, Hans; Roepstorff, Andreas; Vuust, Peter
2012-01-01
We used functional magnetic resonance imaging to investigate the neural basis of the mere exposure effect in music listening, which links previous exposure to liking. Prior to scanning, participants underwent a learning phase, where exposure to melodies was systematically varied. During scanning, participants rated liking for each melody and, later, their recognition of them. Participants showed learning effects, better recognising melodies heard more often. Melodies heard most often were most liked, consistent with the mere exposure effect. We found neural activations as a function of previous exposure in bilateral dorsolateral prefrontal and inferior parietal cortex, probably reflecting retrieval and working memory-related processes. This was despite the fact that the task during scanning was to judge liking, not recognition, thus suggesting that appreciation of music relies strongly on memory processes. Subjective liking per se caused differential activation in the left hemisphere, of the anterior insula, the caudate nucleus, and the putamen.
The program complex for vocal recognition
NASA Astrophysics Data System (ADS)
Konev, Anton; Kostyuchenko, Evgeny; Yakimuk, Alexey
2017-01-01
This article discusses the possibility of applying a pitch-frequency determination algorithm to note recognition problems. A preliminary study of analogous programs offering a "music recognition" function was carried out. A software package based on the algorithm for pitch frequency calculation was implemented and tested. It was shown that the algorithm can recognize notes in the user's vocal performance. The sound source can be a single musical instrument, a set of musical instruments, or a human voice humming a tune. The input file is initially presented in the .wav format or is recorded in this format from a microphone. Processing is performed by sequentially determining the pitch frequency and converting its values to notes. Based on the test results, modifications to the algorithms used in the complex were planned.
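The final conversion step, from an estimated pitch frequency to a note, follows directly from the equal-tempered scale. A minimal Python sketch, assuming the A4 = 440 Hz standard (the pitch-detection algorithm itself is not shown):

```python
# Map a fundamental-frequency estimate to the nearest equal-tempered note.
import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def freq_to_note(freq_hz):
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))  # nearest MIDI number
    octave = midi // 12 - 1
    return f"{NOTE_NAMES[midi % 12]}{octave}"

for f in (261.63, 440.0, 452.0):        # the last is a slightly sharp A4
    print(f, '->', freq_to_note(f))     # C4, A4, A4
```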
Steinke, W R; Cuddy, L L; Jakobson, L S
2001-07-01
This study describes an amateur musician, KB, who became amusic following a right-hemisphere stroke. A series of assessments conducted post-stroke revealed that KB functioned in the normal range for most verbal skills. However, compared with controls matched in age and music training, KB showed severe loss of pitch and rhythmic processing abilities. His ability to recognise and identify familiar instrumental melodies was also lost. Despite these deficits, KB performed remarkably well when asked to recognise and identify familiar song melodies presented without accompanying lyrics. This dissociation between the ability to recognise/identify song vs. instrumental melodies was replicated across different sets of musical materials, including newly learned melodies. Analyses of the acoustical and musical features of song and instrumental melodies discounted an explanation of the dissociation based on these features alone. Rather, the results suggest a functional dissociation resulting from a focal brain lesion. We propose that, in the case of song melodies, there remains sufficient activation in KB's melody analysis system to coactivate an intact representation of both associative information and the lyrics in the speech lexicon, making recognition and identification possible. In the case of instrumental melodies, no such associative processes exist; thus recognition and identification do not occur.
NASA Astrophysics Data System (ADS)
Costache, G. N.; Gavat, I.
2004-09-01
Along with the aggressive growth in the amount of digital data available (text, audio samples, digital photos and digital movies, joined in the multimedia domain), the need for classification, recognition and retrieval of this kind of data has become very important. This paper presents a system structure for handling multimedia data from a recognition perspective. The main processing steps for the multimedia objects of interest are: first, parameterization by analysis, to obtain a feature-based description forming the parameter vector; second, classification, generally with a hierarchical structure, to make the necessary decisions. For audio signals, both speech and music, the derived perceptual features are the mel-cepstral (MFCC) and the perceptual linear predictive (PLP) coefficients. For images, the derived features are the geometric parameters of the speaker's mouth. The hierarchical classifier generally consists of a clustering stage, based on Kohonen Self-Organizing Maps (SOM), and a final stage, based on a powerful classification algorithm called Support Vector Machines (SVM). The system, in specific variants, is applied with good results in two tasks: the first is bimodal speech recognition, which fuses features obtained from the speech signal with features obtained from the speaker's image; the second is music retrieval from a large music database.
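A minimal Python sketch of the two-stage classifier structure described above: a tiny hand-rolled SOM (simplified, with no neighborhood function) clusters the feature vectors, and an SVM makes the final decision from distances to the SOM codebook. The stand-in MFCC features and labels are random placeholders, not the system's data.

```python
# Hedged sketch of a SOM + SVM hierarchical classifier. The SOM here is a
# bare-bones winner-take-all version (no neighborhood function) kept short
# for illustration; real SOMs update neighboring units as well.
import numpy as np
from sklearn.svm import SVC

def train_som(X, n_units=16, epochs=50, lr=0.5):
    rng = np.random.default_rng(0)
    W = rng.normal(size=(n_units, X.shape[1]))           # codebook vectors
    for e in range(epochs):
        for x in rng.permutation(X):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            W[bmu] += lr * (1 - e / epochs) * (x - W[bmu])
    return W

def som_codes(W, X):
    # distance profile to all SOM units as the reduced representation
    return np.array([((W - x) ** 2).sum(axis=1) for x in X])

X = np.random.randn(200, 13)            # stand-in MFCC feature vectors
y = np.random.randint(0, 2, 200)        # stand-in class labels
W = train_som(X)
clf = SVC().fit(som_codes(W, X), y)     # SVM stage on the clustered features
```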
Music listening for maintaining attention of older adults with cognitive impairments.
Gregory, Dianne
2002-01-01
Twelve older adults with cognitive impairments who were participants in weekly community-based group music therapy sessions, 6 older adults in an Alzheimer's caregivers' group, and 6 college student volunteers listened to a 3.5 minute prepared audiotape of instrumental excerpts of patriotic selections. The tape consisted of 7 excerpts ranging from 18 s to 34 s in duration. Each music excerpt was followed by a 7-9 s period of silence, a "wait" excerpt. Listeners were instructed to move a Continuous Response Digital Interface (CRDI) to the name of the music excerpt depicted on the CRDI overlay when they heard a music excerpt. Likewise, they were instructed to move the dial to the word "WAIT" when there was no music. They were also instructed to maintain the dial position for the duration of each music or silence excerpt. Statistical analysis indicated no significant differences between the caregivers' and the college students' group means for total dial changes, correct and incorrect recognitions, correct and incorrect responses to silence excerpts, and reaction times. The mean scores of these 2 groups were combined and compared with the mean scores of the group of elderly adults with cognitive impairments. The mean total dial changes were significantly lower for the listeners with cognitive impairments, resulting in significant differences in all of the other response categories except incorrect recognitions. In addition, their mean absence of response to silence excerpts was significantly higher than their mean absence of responding to music excerpts. Their mean reaction time was significantly slower than the comparison group's reaction time. To evaluate training effects, 10 of the original 12 music therapy participants repeated the listening task with assistance from the therapist (treatment) immediately following the first listening (baseline). A week later the order was reversed for the 2 listening trials. Statistical and graphic analysis of responses between first and second baseline responses indicate significant improvement in responses to silence and music excerpts over the 2 sessions. Applications of the findings to music listening interventions for maintaining attention, eliciting social interaction between clients or caregivers and their patients, and evaluating this population's affective responses to music are discussed.
The effects of emotion on memory for music and vocalisations.
Aubé, William; Peretz, Isabelle; Armony, Jorge L
2013-01-01
Music is a powerful tool for communicating emotions which can elicit memories through associative mechanisms. However, it is currently unknown whether emotion can modulate memory for music without reference to a context or personal event. We conducted three experiments to investigate the effect of basic emotions (fear, happiness, and sadness) on recognition memory for music, using short, novel stimuli explicitly created for research purposes, and compared them with nonlinguistic vocalisations. Results showed better memory accuracy for musical clips expressing fear and, to some extent, happiness. In the case of nonlinguistic vocalisations we confirmed a memory advantage for all emotions tested. A correlation between memory accuracy for music and vocalisations was also found, particularly in the case of fearful expressions. These results confirm that emotional expressions, particularly fearful ones, conveyed by music can influence memory as has been previously shown for other forms of expressions, such as faces and vocalisations.
Laukka, Petri; Eerola, Tuomas; Thingujam, Nutankumar S; Yamasaki, Teruo; Beller, Grégory
2013-06-01
We present a cross-cultural study on the performance and perception of affective expression in music. Professional bowed-string musicians from different musical traditions (Swedish folk music, Hindustani classical music, Japanese traditional music, and Western classical music) were instructed to perform short pieces of music to convey 11 emotions and related states to listeners. All musical stimuli were judged by Swedish, Indian, and Japanese participants in a balanced design, and a variety of acoustic and musical cues were extracted. Results first showed that the musicians' expressive intentions could be recognized with accuracy above chance both within and across musical cultures, but communication was, in general, more accurate for culturally familiar versus unfamiliar music, and for basic emotions versus nonbasic affective states. We further used a lens-model approach to describe the relations between the strategies that musicians use to convey various expressions and listeners' perceptions of the affective content of the music. Many acoustic and musical cues were similarly correlated with both the musicians' expressive intentions and the listeners' affective judgments across musical cultures, but the match between musicians' and listeners' uses of cues was better in within-cultural versus cross-cultural conditions. We conclude that affective expression in music may depend on a combination of universal and culture-specific factors.
Perception of Sung Speech in Bimodal Cochlear Implant Users.
Crew, Joseph D; Galvin, John J; Fu, Qian-Jie
2016-11-11
Combined use of a hearing aid (HA) and cochlear implant (CI) has been shown to improve CI users' speech and music performance. However, different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefits. In this study, speech and music perception were measured in bimodal listeners for CI-only, HA-only, and CI + HA conditions, using the Sung Speech Corpus, a database of monosyllabic words produced at different fundamental frequencies. Sentence recognition was measured using sung speech in which pitch was held constant or varied across words, as well as for spoken speech. Melodic contour identification (MCI) was measured using sung speech in which the words were held constant or varied across notes. Results showed that sentence recognition was poorer with sung speech relative to spoken, with little difference between sung speech with a constant or variable pitch; mean performance was better with CI-only relative to HA-only, and best with CI + HA. MCI performance was better with constant words versus variable words; mean performance was better with HA-only than with CI-only and was best with CI + HA. Relative to CI-only, a strong bimodal benefit was observed for speech and music perception. Relative to the better ear, bimodal benefits remained strong for sentence recognition but were marginal for MCI. While variations in pitch and timbre may negatively affect CI users' speech and music perception, bimodal listening may partially compensate for these deficits.
Bergstein, Moshe
2013-08-01
Wagner's Tristan und Isolde holds a central position in Western music and culture. It is shown to demonstrate consequences of interruption of developmental processes involving the need for recognition of subjectivity, resulting in the collapse of this need into the wish for annihilation of self and other through 'love-death' [Liebestod]. A close reading of the musical language of the opera reveals how this interruption is demonstrated, and the consequent location of identity outside of language, particularly suitable for expression in music. Isolde's dynamics are presented as distinct from those of Tristan, and in contrast to other interpretations of Tristan and Isolde's love as an attack on the Oedipal order, or as a regressive wish for pre-Oedipal union. Isolde's Act I narrative locates the origin of her desire in the protagonists' mutual gaze at a traumatic moment. In this moment powerful and contrasting emotions converge, evoking thwarted developmental needs, and arousing the fantasy of redemption in love-death. By removing the magical elements, Wagner enables a deeper understanding of the characters' positions in relation to each other, each with his or her own needs for recognition and traumatic experiences. These positions invite mutual identifications resulting in rising tension between affirmation of identity and annihilation, with actual death as the only possible psychic solution. The dynamics described in the opera demonstrate the function of music and opera in conveying meaning which is not verbally expressible.
Speech Perception with Music Maskers by Cochlear Implant Users and Normal-Hearing Listeners
ERIC Educational Resources Information Center
Eskridge, Elizabeth N.; Galvin, John J., III; Aronoff, Justin M.; Li, Tianhao; Fu, Qian-Jie
2012-01-01
Purpose: The goal of this study was to investigate how the spectral and temporal properties in background music may interfere with cochlear implant (CI) and normal-hearing listeners' (NH) speech understanding. Method: Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their…
Polyphonic Music Information Retrieval Based on Multi-Label Cascade Classification System
ERIC Educational Resources Information Center
Jiang, Wenxin
2009-01-01
Recognition and separation of sounds played by various instruments is very useful in labeling audio files with semantic information. This is a non-trivial task requiring sound analysis, but the results can aid automatic indexing and browsing music data when searching for melodies played by user specified instruments. Melody match based on pitch…
High-Level Event Recognition in Unconstrained Videos
2013-01-01
…frames performs well for urban soundscapes but not for polyphonic music. In place of GMM, Lu et al. [78] adopted spectral clustering to generate… Aucouturier JJ, Defreville B, Pachet F (2007) The bag-of-frames approach to audio pattern recognition: a sufficient model for urban soundscapes but not…
Challenging assumptions of notational transparency: the case of vectors in engineering mathematics
NASA Astrophysics Data System (ADS)
Craig, Tracy S.
2017-11-01
The notation for vector analysis has a contentious nineteenth century history, with many different notations describing the same or similar concepts competing for use. While the twentieth century has seen a great deal of unification in vector analysis notation, variation still remains. In this paper, the two primary notations used for expressing the components of a vector are discussed in historical and current context. Popular mathematical texts use the two notations as if they are transparent and interchangeable. In this research project, engineering students' proficiency at vector analysis was assessed and the data were analyzed using the Rasch measurement method. Results indicate that the students found items expressed in unit vector notation more difficult than those expressed in parenthesis notation. The expert experience of notation as transparent and unproblematically symbolic of underlying processes independent of notation is shown to contrast with the student experience where the less familiar notation is experienced as harder to work with.
Play it again, Sam: brain correlates of emotional music recognition.
Altenmüller, Eckart; Siggel, Susann; Mohammadi, Bahram; Samii, Amir; Münte, Thomas F
2014-01-01
Music can elicit strong emotions and can be remembered in connection with these emotions even decades later. Yet, the brain correlates of episodic memory for highly emotional music compared with less emotional music have not been examined. We therefore used fMRI to investigate brain structures activated by emotional processing of short excerpts of film music successfully retrieved from episodic long-term memory. Eighteen non-musician volunteers were exposed to 60 structurally similar pieces of film music of 10 s length with high arousal ratings and either less positive or very positive valence ratings. Two similar sets of 30 pieces were created. Each of these was presented to half of the participants during the encoding session outside of the scanner, while all stimuli were used during the second recognition session inside the MRI-scanner. During fMRI each stimulation period (10 s) was followed by a 20 s resting period during which participants pressed either the "old" or the "new" button to indicate whether they had heard the piece before. Musical stimuli vs. silence activated the bilateral superior temporal gyrus, right insula, right middle frontal gyrus, bilateral medial frontal gyrus and the left anterior cerebellum. Old pieces led to activation in the left medial dorsal thalamus and left midbrain compared to new pieces. For recognized vs. not recognized old pieces a focused activation in the right inferior frontal gyrus and the left cerebellum was found. Positive pieces activated the left medial frontal gyrus, the left precuneus, the right superior frontal gyrus, the left posterior cingulate, the bilateral middle temporal gyrus, and the left thalamus compared to less positive pieces. Specific brain networks related to memory retrieval and emotional processing of symphonic film music were identified. The results imply that the valence of a music piece is important for memory performance and is recognized very fast.
Perception of Leitmotives in Richard Wagner's Der Ring des Nibelungen.
Baker, David J; Müllensiefen, Daniel
2017-01-01
The music of Richard Wagner tends to generate very diverse judgments, indicative of the complex relationship between listeners and the sophisticated musical structures in Wagner's music. This paper presents findings from two listening experiments using the music from Wagner's Der Ring des Nibelungen that explore musical as well as individual listener parameters to better understand how listeners are able to hear leitmotives, a compositional device closely associated with Wagner's music. Results confirm findings from a previous experiment showing that specific expertise with Wagner's music can account for a greater portion of the variance in an individual's ability to recognize and remember musical material compared to measures of generic musical training. Results also explore how acoustical distance of the leitmotives affects memory recognition using a chroma similarity measure. In addition, we show how characteristics of the compositional structure of the leitmotives contribute to their salience and memorability. A final model is then presented that accounts for the aforementioned individual differences factors, as well as parameters of musical surface and structure. Our results suggest that future work in music perception may consider both individual differences variables beyond musical training, as well as symbolic features and audio commonly used in music information retrieval, in order to build robust models of musical perception and cognition.
Hearing the irrational: music and the development of the modern concept of number.
Pesic, Peter
2010-09-01
Because the modern concept of number emerged within a quadrivium that included music alongside arithmetic, geometry, and astronomy, musical considerations affected mathematical developments. Michael Stifel embedded the then-paradoxical term "irrational numbers" (numerici irrationales) in a musical context (1544), though his philosophical aversion to the "cloud of infinity" surrounding such numbers finally outweighed his musical arguments in their favor. Girolamo Cardano gave the same status to irrational and rational quantities in his algebra (1545), for which his contemporaneous work on music suggested parallels and empirical examples. Nicola Vicentino's attempt to revive ancient "enharmonic" music (1555) required and hence defended the use of "irrational proportions" (proportiones inrationales) as if they were numbers. These developments emerged in richly interactive social and cultural milieus whose participants interwove musical and mathematical interests so closely that their intense controversies about ancient Greek music had repercussions for mathematics as well. The musical interests of Stifel, Cardano, and Vicentino influenced their respective treatments of "irrational numbers." Practical as well as theoretical music both invited and opened the way for the recognition of a radically new concept of number, even in the teeth of paradox.
ERIC Educational Resources Information Center
Crooke, Alexander Hew Dale; McFerran, Katrina Skewes
2014-01-01
The potential for music programs to promote psychosocial wellbeing in mainstream schools is recognised in both policy and research literature. Despite this recognition, there is a dearth of consistent research evidence supporting this link. Authors attribute this lack of consistent evidence to limitations in the areas of research design and…
Telling in-tune from out-of-tune: widespread evidence for implicit absolute intonation.
Van Hedger, Stephen C; Heald, Shannon L M; Huang, Alex; Rutstein, Brooke; Nusbaum, Howard C
2017-04-01
Absolute pitch (AP) is the rare ability to name or produce an isolated musical note without the aid of a reference note. One skill thought to be unique to AP possessors is the ability to provide absolute intonation judgments (e.g., classifying an isolated note as "in-tune" or "out-of-tune"). Recent work has suggested that absolute intonation perception among AP possessors is not crystallized in a critical period of development, but is dynamically maintained by the listening environment, in which the vast majority of Western music is tuned to a specific cultural standard. Given that all listeners of Western music are constantly exposed to this specific cultural tuning standard, our experiments address whether absolute intonation perception extends beyond AP possessors. We demonstrate that non-AP listeners are able to accurately judge the intonation of completely isolated notes. Both musicians and nonmusicians showed evidence for absolute intonation recognition when listening to familiar timbres (piano and violin). When testing unfamiliar timbres (triangle and inverted sine waves), only musicians showed weak evidence of absolute intonation recognition (Experiment 2). Overall, these results highlight a previously unknown similarity between AP and non-AP possessors' long-term musical note representations, including evidence of sensitivity to frequency.
NASA Astrophysics Data System (ADS)
Srimani, P. K.; Parimala, Y. G.
2011-12-01
A unique approach has been developed to study patterns in the ragas of Carnatic classical music based on artificial neural networks. Ragas in Carnatic music, which found their roots in the Vedic period, have grown on a scientific foundation over thousands of years. However, owing to its vastness and complexity, it has always been a challenge for scientists and musicologists to give an all-encompassing perspective, both qualitatively and quantitatively. Cognition, comprehension and perception of ragas in Indian classical music have always been subjects of intensive research, highly intriguing, and many of their facets remain unravelled. This paper attempts to view the melakartha ragas from a cognitive perspective using an artificial neural network-based approach, which has given rise to very interesting results. The 72 ragas of the melakartha system were defined through the combination of frequencies occurring in each of them. The data sets were trained using several neural networks. 100% accurate pattern recognition and classification were obtained using linear regression, TLRN, MLP and RBF networks. The performance of the different network topologies, obtained by varying various network parameters, was compared. Linear regression was found to be the best-performing network.
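For concreteness, here is a minimal sketch of the kind of setup the abstract describes: each raga is encoded by the pitch classes (frequencies) it contains, and a network is trained to classify the encodings. The 12-dimensional binary encoding, the random stand-in data, and the MLP configuration below are illustrative assumptions, not the paper's actual definitions.

```python
# Illustrative sketch only: each raga is represented as a binary vector over
# the 12 semitone pitch classes, and an MLP maps vectors to raga labels.
# The random vectors below stand in for the real melakartha definitions,
# which are distinct by construction.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(72, 12)).astype(float)  # stand-in raga encodings
y = np.arange(72)                                    # one label per raga

clf = MLPClassifier(hidden_layer_sizes=(24,), max_iter=5000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))  # near 100% when encodings are distinct
```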
Naranjo, C; Kornreich, C; Campanella, S; Noël, X; Vandriette, Y; Gillain, B; de Longueville, X; Delatte, B; Verbanck, P; Constant, E
2011-02-01
The processing of emotional stimuli is thought to be negatively biased in major depression. This study investigates this issue using musical, vocal and facial affective stimuli. 23 depressed in-patients and 23 matched healthy controls were recruited. Affective information processing was assessed through musical, vocal and facial emotion recognition tasks. Depression, anxiety level and attention capacity were controlled. The depressed participants identified emotions less accurately than the control group in all three types of emotion-recognition task. The depressed group also gave higher intensity ratings than the controls when scoring negative emotions, and they were more likely to attribute negative emotions to neutral voices and faces. Our in-patient group might differ from the more general population of depressed adults. They were all taking antidepressant medication, which may have had an influence on their emotional information processing. Major depression is associated with a general negative bias in the processing of emotional stimuli. Emotional processing impairment in depression is not confined to interpersonal stimuli (faces and voices), being also present in the ability to perceive emotions in music accurately. © 2010 Elsevier B.V. All rights reserved.
The effects of timbre on melody recognition are mediated by familiarity
NASA Astrophysics Data System (ADS)
McAuley, J. Devin; Ayala, Chris
2002-11-01
Two experiments examined the role of timbre in music recognition. In both experiments, participants rated the familiarity of a set of novel and well-known musical excerpts during a study phase and were then given a surprise old/new recognition test after a retention interval. The recognition test comprised the target melodies and an equal number of distractors; participants were instructed to respond yes to the targets and no to the distractors. In experiment 1, the timbre of the melodies was held constant throughout the study phase and then either stayed the same or switched to a different instrument sound during the test. In experiment 2, timbre varied randomly from trial to trial between the same two instruments used in experiment 1, yielding target melodies that were either mismatched or matched in their timbre. Switching timbre between study and test in experiment 1 was found to hurt recognition of the novel melodies, but not of the familiar melodies. The mediating effect of familiarity was eliminated in experiment 2, when timbre varied randomly from trial to trial rather than remaining constant. Possible reasons for the difference between studies will be discussed.
Music-related reward responses predict episodic memory performance.
Ferreri, Laura; Rodriguez-Fornells, Antoni
2017-12-01
Music represents a special type of reward involving the recruitment of the mesolimbic dopaminergic system. According to recent theories on episodic memory formation, as dopamine strengthens the synaptic potentiation produced by learning, stimuli triggering dopamine release could result in long-term memory improvements. Here, we behaviourally test whether music-related reward responses could modulate episodic memory performance. Thirty participants rated (in terms of arousal, familiarity, emotional valence, and reward) and encoded unfamiliar classical music excerpts. Twenty-four hours later, their episodic memory was tested (old/new recognition and remember/know paradigm). Results revealed an influence of music-related reward responses on memory: excerpts rated as more rewarding were significantly better recognized and remembered. Furthermore, inter-individual differences in the ability to experience musical reward, measured through the Barcelona Music Reward Questionnaire, positively predicted memory performance. Taken together, these findings shed new light on the relationship between music, reward and memory, showing for the first time that music-driven reward responses are directly implicated in higher cognitive functions and can account for individual differences in memory performance.
Musical hallucinosis: case reports and possible neurobiological models.
Mocellin, Ramon; Walterfang, Mark; Velakoulis, Dennis
2008-04-01
The perception of music without a stimulus, or musical hallucination, is reported in both organic and psychiatric disorders. It is most frequently described in the elderly with associated hearing loss and is accompanied by some degree of insight. In this setting it is often referred to as 'musical hallucinosis'. The aim of the authors was to present examples of this syndrome and review the current understanding of its neurobiological basis. We describe three cases of persons experiencing musical hallucinosis in the context of hearing deficits with varying degrees of associated central nervous system abnormalities. Putative neurobiological mechanisms, in particular those involving de-afferentation of a complex auditory recognition system by complete or partial deafness, are discussed in the light of current information from the literature. Musical hallucinosis can be experienced by patients with hearing impairment and is phenomenologically distinct from the hallucinations described in psychiatric disorders.
Towards automatic musical instrument timbre recognition
NASA Astrophysics Data System (ADS)
Park, Tae Hong
This dissertation comprises two parts: one focused on issues concerning the research and development of an artificial system for automatic musical instrument timbre recognition, the other on musical compositions. The technical part of the essay includes a detailed record of developed and implemented algorithms for feature extraction and pattern recognition. A review of existing literature introducing historical aspects surrounding timbre research, problems associated with a number of timbre definitions, and highlights of selected research activities that have had significant impact in this field is also included. The developed timbre recognition system follows a bottom-up, data-driven model that includes a pre-processing module, a feature extraction module, and an RBF/EBF (Radial/Elliptical Basis Function) neural network-based pattern recognition module. 829 monophonic samples from 12 instruments were chosen from the Peter Siedlaczek library (Best Service), along with other samples from the Internet and personal collections. Significant emphasis has been put on feature extraction development and testing to achieve robust and consistent feature vectors that are eventually passed to the neural network module. In order to avoid a garbage-in-garbage-out (GIGO) trap and improve generality, extra care was taken in designing and testing the developed algorithms using various dynamics, different playing techniques, and a variety of pitches for each instrument, with inclusion of attack and steady-state portions of a signal. Most of the research and development was conducted in Matlab. The compositional part of the essay includes brief introductions to "A d'Ess Are," "Aboji," "48 13 N, 16 20 O," and "pH-SQ." A general outline pertaining to the ideas and concepts behind the architectural designs of the pieces, including formal structures, time structures, orchestration methods, and pitch structures, is also presented.
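The RBF network at the core of such a recognition module is compact enough to sketch. The following is a minimal, generic RBF classifier (k-means centers, Gaussian activations, least-squares readout) with time-averaged MFCCs as a stand-in feature vector; the dissertation's actual feature set and RBF/EBF implementation are far more elaborate, so treat this as an illustration of the architecture only.

```python
# Minimal RBF-network sketch (illustrative, not the dissertation's system).
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_features(path, n_mfcc=13):
    """Stand-in feature vector: time-averaged MFCCs of one sample."""
    y, sr = librosa.load(path, sr=None, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

class SimpleRBFNet:
    def __init__(self, n_centers=20, width=1.0):
        self.n_centers, self.width = n_centers, width

    def _phi(self, X):
        # Gaussian activation of each input against each RBF center.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def fit(self, X, y):
        self.centers = KMeans(self.n_centers, n_init=10).fit(X).cluster_centers_
        Y = np.eye(int(y.max()) + 1)[y]               # one-hot targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
        return self

    def predict(self, X):
        return (self._phi(X) @ self.W).argmax(axis=1)
```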
EEG-based emotion recognition in music listening.
Lin, Yuan-Pin; Wang, Chi-Hong; Jung, Tzyy-Ping; Wu, Tien-Lin; Jeng, Shyh-Kang; Duann, Jeng-Ren; Chen, Jyh-Horng
2010-07-01
Ongoing brain activity can be recorded as an electroencephalogram (EEG) to discover links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subjects' self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. A support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure), obtaining an average classification accuracy of 82.29% +/- 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of emotional states in practical or clinical applications.
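The pipeline the abstract outlines, spectral EEG features fed to an SVM, can be sketched briefly. The epoching, sampling rate, and band definitions below are assumptions of this sketch, not the study's parameters (the paper's own feature search is far more systematic).

```python
# Hedged sketch: per-channel band-power features classified with an SVM.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch, fs=256):
    """epoch: (n_channels, n_samples) -> per-channel power in each band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)

# With X as stacked feature vectors per trial and y as one of the four
# emotional states {joy, anger, sadness, pleasure}:
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```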
Mendez, M F
2001-02-01
After a right temporoparietal stroke, a left-handed man lost the ability to understand speech and environmental sounds but developed greater appreciation for music. The patient had preserved reading and writing but poor verbal comprehension. Slower speech, single syllable words, and minimal written cues greatly facilitated his verbal comprehension. On identifying environmental sounds, he made predominant acoustic errors. Although he failed to name melodies, he could match, describe, and sing them. The patient had normal hearing except for presbyacusis, right-ear dominance for phonemes, and normal discrimination of basic psychoacoustic features and rhythm. Further testing disclosed difficulty distinguishing tone sequences and discriminating two clicks and short-versus-long tones, particularly in the left ear. Together, these findings suggest impairment in a direct route for temporal analysis and auditory word forms in his right hemisphere to Wernicke's area in his left hemisphere. The findings further suggest a separate and possibly rhythm-based mechanism for music recognition.
ERIC Educational Resources Information Center
Blom, Diana; Bennett, Dawn; Wright, David
2011-01-01
Artistic research output struggles for recognition as "legitimate" research within the highly-competitive and often traditional university sector. Often recognition requires the underpinning processes and thinking to be documented in a traditional written format. This article discusses the views of eight arts practitioners working in…
McLachlan, Neil M.; Marco, David J. T.; Wilson, Sarah J.
2013-01-01
Absolute pitch (AP) is a form of sound recognition in which musical note names are associated with discrete musical pitch categories. The accuracy of pitch matching by non-AP musicians for chords has recently been shown to depend on stimulus familiarity, pointing to a role of spectral recognition mechanisms in the early stages of pitch processing. Here we show that pitch matching accuracy by AP musicians was also dependent on their familiarity with the chord stimulus. This suggests that the pitch matching abilities of both AP and non-AP musicians for concurrently presented pitches are dependent on initial recognition of the chord. The dual mechanism model of pitch perception previously proposed by the authors suggests that spectral processing associated with sound recognition primes waveform processing to extract stimulus periodicity and refine pitch perception. The findings presented in this paper are consistent with the dual mechanism model of pitch, and in the case of AP musicians, the formation of nominal pitch categories based on both spectral and periodicity information. PMID:24961624
Escalda, Júlia; Lemos, Stela Maris Aguiar; França, Cecília Cavalieri
2011-09-01
To investigate the relations between musical experience, auditory processing and phonological awareness in groups of 5-year-old children with and without musical experience. Participants were 56 five-year-old children of both genders, 26 in the Study Group, consisting of children with musical experience, and 30 in the Control Group, consisting of children without musical experience. All participants were assessed with the Simplified Auditory Processing Assessment and the Phonological Awareness Test, and the data were statistically analyzed. There were statistically significant differences between the groups on the sequential memory test for verbal and non-verbal sounds with four stimuli and on the phonological awareness tasks of rhyme recognition, phonemic synthesis and phonemic deletion. Multiple binary logistic regression analysis showed that, with the exception of sequential verbal memory with four syllables, the observed differences in subjects' performance were associated with their musical experience. Musical experience improves the auditory and metalinguistic abilities of 5-year-old children.
Hopfield's Model of Patterns Recognition and Laws of Artistic Perception
NASA Astrophysics Data System (ADS)
Yevin, Igor; Koblyakov, Alexander
The model of pattern recognition, or attractor network model of associative memory, proposed by J. Hopfield in 1982, is the best-known model in theoretical neuroscience. This paper aims to show that such well-known laws of art perception as the Wundt curve, the perception of visual ambiguity in art, and the perception of musical tonalities are special cases of Hopfield's model of pattern recognition.
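For reference, the Hopfield model itself fits in a few lines: patterns are stored in a symmetric weight matrix by a Hebbian rule, and recall relaxes a corrupted input to the nearest stored attractor. A minimal sketch (the paper's mappings from artistic percepts to patterns are not reproduced here):

```python
# Minimal Hopfield associative memory: Hebbian storage, asynchronous recall.
import numpy as np

def train_hopfield(patterns):
    """patterns: (n_patterns, n_units) array with +/-1 entries."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)                 # no self-connections
    return W

def recall(W, state, sweeps=5, seed=0):
    """Relax a (possibly corrupted) +/-1 state toward a stored attractor."""
    rng = np.random.default_rng(seed)
    s = np.array(state, dtype=float)
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):    # asynchronous unit updates
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Usage: W = train_hopfield([p1, p2]); recovered = recall(W, noisy_p1)
```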
Baird, Amee; Samson, Séverine; Miller, Laurie; Chalmers, Kerry
2017-02-01
The efficacy of using sung words as a mnemonic device for verbal memory has been documented in persons with probable Alzheimer's dementia (AD), but it is not yet known whether this effect is related to music training. Given that music training can enhance cognitive functioning, we explored the effects of music training and modality (sung vs. spoken) on verbal memory in persons with and without AD. We used a mixed factorial design to compare learning (5 trials), delayed recall (30-min and 24-hour), and recognition of sung versus spoken information in 22 healthy elderly adults (15 musicians) and 11 people with AD (5 musicians). Musicians with AD showed better total learning (over 5 trials) of sung information than nonmusicians with AD. There were no significant differences in delayed recall and recognition accuracy (of either modality) between musicians with and without AD, suggesting that music training may facilitate memory function in AD. Analysis of individual performances showed that two of the five musicians with AD were able to recall some information on delayed recall, whereas the nonmusicians with AD recalled no information after the delay. The only significant finding with regard to modality (sung vs. spoken) was that total learning was significantly worse for sung than for spoken information among nonmusicians with AD. This may be due to the need to recode information presented in song into spoken recall, which may be more cognitively demanding for this group. This is the first study to demonstrate that music training modulates memory of sung and spoken information in AD. The mechanism underlying these results is unclear, but may be due to music training, higher cognitive abilities, or both. Our findings highlight the need for further research into the potentially protective effect of music training on cognitive abilities in our aging population.
A Description of the Use of Music Therapy in Consultation-Liaison Psychiatry
Ries, Rose
2007-01-01
Music therapy is gaining increasing recognition for its benefit in medical settings both for its salutary effects on physiological parameters and on psychological states associated with medical illness. This article discusses the role of a music therapist in consultation-liaison psychiatry, a specialty that provides intervention for medical and surgical patients with concomitant mental health issues. We describe the ways in which music therapy has been integrated into the consultation-liaison psychiatry service at Hahnemann University Hospital, a tertiary care facility and major trauma center in Philadelphia. The referral process and some of the techniques used in music therapy are explained. Anecdotal observations illustrate how a music therapist incorporates the various elements of music as well as the experiences of engaging in music-making to bring about changes in mood and facilitate expression of feelings and social interactions in patients who are having difficulty coping with the effects of illness and hospitalization. These methods have also been observed to have positive effects on the hospital staff by making available a means with which staff can express pressures inherent in direct patient care. PMID:20805929
Representing object oriented specifications and designs with extended data flow notations
NASA Technical Reports Server (NTRS)
Buser, Jon Franklin; Ward, Paul T.
1988-01-01
The issue of using extended data flow notations to document object-oriented designs and specifications is discussed. Extended data flow notations, for the purposes here, refer to notations based on the rules of Yourdon/DeMarco data flow analysis. The extensions include additional notation for representing real-time systems, as well as some proposed extensions specific to object-oriented development. Some advantages of data flow notations are stated. How data flow diagrams are used to represent software objects is investigated. Some problem areas with regard to using data flow notations for object-oriented development are noted. Some initial solutions to these problems are proposed.
A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification
Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong
2016-01-01
Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs). PMID:26985826
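The decision scheme in the method's name is the classic pairwise DAG: one binary classifier per class pair, with one class eliminated per node visited until a single label remains. The sketch below shows only that DAG traversal; since the LDM is not available in standard libraries, a linear SVM stands in for the binary node classifier, which is an assumption of this sketch rather than the paper's method.

```python
# Sketch of a pairwise decision-DAG multi-class classifier. Each node is a
# binary classifier over one pair of classes; LinearSVC stands in for the
# paper's LDM node classifier.
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

class DAGClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.nodes = {}
        for a, b in combinations(self.classes_, 2):
            mask = (y == a) | (y == b)
            # Label 1 means "class b wins" at node (a, b).
            self.nodes[(a, b)] = LinearSVC().fit(X[mask], (y[mask] == b).astype(int))
        return self

    def predict_one(self, x):
        remaining = list(self.classes_)           # stays sorted throughout
        while len(remaining) > 1:
            a, b = remaining[0], remaining[-1]
            wins_b = self.nodes[(a, b)].predict(x[None, :])[0] == 1
            remaining.remove(a if wins_b else b)  # eliminate one class per node
        return remaining[0]
```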
Melodic contour identification and sentence recognition using sung speech
Crew, Joseph D.; Galvin, John J.; Fu, Qian-Jie
2015-01-01
For bimodal cochlear implant users, acoustic and electric hearing have been shown to contribute differently to speech and music perception. However, differences in test paradigms and stimuli between speech and music testing can make it difficult to assess the relative contributions of each device. To address these concerns, the Sung Speech Corpus (SSC) was created. The SSC contains 50 monosyllabic words sung over an octave range and can be used to test both speech and music perception with the same stimuli. Here SSC data are presented for normal-hearing listeners and any advantage of musicianship is examined. PMID:26428838
Detailed Phonetic Labeling of Multi-language Database for Spoken Language Processing Applications
2015-03-01
Fragmentary record (only parts of the abstract survive extraction; table-of-contents residue removed): the database includes noisy test conditions, one of which contains about 60 interfering speakers as well as background music in a bar, contrasted with clean-training/noisy-testing settings. A recognition system for Mandarin was developed and tested, with character recognition rates as high as 88%.
ERIC Educational Resources Information Center
Giordano, Geoff
2009-01-01
SchoolJam, a popular teen musicians' showcase in Texas that provides recognition for young performers as well as funding for their school music programs, is about to go nationwide. The competition, which NAMM, the International Music Products Association, brought to the United States from Germany in 2007, allows groups of musicians age 13 to 18 to…
Semantic and episodic memory of music are subserved by distinct neural networks.
Platel, Hervé; Baron, Jean-Claude; Desgranges, Béatrice; Bernard, Frédéric; Eustache, Francis
2003-09-01
Numerous functional imaging studies have shown that retrieval from semantic and episodic memory is subserved by distinct neural networks. However, these results were essentially obtained with verbal and visuospatial material. The aim of this work was to determine the neural substrates underlying the semantic and episodic components of music using familiar and nonfamiliar melodic tunes. To study musical semantic memory, we designed a task in which the instruction was to judge whether or not the musical extract was felt as "familiar." To study musical episodic memory, we constructed two delayed recognition tasks, one containing only familiar and the other only nonfamiliar items. For each recognition task, half of the extracts (targets) had been presented in the prior semantic task. The episodic and semantic tasks were each contrasted with two perceptual control tasks and with one another. Cerebral blood flow was assessed by means of the oxygen-15-labeled water injection method, using high-resolution PET. Distinct patterns of activations were found. First, in the episodic memory condition, bilateral activations of the middle and superior frontal gyri and precuneus (more prominent on the right side) were observed. Second, the semantic memory condition disclosed extensive activations in the medial and orbital frontal cortex bilaterally, the left angular gyrus, and predominantly the left anterior part of the middle temporal gyri. The findings from this study are discussed in light of the available neuropsychological data obtained in brain-damaged subjects and functional neuroimaging studies.
Zhou, Linshu; Liu, Fang; Jing, Xiaoyi; Jiang, Cunmei
2017-02-01
Music is a unique communication system for human beings. Iconic musical meaning is one dimension of musical meaning, which emerges from musical information resembling sounds of objects, qualities of objects, or qualities of abstract concepts. The present study investigated whether congenital amusia, a disorder of musical pitch perception, impacts the processing of iconic musical meaning. With a cross-modal semantic priming paradigm, target images were primed by semantically congruent or incongruent musical excerpts, which were characterized by direction (upward or downward) of pitch change (Experiment 1), or were selected from natural music (Experiment 2). Twelve Mandarin-speaking amusics and 12 controls performed a recognition (implicit) and a semantic congruency judgment (explicit) task while their EEG waveforms were recorded. Unlike controls, amusics failed to elicit an N400 effect when musical meaning was represented by direction of pitch change, regardless of the nature of the task (implicit versus explicit). However, the N400 effect in response to musical meaning in natural musical excerpts was observed for both groups in both types of task. These results indicate that amusics are able to process iconic musical meaning through multiple acoustic cues in natural musical excerpts, but not through the direction of pitch change. This is the first study to investigate the processing of musical meaning in congenital amusia, providing evidence in support of the "melodic contour deafness hypothesis" with regard to iconic musical meaning processing in this disorder. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics
Brattico, Elvira; Alluri, Vinoo; Bogert, Brigitte; Jacobsen, Thomas; Vartiainen, Nuutti; Nieminen, Sirke; Tervaniemi, Mari
2011-01-01
Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions. PMID:22144968
Relaxing music counters heightened consolidation of emotional memory.
Rickard, Nikki S; Wong, Wendy Wing; Velik, Lauren
2012-02-01
Emotional events tend to be retained more strongly than other everyday occurrences, a phenomenon partially regulated by the neuromodulatory effects of arousal. Two experiments demonstrated the use of relaxing music as a means of reducing arousal levels, thereby challenging heightened long-term recall of an emotional story. In Experiment 1, participants (N=84) viewed a slideshow, during which they listened to either an emotional or neutral narration, and were exposed to relaxing or no music. Retention was tested 1 week later via a forced choice recognition test. Retention for both the emotional content (Phase 2 of the story) and material presented immediately after the emotional content (Phase 3) was enhanced, when compared with retention for the neutral story. Relaxing music prevented the enhancement for material presented after the emotional content (Phase 3). Experiment 2 (N=159) provided further support to the neuromodulatory effect of music by post-event presentation of both relaxing music and non-relaxing auditory stimuli (arousing music/background sound). Free recall of the story was assessed immediately afterwards and 1 week later. Relaxing music significantly reduced recall of the emotional story (Phase 2). The findings provide further insight into the capacity of relaxing music to attenuate the strength of emotional memory, offering support for the therapeutic use of music for such purposes. Copyright © 2011 Elsevier Inc. All rights reserved.
Distributed digital music archives and libraries
NASA Astrophysics Data System (ADS)
Fujinaga, Ichiro
2005-09-01
The main goal of this research program is to develop and evaluate practices, frameworks, and tools for the design and construction of worldwide distributed digital music archives and libraries. Over the last few millennia, humans have amassed an enormous amount of musical information that is scattered around the world. It is becoming abundantly clear that the optimal path for acquisition is to distribute the task of digitizing the wealth of historical and cultural heritage material that exists in analogue formats, which may include books and manuscripts related to music, music scores, photographs, videos, audio tapes, and phonograph records. In order to achieve this goal, libraries, museums, and archives throughout the world, large or small, need well-researched policies, proper guidance, and efficient tools to digitize their collections and to make them available economically. The research conducted within the program addresses unique and imminent challenges posed by the digitization and dissemination of music media. There are four major research projects in progress: development and evaluation of digitization methods for preservation of analogue recordings; optical music recognition using microfilms; design of a workflow management system with automatic metadata extraction; and formulation of interlibrary communication strategies.
Sallat, Stephan; Jentschke, Sebastian
2015-01-01
Language and music share many properties, with a particularly strong overlap for prosody. Prosodic cues are generally regarded as crucial for language acquisition. Previous research has indicated that children with SLI fail to make use of these cues. As processing of prosodic information involves similar skills to those required in music perception, we compared music perception skills (melodic and rhythmic-melodic perception and melody recognition) in a group of children with SLI (N = 29, five-year-olds) to two groups of controls, either of comparable age (N = 39, five-year-olds) or of age closer to the children with SLI in their language skills and about one year younger (N = 13, four-year-olds). Children with SLI performed in most tasks below their age level, closer matching the performance level of younger controls with similar language skills. These data strengthen the view of a strong relation between language acquisition and music processing. This might open a perspective for the possible use of musical material in early diagnosis of SLI and of music in SLI therapy. PMID:26508812
Collaborative Recurrent Neural Networks for Dynamic Recommender Systems
2016-11-22
Fragmentary record (reference-list residue removed): proposes a collaborative recurrent neural network formulation for dynamic recommender systems that leads to an efficient and practical method, and demonstrates the model's versatility on two different tasks, including music recommendation. The data include check-in records of the form (user id, location id, check-in time) and a LastFM dataset consisting of sequences of songs played by users' music players.
Rainsford, M; Palmer, M A; Paine, G
2018-04-01
Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.
Music viewed by its entropy content: A novel window for comparative analysis
Febres, Gerardo; Jaffe, Klaus
2017-01-01
Polyphonic music files were analyzed using the set of symbols that produced the Minimal Entropy Description, which we call the Fundamental Scale. This allowed us to create a novel space to represent music pieces by developing: (a) a method to adjust a textual description from its original scale of observation to an arbitrarily selected scale, (b) a method to model the structure of any textual description based on the shape of the symbol frequency profiles, and (c) the concept of higher order entropy as the entropy associated with the deviations of a frequency-ranked symbol profile from a perfect Zipfian profile. We call this diversity index the ‘2nd Order Entropy’. Applying these methods to a variety of musical pieces showed how the space of ‘symbolic specific diversity-entropy’ and that of ‘2nd order entropy’ captures characteristics that are unique to each music type, style, composer and genre. Some clustering of these properties around each musical category is shown. These methods allow us to visualize a historic trajectory of academic music across this space, from medieval to contemporary academic music. We show that the description of musical structures using entropy, symbol frequency profiles and specific symbolic diversity allows us to characterize traditional and popular expressions of music. These classification techniques promise to be useful in other disciplines for pattern recognition and machine learning. PMID:29040288
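The quantities behind this analysis can be illustrated compactly. The sketch below computes a frequency-ranked symbol profile, its Shannon entropy, and its total deviation from an ideal Zipfian profile; the paper's Fundamental Scale symbol extraction and its exact 2nd-order-entropy definition are not reproduced here, so this is only an illustration of the underlying ingredients.

```python
# Illustrative sketch: ranked symbol-frequency profile, Shannon entropy,
# and deviation from a perfect Zipfian profile (the paper's exact
# "2nd Order Entropy" formula is not reproduced).
import numpy as np
from collections import Counter

def ranked_profile(symbols):
    counts = sorted(Counter(symbols).values(), reverse=True)
    p = np.array(counts, dtype=float)
    return p / p.sum()                                # frequency-ranked profile

def shannon_entropy(p):
    return float(-(p * np.log2(p)).sum())

def zipf_deviation(p):
    ranks = np.arange(1, len(p) + 1, dtype=float)
    zipf = (1.0 / ranks) / (1.0 / ranks).sum()        # ideal Zipfian profile
    return float(np.abs(p - zipf).sum())              # total absolute deviation

profile = ranked_profile(list("a minimal textual description of a piece"))
print(shannon_entropy(profile), zipf_deviation(profile))
```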
Hutter, E; Argstatter, H; Grapp, M; Plinkert, P K
2015-09-01
Although cochlear implant (CI) users achieve good speech comprehension, they experience difficulty perceiving music and prosody in speech. As the provision of music training in rehabilitation is limited, a novel concept of music therapy for the rehabilitation of adult CI users was developed and evaluated in this pilot study. Twelve unilaterally implanted, postlingually deafened CI users attended ten sessions of individualized and standardized training. The training started about 6 weeks after the initial activation of the speech processor. Before and after therapy, psychological and musical tests were applied in order to evaluate the effects of music therapy. CI users completed the musical tests in two conditions: bilateral (CI + contralateral, unimplanted ear) and unilateral (CI only). After therapy, improvements were observed in subjective sound quality (Hearing Implant Sound Quality Index) and the global score on the self-concept questionnaire (Multidimensional Self-Concept Scales), as well as in the musical subtests for melody recognition and timbre identification in the unilateral condition. Preliminary results suggest improvements in subjective hearing and music perception, with an additional increase in global self-concept and enhanced daily listening capacities. The novel concept of individualized music therapy seems to provide an effective treatment option in the rehabilitation of adult CI users. Further investigations are necessary to evaluate effects in the area of prosody perception and to separate therapy effects from general learning effects in CI rehabilitation.
Music improves verbal memory encoding while decreasing prefrontal cortex activity: an fNIRS study
Ferreri, Laura; Aucouturier, Jean-Julien; Muthalib, Makii; Bigand, Emmanuel; Bugaiska, Aurelia
2013-01-01
Listening to music engages the whole brain, thus stimulating cognitive performance in a range of non-purely musical activities such as language and memory tasks. This article addresses an ongoing debate on the link between music and memory for words. While evidence on healthy and clinical populations suggests that music listening can improve verbal memory in a variety of situations, it is still unclear what specific memory process is affected and how. This study was designed to explore the hypothesis that music specifically benefits the encoding part of verbal memory tasks, by providing a richer context for encoding and therefore less demand on the dorsolateral prefrontal cortex (DLPFC). Twenty-two healthy young adults were subjected to functional near-infrared spectroscopy (fNIRS) imaging of their bilateral DLPFC while encoding words in the presence of either a music or a silent background. Behavioral data confirmed the facilitating effect of music background during encoding on subsequent item recognition. fNIRS results revealed significantly greater activation of the left hemisphere during encoding (in line with the HERA model of memory lateralization) and a sustained, bilateral decrease of activity in the DLPFC in the music condition compared to silence. These findings suggest that music modulates the role played by the DLPFC during verbal encoding, and open perspectives for applications to clinical populations with prefrontal impairments, such as elderly adults or Alzheimer’s patients. PMID:24339807
Zhang, Juan; Meng, Yaxuan; Wu, Chenggang; Zhou, Danny Q.
2017-01-01
Music and language share many attributes and a large body of evidence shows that sensitivity to acoustic cues in music is positively related to language development and even subsequent reading acquisition. However, such association was mainly found in alphabetic languages. What remains unclear is whether sensitivity to acoustic cues in music is associated with reading in Chinese, a morphosyllabic language. The present study aimed to answer this question by measuring music (i.e., musical metric perception and pitch discrimination), language (i.e., phonological awareness, lexical tone sensitivity), and reading abilities (i.e., word recognition) among 54 third-grade Chinese–English bilingual children. After controlling for age and non-verbal intelligence, we found that both musical metric perception and pitch discrimination accounted for unique variance of Chinese phonological awareness while pitch discrimination rather than musical metric perception predicted Chinese lexical tone sensitivity. More importantly, neither musical metric perception nor pitch discrimination was associated with Chinese reading. As for English, musical metric perception and pitch discrimination were correlated with both English phonological awareness and English reading. Furthermore, sensitivity to acoustic cues in music was associated with English reading through the mediation of English phonological awareness. The current findings indicate that the association between sensitivity to acoustic cues in music and reading may be modulated by writing systems. In Chinese, the mapping between orthography and phonology is not as transparent as in alphabetic languages such as English. Thus, this opaque mapping may alter the auditory perceptual sensitivity in music to Chinese reading. PMID:29170647
Evaluation protocol for amusia: Portuguese sample.
Peixoto, Maria Conceição; Martins, Jorge; Teixeira, Pedro; Alves, Marisa; Bastos, José; Ribeiro, Carlos
2012-12-01
Amusia is a disorder that affects the processing of music. Part of this processing happens in the primary auditory cortex. The study of this condition allows us to evaluate the central auditory pathways. To explore the diagnostic evaluation tests for amusia, the authors propose an evaluation protocol for patients with suspected amusia (after brain injury or complaints of poor musical perception), in parallel with the assessment of central auditory processing already implemented in the department. The Montreal Battery of Evaluation of Amusia was the basis for the selection of the tests. From this comprehensive battery we selected musical examples to evaluate different musical aspects, including memory and perception of music and the ability to recognize and discriminate music. In terms of memory, there is a test assessing delayed memory, adapted to Portuguese culture. This is a prospective study. Although still experimental, with the possibility of adjustments in the assessment, we believe that this assessment, combined with the study of central auditory processing, will allow us to understand some central lesions and congenital or acquired limitations of auditory perception.
Characterizing Listener Engagement with Popular Songs Using Large-Scale Music Discovery Data
Kaneshiro, Blair; Ruan, Feng; Baker, Casey W.; Berger, Jonathan
2017-01-01
Music discovery in everyday situations has been facilitated in recent years by audio content recognition services such as Shazam. The widespread use of such services has produced a wealth of user data, specifying where and when a global audience takes action to learn more about music playing around them. Here, we analyze a large collection of Shazam queries of popular songs to study the relationship between the timing of queries and corresponding musical content. Our results reveal that the distribution of queries varies over the course of a song, and that salient musical events drive an increase in queries during a song. Furthermore, we find that the distribution of queries at the time of a song's release differs from the distribution following a song's peak and subsequent decline in popularity, possibly reflecting an evolution of user intent over the “life cycle” of a song. Finally, we derive insights into the data size needed to achieve consistent query distributions for individual songs. The combined findings of this study suggest that music discovery behavior, and other facets of the human experience of music, can be studied quantitatively using large-scale industrial data. PMID:28386241
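The basic unit of this analysis, the distribution of queries over the course of a song, is simple to construct. A minimal sketch, assuming query timestamps have already been expressed as offsets within the song (the bin count and normalization are choices of this sketch, not the paper's):

```python
# Hypothetical sketch: bin query timestamps by offset within a song to
# obtain a normalized per-song query distribution.
import numpy as np

def query_profile(offsets_sec, song_len_sec, n_bins=60):
    """offsets_sec: times within the song at which users initiated queries."""
    hist, edges = np.histogram(offsets_sec, bins=n_bins, range=(0.0, song_len_sec))
    return hist / max(hist.sum(), 1), edges  # distribution over song time

# Peaks in the profile can then be compared against annotated musical events
# (e.g., chorus entries) to test whether salient events drive query spikes.
```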
How Listening to Music Affects Reading: Evidence From Eye Tracking.
Zhang, Han; Miller, Kevin; Cleveland, Raymond; Cortina, Kai
2018-02-01
The current research looked at how listening to music affects eye movements when college students read natural passages for comprehension. Two studies found that the effects of music depend on both the frequency of the word and the dynamics of the music. Study 1 showed that lexical and linguistic features of the text remained highly robust predictors of looking times, even in the music condition. However, under music exposure, (a) readers produced more rereading, and (b) gaze durations on words with very low frequency were less well predicted by word length, suggesting disrupted sublexical processing. Study 2 showed that these effects were exacerbated for a short period as soon as a new song came into play. Our results suggest that word recognition generally stayed on track despite music exposure and that extensive rereading can, to some extent, compensate for disruption. However, an irrelevant auditory signal may impair sublexical processing of low-frequency words during first-pass reading, especially when the auditory signal changes dramatically. These eye movement patterns are different from those observed in some other scenarios in which reading comprehension is impaired, including mindless reading. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-27
Fragmentary notice (extraction gaps remain): a skin notation is used for substances identified as causing or contributing to allergic contact dermatitis (ACD) or other hazards arising from chemical contact with the skin; the strategy involves the assignment of multiple skin notations. FOR FURTHER INFORMATION CONTACT: G. Scott Dotson, NIOSH, Robert A. Taft Laboratories, MS-C32, 4676 Columbia Parkway, Cincinnati, OH.
ERIC Educational Resources Information Center
Khosh-khui, Abolghasem
This study investigates the degree of relationship between scientific and technical subject headings and their corresponding class notations in the Dewey Decimal (DDC) and Library of Congress Classification (LCC) systems. The degree of association between a subject heading and its corresponding class of notation or notations is measured by…
ERIC Educational Resources Information Center
Heiland, Teresa L.
2015-01-01
Four undergraduate dance majors learned Motif Notation and Labanotation using a second-language acquisition, playful, constructivist approach to learning notation literacy in order to learn and dance the "Parsons Etude." Qualitative outcomes were gathered from student journals and pre- and post-tests that assessed for levels of improved…
Sound Foundations: Organic Approaches to Learning Notation in Beginning Band
ERIC Educational Resources Information Center
West, Chad
2016-01-01
By starting with a foundation of sound before sight, we can help our students learn notation organically in a way that honors the natural process. This article describes five organic approaches to learning notation in beginning band: (1) iconic notation, (2) point and play, (3) student lead-sheet, (4) modeling, and (5) kid dictation. While…
Giordano, Bruno L; Egermann, Hauke; Bresin, Roberto
2014-01-01
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.
The cognitive processing of film and musical soundtracks.
Boltz, Marilyn G
2004-10-01
Previous research has demonstrated that musical soundtracks can influence the interpretation, emotional impact, and remembering of film information. The intent here was to examine how music is encoded into the cognitive system and subsequently represented relative to its accompanying visual action. In Experiment 1, participants viewed a set of music/film clips that were either congruent or incongruent in their emotional affects. Selective attending was also systematically manipulated by instructing viewers to attend to and remember the music, film, or both in tandem. The results from tune recognition, film recall, and paired discrimination tasks collectively revealed that mood-congruent pairs lead to a joint encoding of music/film information as well as an integrated memory code. Incongruent pairs, on the other hand, result in an independent encoding in which a given dimension, music or film, is only remembered well if it was selectively attended to at the time of encoding. Experiment 2 extended these findings by showing that tunes from mood-congruent pairs are better recognized when cued by their original scenes, while those from incongruent pairs are better remembered in the absence of scene information. These findings both support and extend the "Congruence Associationist Model" (A. J. Cohen, 2001), which addresses those cognitive mechanisms involved in the processing of music/film information.
Music: a unique window into the world of autism.
Molnar-Szakacs, Istvan; Heaton, Pamela
2012-04-01
Understanding emotions is fundamental to our ability to navigate the complex world of human social interaction. Individuals with autism spectrum disorders (ASD) experience difficulties with the communication and understanding of emotions within the social domain. Their ability to interpret other people's nonverbal, facial, and bodily expressions of emotion is strongly curtailed. However, there is evidence to suggest that many individuals with ASD show a strong and early preference for music and are able to understand simple and complex musical emotions in childhood and adulthood. The dissociation between emotion recognition abilities in musical and social domains in individuals with ASD provides us with the opportunity to consider the nature of emotion processing difficulties characterizing this disorder. There has recently been a surge of interest in musical abilities in individuals with ASD, and this has motivated new behavioral and neuroimaging studies. Here, we review this new work. We conclude by providing some questions for future directions. © 2012 New York Academy of Sciences.
ERIC Educational Resources Information Center
Sims, Wendy L.
1986-01-01
Small-group listening lessons and subsequent individual posttests were used to judge 94 three- through five-year-old subjects' attention, paired-comparison piece preference, time spent listening, and piece recognition. Research procedures included a modified multiple baseline design and split-screen video taping of instructional sessions.…
Whipple, Christina M.; Gfeller, Kate; Driscoll, Virginia; Oleson, Jacob; McGregor, Karla
2014-01-01
Background: Effective musical communication requires conveyance of the intended message in a manner perceptible to the receiver. Communication disorders that impair transmitting or decoding of structural features of music (e.g., pitch, timbre) and/or symbolic representation may result in atypical musical communication, which can have a negative impact on music therapy interventions. Objective: This study compared recognition of symbolic representation of emotions or movements in music by two groups of children with different communicative characteristics: severe to profound hearing loss (using cochlear implants [CI]) and autism spectrum disorder (ASD). Their responses were compared to those of children with typical development and normal hearing (TD-NH). Accuracy was examined as a function of communicative status, emotional or movement category, and individual characteristics. Methods: Participants listened to recorded musical excerpts conveying emotions or movements and matched them with labels. Measures relevant to auditory and/or language function were also gathered. Results: There was no significant difference between the ASD and TD-NH groups in identification of musical emotions or movements. However, the CI group was significantly less accurate than the other two groups in identification of both emotions and movements. Mixed effects logistic regression revealed different patterns of accuracy for specific emotions as a function of group. Conclusion: Conveyance of emotions or movements through music may be decoded differently by persons with different types of communication disorders. Because music is the primary therapeutic tool in music therapy sessions, clinicians should consider these differential abilities when selecting music for clinical interventions focusing on emotions or movement. PMID:25691513
Emotion effects on implicit and explicit musical memory in normal aging.
Narme, Pauline; Peretz, Isabelle; Strub, Marie-Laure; Ergis, Anne-Marie
2016-12-01
Normal aging affects explicit memory while leaving implicit memory relatively spared. Normal aging also modifies how emotions are processed and experienced, with increasing evidence that older adults (OAs) focus more on positive information than younger adults (YAs). The aim of the present study was to investigate how age-related changes in emotion processing influence explicit and implicit memory. We used emotional melodies that differed in terms of valence (positive or negative) and arousal (high or low). Implicit memory was assessed with a preference task exploiting exposure effects, and explicit memory with a recognition task. Results indicated that effects of valence and arousal interacted to modulate both implicit and explicit memory in YAs. In OAs, recognition was poorer than in YAs; however, recognition of positive and high-arousal (happy) studied melodies was comparable. Insofar as socioemotional selectivity theory (SST) predicts a preservation of the recognition of positive information, our findings are not fully consistent with the extension of this theory to positive melodies since recognition of low-arousal (peaceful) studied melodies was poorer in OAs. In the preference task, YAs showed stronger exposure effects than OAs, suggesting an age-related decline of implicit memory. This impairment is smaller than the one observed for explicit memory (recognition), extending to the musical domain the dissociation between explicit memory decline and implicit memory relative preservation in aging. Finally, the disproportionate preference for positive material seen in OAs did not translate into stronger exposure effects for positive material suggesting no age-related emotional bias in implicit memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Bogert, Brigitte; Numminen-Kontti, Taru; Gold, Benjamin; Sams, Mikko; Numminen, Jussi; Burunat, Iballa; Lampinen, Jouko; Brattico, Elvira
2016-08-01
Music is often used to regulate emotions and mood. Typically, music conveys and induces emotions even when one does not attend to them. Studies on the neural substrates of musical emotions have, however, only examined brain activity when subjects have focused on the emotional content of the music. Here we address with functional magnetic resonance imaging (fMRI) the neural processing of happy, sad, and fearful music with a paradigm in which 56 subjects were instructed to either classify the emotions (explicit condition) or pay attention to the number of instruments playing (implicit condition) in 4-s music clips. In the implicit vs. explicit condition, stimuli activated bilaterally the inferior parietal lobule, premotor cortex, caudate, and ventromedial frontal areas. The cortical dorsomedial prefrontal and occipital areas activated during explicit processing were those previously shown to be associated with the cognitive processing of music and emotion recognition and regulation. Moreover, happiness in music was associated with activity in the bilateral auditory cortex, left parahippocampal gyrus, and supplementary motor area, whereas the negative emotions of sadness and fear corresponded with activation of the left anterior cingulate and middle frontal gyrus and down-regulation of the orbitofrontal cortex. Our study demonstrates for the first time in healthy subjects the neural underpinnings of the implicit processing of brief musical emotions, particularly in frontoparietal, dorsolateral prefrontal, and striatal areas of the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.
Schuppert, M; Münte, T F; Wieringa, B M; Altenmüller, E
2000-03-01
Perceptual musical functions were investigated in patients suffering from unilateral cerebrovascular cortical lesions. Using MIDI (Musical Instrument Digital Interface) technique, a standardized short test battery was established that covers local (analytical) as well as global perceptual mechanisms. These represent the principal cognitive strategies in melodic and temporal musical information processing (local, interval and rhythm; global, contour and metre). Of the participating brain-damaged patients, a total of 69% presented with post-lesional impairments in music perception. Left-hemisphere-damaged patients showed significant deficits in the discrimination of local as well as global structures in both melodic and temporal information processing. Right-hemisphere-damaged patients also revealed an overall impairment of music perception, reaching significance in the temporal conditions. Detailed analysis outlined a hierarchical organization, with an initial right-hemisphere recognition of contour and metre followed by identification of interval and rhythm via left-hemisphere subsystems. Patterns of dissociated and associated melodic and temporal deficits indicate autonomous, yet partially integrated neural subsystems underlying the processing of melodic and temporal stimuli. In conclusion, these data contradict a strong hemispheric specificity for music perception, but indicate cross-hemisphere, fragmented neural substrates underlying local and global musical information processing in the melodic and temporal dimensions. Due to the diverse profiles of neuropsychological deficits revealed in earlier investigations as well as in this study, individual aspects of musicality and musical behaviour very likely contribute to the definite formation of these widely distributed neural networks.
[Polar and non polar notations of refraction].
Touzeau, O; Gaujoux, T; Costantini, E; Borderie, V; Laroche, L
2010-01-01
Refraction can be expressed by four polar notations which correspond to four different combinations of spherical or cylindrical lenses. Conventional expressions of refraction (plus and minus cylinder notation) are described by sphere, cylinder, and axis. In the plus cylinder notation, the axis visualizes the most powerful meridian. The axis usually corresponds to the bow tie axis in curvature maps. Plus cylinder notation is also valuable for all relaxing procedures (i.e., selective suture ablation, arcuate keratotomy, etc.). In the cross-cylinder notation, two orthogonal cylinders can describe (without the sphere component) the actual refraction of both the principal meridians. This notation must be used before performing the vertex calculation. Using an association of a Jackson cross-cylinder and a spherical equivalent, refraction can be broken down into two pure components: astigmatism and sphere. All polar notations of refraction may perfectly characterize a single refraction but are not suitable for statistical analysis, which requires a nonpolar expression. After doubling the axis, a rectangular projection breaks down the Jackson cross-cylinder, which has a polar axis, into two Jackson cross-cylinders on the 0/90 degree and 45/135 degree axes. This procedure results in the loss of the directional nature of the data. Refraction can be written in a nonpolar notation by three rectangular coordinates (x,y,z), which can also represent the spherocylinder by one point in a dioptric space. These three independent (orthogonal) variables have a concrete optical significance: a spherical component, a direct/inverse (WTR/ATR) component, and an oblique component of the astigmatism. Finally, nonpolar notations are useful for statistical analysis and graphical representation of refraction. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.
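The rectangular projection sketched in this abstract corresponds to the standard power-vector form of a spherocylinder. A minimal sketch of the conversion follows, with our variable names; note that sign conventions vary across texts.
```python
# Minimal sketch of the nonpolar (power-vector) notation described above:
# a spherocylinder (sphere S, cylinder C, axis in degrees) maps to three
# rectangular coordinates after doubling the axis.
import math

def to_power_vector(sphere, cylinder, axis_deg):
    a = math.radians(2.0 * axis_deg)      # axis doubling removes directionality
    m   = sphere + cylinder / 2.0         # spherical equivalent
    j0  = -(cylinder / 2.0) * math.cos(a) # direct/inverse (WTR/ATR) component
    j45 = -(cylinder / 2.0) * math.sin(a) # oblique component
    return m, j0, j45

# -2.00 sphere / -1.00 cylinder x 90 degrees:
print(to_power_vector(-2.00, -1.00, 90))  # -> (-2.5, -0.5, ~0.0)
```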
Aspect-Oriented Programming is Quantification and Implicit Invocation
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Friedman, Daniel P.; Koga, Dennis (Technical Monitor)
2001-01-01
We propose that the distinguishing characteristic of Aspect-Oriented Programming (AOP) languages is that they allow programming by making quantified programmatic assertions over programs that lack local notation indicating the invocation of these assertions. This suggests that AOP systems can be analyzed with respect to three critical dimensions: the kinds of quantifications allowed, the nature of the interactions that can be asserted, and the mechanism for combining base-level actions with asserted actions. Consequences of this perspective are the recognition that certain systems are not AOP and that some mechanisms are meta-AOP: they are sufficiently expressive to allow straightforwardly programming an AOP system within them.
Hurst, Michelle A; Cordes, Sara
2018-04-01
Fraction and decimal concepts are notoriously difficult for children to learn yet are a major component of elementary and middle school math curriculum and an important prerequisite for higher order mathematics (i.e., algebra). Thus, recently there has been a push to understand how children think about rational number magnitudes in order to understand how to promote rational number understanding. However, prior work investigating these questions has focused almost exclusively on fraction notation, overlooking the open questions of how children integrate rational number magnitudes presented in distinct notations (i.e., fractions, decimals, and whole numbers) and whether understanding of these distinct notations may independently contribute to pre-algebra ability. In the current study, we investigated rational number magnitude and arithmetic performance in both fraction and decimal notation in fourth- to seventh-grade children. We then explored how these measures of rational number ability predicted pre-algebra ability. Results reveal that children do represent the magnitudes of fractions and decimals as falling within a single numerical continuum and that, despite greater experience with fraction notation, children are more accurate when processing decimal notation than when processing fraction notation. Regression analyses revealed that both magnitude and arithmetic performance predicted pre-algebra ability, but magnitude understanding may be particularly unique and depend on notation. The educational implications of differences between children in the current study and previous work with adults are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
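For readers outside the field, the notion of a single numerical continuum across notations can be made concrete in a few lines; the toy comparison below (our example) orders fraction and decimal magnitudes on one number line.
```python
# Tiny illustration of the "single numerical continuum" point: values written
# in fraction and decimal notation denote magnitudes on one ordered line,
# so cross-notation comparisons are well defined.
from fractions import Fraction

pairs = [(Fraction(3, 4), 0.8), (Fraction(5, 8), 0.5), (Fraction(1, 3), 0.4)]
for frac, dec in pairs:
    rel = "<" if frac < dec else ">"
    print(f"{frac} {rel} {dec}")
```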
Grose, John H; Buss, Emily; Hall, Joseph W
2017-01-01
The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.
Large-Scale Pattern Discovery in Music
NASA Astrophysics Data System (ADS)
Bertin-Mahieux, Thierry
This work focuses on extracting patterns in musical data from very large collections. The problem is split into two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition, which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
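The 2DFTM admits a very short sketch: the magnitude of a 2D Fourier transform of a beat-synchronous chromagram patch, which discards phase and thus tolerates circular shifts in time and in pitch class. The patch length and pipeline details below are illustrative, not necessarily those of the thesis.
```python
# Hedged sketch of the 2DFTM idea: magnitude of the 2D FFT of a fixed-length
# chromagram patch. Magnitude is invariant to circular shifts in time and in
# pitch class (transposition), which makes it attractive for cover song work.
import numpy as np

def two_dftm(chroma, patch_len=75):
    """chroma: (12, n_beats) beat-synchronous chromagram -> flattened 2DFTM."""
    patch = chroma[:, :patch_len]
    return np.abs(np.fft.fft2(patch)).flatten()

chroma = np.random.rand(12, 200)  # stand-in for a real beat-aligned chromagram
print(two_dftm(chroma).shape)     # (900,) = 12 * 75
```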
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
NASA Astrophysics Data System (ADS)
Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert
2005-12-01
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
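As an illustration of the simplest classifier family compared above, a minimum-distance classifier assigns a sound to the class whose mean training feature vector is nearest; the feature values below are synthetic stand-ins for the auditory-scene features described in the abstract.
```python
# Illustrative minimum-distance classifier: each sound class is represented
# by the mean of its training feature vectors, and a new sound is assigned
# to the nearest class mean. Features here are synthetic.
import numpy as np

CLASSES = ["clean speech", "speech in noise", "noise", "music"]

def train_means(features_by_class):
    return {c: np.mean(f, axis=0) for c, f in features_by_class.items()}

def classify(x, means):
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

rng = np.random.default_rng(0)
train = {c: rng.normal(i, 1.0, size=(50, 5)) for i, c in enumerate(CLASSES)}
means = train_means(train)
print(classify(rng.normal(3, 1.0, size=5), means))  # likely "music"
```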
Temporal stability of music perception and appraisal scores of adult cochlear implant recipients.
Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob J; Driscoll, Virginia; Knutson, John F
2010-01-01
An extensive body of literature indicates that cochlear implants (CIs) are effective in supporting speech perception of persons with severe to profound hearing losses who do not benefit to any great extent from conventional hearing aids. Adult CI recipients tend to show significant improvement in speech perception within 3 mo following implantation as a result of mere experience. Furthermore, CI recipients continue to show modest improvement as long as 5 years postimplantation. In contrast, data taken from single testing protocols of music perception and appraisal indicate that CIs are less than ideal in transmitting important structural features of music, such as pitch, melody, and timbre. However, there is presently little information documenting changes in music perception or appraisal over extended time as a result of mere experience. This study examined two basic questions: (1) Do adult CI recipients show significant improvement in perceptual acuity or appraisal of specific music listening tasks when tested in two consecutive years? (2) If there are tasks for which CI recipients show significant improvement with time, are there particular demographic variables that predict those CI recipients most likely to show improvement with extended CI use? A longitudinal cohort study. Implant recipients return annually for visits to the clinic. The study included 209 adult cochlear implant recipients with at least 9 mo implant experience before their first year measurement. Outcomes were measured on the patient's annual visit in two consecutive years. Paired t-tests were used to test for significant improvement from one year to the next. Those variables demonstrating significant improvement were subjected to regression analyses performed to detect the demographic variables useful in predicting said improvement. There were no significant differences in music perception outcomes as a function of type of device or processing strategy used. Only familiar melody recognition (FMR) and recognition of melody excerpts with lyrics (MERT-L) showed significant improvement from one year to the next. After controlling for the baseline value, hearing aid use, months of use, music listening habits after implantation, and formal musical training in elementary school were significant predictors of FMR improvement. Bilateral CI use, formal musical training in high school and beyond, and a measure of sequential cognitive processing were significant predictors of MERT-L improvement. These adult CI recipients as a result of mere experience demonstrated fairly consistent music perception and appraisal on measures gathered in two consecutive years. Gains made tend to be modest, and can be associated with characteristics such as use of hearing aids, listening experiences, or bilateral use (in the case of lyrics). These results have implications for counseling of CI recipients with regard to realistic expectations and strategies for enhancing music perception and enjoyment.
Melody recognition by two-month-old infants.
Plantinga, Judy; Trainor, Laurel J
2009-02-01
Music is part of an infant's world even before birth, and caregivers around the world sing to infants. Yet, there has been little research into the musical abilities or preferences of infants younger than 5 months. In this study, the head turn preference procedure used with older infants was adapted into an eye-movement preference procedure so that the ability of 2-month-old infants to remember a short melody could be tested. The results show that with minimal familiarization, 2-month-old infants remember a short melody and can discriminate it from a similar melody.
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
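A minimal usage sketch of the library follows. Module and function names have changed across versions (older releases used audioFeatureExtraction.stFeatureExtraction), so treat the names below as indicative rather than definitive; the input file is a placeholder of ours.
```python
# Minimal pyAudioAnalysis feature-extraction sketch. Function names follow
# the library's README at the time of writing; check the docs for your
# installed version. "sample.wav" is a placeholder file.
from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

fs, x = audioBasicIO.read_audio_file("sample.wav")
feats, names = ShortTermFeatures.feature_extraction(
    x, fs, int(0.050 * fs), int(0.025 * fs))  # 50 ms windows, 25 ms hop
print(feats.shape)  # (n_features, n_frames)
```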
A nonmusician with severe Alzheimer's dementia learns a new song.
Baird, Amee; Umbach, Heidi; Thompson, William Forde
2017-02-01
The hallmark symptom of Alzheimer's Dementia (AD) is impaired memory, but memory for familiar music can be preserved. We explored whether a non-musician with severe AD could learn a new song. A 91-year-old woman (NC) with severe AD was taught an unfamiliar song. We assessed her delayed song recall (24 hours and 2 weeks), music cognition, two word recall (presented within a familiar song lyric, a famous proverb, or as a word stem completion task), and lyrics and proverb completion. NC's music cognition (pitch and rhythm perception, recognition of familiar music, completion of lyrics) was relatively preserved. She recalled 0/2 words presented in song lyrics or proverbs, but 2/2 word stems, suggesting intact implicit memory function. She could sing along to the newly learnt song on immediate and delayed recall (24 hours and 2 weeks later), and with intermittent prompting could sing it alone. This is the first detailed study of preserved ability to learn a new song in a non-musician with severe AD, and contributes to observations of relatively preserved musical abilities in people with dementia.
New algorithms to represent complex pseudoknotted RNA structures in dot-bracket notation.
Antczak, Maciej; Popenda, Mariusz; Zok, Tomasz; Zurkowski, Michal; Adamiak, Ryszard W; Szachniuk, Marta
2018-04-15
Understanding the formation, architecture and roles of pseudoknots in RNA structures is one of the most difficult challenges in RNA computational biology and structural bioinformatics. Methods predicting pseudoknots typically do so with poor accuracy, often despite experimental data incorporation. Existing bioinformatic approaches differ in how they recognize pseudoknots and reveal their nature. A few ways of classifying pseudoknots exist; the most common refer to genus or order. Following the latter, we propose new algorithms that identify pseudoknots in an RNA structure provided in BPSEQ format, determine their order and encode them in dot-bracket-letter notation. The proposed encoding aims to illustrate the hierarchy of RNA folding. The new algorithms are based on dynamic programming and hybrid (combining exhaustive search and random walk) approaches. They evolved from an elementary algorithm implemented within the workflow of RNA FRABASE 1.0, our database of RNA structure fragments. They use different scoring functions to rank dissimilar dot-bracket representations of RNA structure. Computational experiments show an advantage of the new methods over the others, especially for large RNA structures. The presented algorithms have been implemented as new functionality of the RNApdbee webserver and are ready to use at http://rnapdbee.cs.put.poznan.pl. mszachniuk@cs.put.poznan.pl. Supplementary data are available at Bioinformatics online.
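The order-based encoding described here can be illustrated with the classic greedy tier-assignment scheme: each base pair goes to the lowest bracket tier in which it crosses nothing already placed, and the number of tiers reflects pseudoknot order. This sketch is ours and need not match RNApdbee's scoring exactly.
```python
# Greedy dot-bracket-letter encoding sketch (not RNApdbee's exact algorithm).
BRACKETS = ["()", "[]", "{}", "<>", "Aa", "Bb"]  # tiers beyond six omitted

def crosses(p, q):
    (i, j), (k, l) = sorted([p, q])
    return i < k < j < l  # two pairs cross iff they interleave

def to_dot_bracket(length, pairs):
    tiers = []               # one list of non-crossing pairs per bracket tier
    out = ["."] * length
    for pair in sorted(pairs):
        for t, tier in enumerate(tiers):
            if not any(crosses(pair, q) for q in tier):
                tier.append(pair)
                break
        else:                # crosses every existing tier: open a new one
            tiers.append([pair])
            t = len(tiers) - 1
        i, j = pair
        out[i], out[j] = BRACKETS[t][0], BRACKETS[t][1]
    return "".join(out)

# Two nested stems that cross each other (a simple pseudoknot):
print(to_dot_bracket(12, [(0, 7), (1, 6), (4, 10), (5, 9)]))  # ((..[[)).]].
```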
Are there pre-existing neural, cognitive, or motoric markers for musical ability?
Norton, Andrea; Winner, Ellen; Cronin, Karl; Overy, Katie; Lee, Dennis J; Schlaug, Gottfried
2005-11-01
Adult musicians' brains show structural enlargements, but it is not known whether these are inborn or a consequence of long-term training. In addition, music training in childhood has been shown to have positive effects on visual-spatial and verbal outcomes. However, it is not known whether pre-existing advantages in these skills are found in children who choose to study a musical instrument, nor is it known whether there are pre-existing associations between music and any of these outcome measures that could help explain the training effects. To answer these questions, we compared 5- to 7-year-olds beginning piano or string lessons (n=39) with 5- to 7-year-olds not beginning instrumental training (n=31). All children received a series of tests (visual-spatial, non-verbal reasoning, verbal, motor, and musical) and underwent magnetic resonance imaging. We found no pre-existing neural, cognitive, motor, or musical differences between groups and no correlations (after correction for multiple analyses) between music perceptual skills and any brain or visual-spatial measures. However, correlations were found between music perceptual skills and both non-verbal reasoning and phonemic awareness. Such pre-existing correlations suggest similarities in auditory and visual pattern recognition as well as a sharing of the neural substrates for language and music processing, most likely due to innate abilities or implicit learning during early development. This baseline study lays the groundwork for an ongoing longitudinal study addressing the effects of intensive musical training on brain and cognitive development, and making it possible to look retroactively at the brain and cognitive development of those children who emerge showing exceptional musical talent.
Crystallographic and Spectroscopic Symmetry Notations.
ERIC Educational Resources Information Center
Sharma, B. D.
1982-01-01
Compares Schoenflies and Hermann-Mauguin notations of symmetry. Although the former (used by spectroscopists) and latter (used by crystallographers) both describe the same symmetry, there are distinct differences in the manner of description which may lead to confusion in correlating the two notations. (Author/JN)
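A handful of the correspondences at issue, arranged as a lookup table; this is illustrative, not the full set of 32 crystallographic point groups.
```python
# Common point-group correspondences between the two notations.
SCHOENFLIES_TO_HM = {
    "C1": "1",   "Ci": "-1",   "C2": "2",    "Cs": "m",    "C2h": "2/m",
    "D2": "222", "C2v": "mm2", "D2h": "mmm", "C4": "4",    "C4v": "4mm",
    "D4h": "4/mmm", "C3": "3", "C3v": "3m",  "D3d": "-3m", "C6v": "6mm",
    "Td": "-43m", "Oh": "m-3m",
}

def to_hermann_mauguin(schoenflies):
    return SCHOENFLIES_TO_HM.get(schoenflies, "unknown")

print(to_hermann_mauguin("C2v"))  # mm2
```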
Service Oriented Architecture for Coast Guard Command and Control
2007-03-01
... Operations; BPEL4WS: The Business Process Execution Language for Web Services; BPMN: Business Process Modeling Notation; CASP: Computer Aided Search Planning ... Business Process Modeling Notation (BPMN) provides a standardized graphical notation for drawing business processes in a workflow. Software tools ...
Luna, Augustin; Karac, Evrim I; Sunshine, Margot; Chang, Lucas; Nussinov, Ruth; Aladjem, Mirit I; Kohn, Kurt W
2011-05-17
The Molecular Interaction Map (MIM) notation offers a standard set of symbols and rules on their usage for the depiction of cellular signaling network diagrams. Such diagrams are essential for disseminating biological information in a concise manner. A lack of software tools for the notation restricts wider usage of the notation. Development of software is facilitated by a more detailed specification regarding software requirements than has previously existed for the MIM notation. A formal implementation of the MIM notation was developed based on a core set of previously defined glyphs. This implementation provides a detailed specification of the properties of the elements of the MIM notation. Building upon this specification, a machine-readable format is provided as a standardized mechanism for the storage and exchange of MIM diagrams. This new format is accompanied by a Java-based application programming interface to help software developers to integrate MIM support into software projects. A validation mechanism is also provided to determine whether MIM datasets are in accordance with syntax rules provided by the new specification. The work presented here provides key foundational components to promote software development for the MIM notation. These components will speed up the development of interoperable tools supporting the MIM notation and will aid in the translation of data stored in MIM diagrams to other standardized formats. Several projects utilizing this implementation of the notation are outlined herein. The MIM specification is available as an additional file to this publication. Source code, libraries, documentation, and examples are available at http://discover.nci.nih.gov/mim.
The practical and pedagogical advantages of an ambigraphic nucleic acid notation.
Rozak, David A
2006-01-01
The universally applied IUPAC notation for nucleic acids was adopted primarily to facilitate the mental association of G, A, T, C, and the related ambiguity characters with the bases they represent. However, it is possible to create a notation that offers greater support for the basic manipulations and analyses to which genetic sequences are frequently subjected. By designing a nucleic acid notation around ambigrams, it is possible to simplify the frequently applied process of reverse complementation and aid the visualization of palindromes. The ambigraphic notation presented here also uses common orthographic features such as stems and loops to highlight guanine and cytosine rich regions, support the derivation of ambiguity characters, and aid educators in teaching the fundamentals of molecular genetics.
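The manipulation the ambigraphic notation is designed to trivialize, reverse complementation, looks like this in conventional IUPAC letters; with ambigram glyphs the same operation reduces to rotating the written sequence 180 degrees.
```python
# Reverse complementation with conventional IUPAC letters (S, W, and N are
# their own complements; less common ambiguity codes included).
COMPLEMENT = str.maketrans("ACGTRYKMBDHVSWN", "TGCAYRMKVHDBSWN")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("GATTACA"))  # TGTAATC
```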
NASA Astrophysics Data System (ADS)
Marshman, Emily; Singh, Chandralekha
2018-01-01
In quantum mechanics, for every physical observable, there is a corresponding Hermitian operator. According to the most common interpretation of quantum mechanics, measurement of an observable collapses the quantum state into one of the possible eigenstates of the operator and the corresponding eigenvalue is measured. Since Dirac notation is an elegant notation that is commonly used in upper-level quantum mechanics, it is important that students learn to express quantum operators corresponding to observables in Dirac notation in order to apply the quantum formalism effectively in diverse situations. Here we focus on an investigation that suggests that, even though Dirac notation is used extensively, many advanced undergraduate and PhD students in physics have difficulty expressing the identity operator and other Hermitian operators corresponding to physical observables in Dirac notation. We first describe the difficulties students have with expressing the identity operator and a generic Hermitian operator corresponding to an observable in Dirac notation. We then discuss how the difficulties found via written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of these concepts. The QuILT strives to help students become proficient in expressing the identity operator and a generic Hermitian operator corresponding to an observable in Dirac notation. We also discuss the effectiveness of the QuILT based on in-class evaluations.
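For reference, the two expressions the QuILT targets can be written compactly in Dirac notation (basis labels here are generic).
```latex
% Identity operator as a completeness relation over any orthonormal basis,
% and a Hermitian operator Q written in its eigenbasis.
\[
  \hat{I} = \sum_{i} \lvert \psi_i \rangle \langle \psi_i \rvert,
  \qquad
  \hat{Q} = \sum_{i} \lambda_i \, \lvert q_i \rangle \langle q_i \rvert,
  \quad \text{where } \hat{Q}\lvert q_i\rangle = \lambda_i \lvert q_i \rangle .
\]
```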
Notation Confusion of Symmetry Species for Molecules with Several Large-Amplitude Internal Motions
NASA Astrophysics Data System (ADS)
Groner, P.
2011-06-01
The Mulliken convention has become the standard notation for symmetry species (irreducible representations) of point groups for quasi-rigid molecules. No such convention exists for symmetry species of symmetry groups for semi-rigid or non-rigid molecules with large amplitude internal motions (LAMs). As a result, we have a situation where we create notations in a do-it-yourself fashion or adopt them from the literature, sometimes even without proper reference to their derivation or to the character table on which they are based. This may be just a nuisance for those who are comfortable enough with group theory and molecular symmetry groups to figure "it" out, but it represents a real problem for everybody else. The notation confusion is illustrated with examples from the literature (both old and new) on molecules with two or more LAMs. Most authors use the notation introduced by Myers and Wilson for molecules such as acetone or propane. No universal notation is in use for molecules with two methyl groups but lower overall symmetry. For example, the notation G_18 is used for one of these groups. As it turns out, different people use the same notation for different groups. This presentation is an attempt to bring some light into the dark and to combat confusion with a call for an anti-confusion convention. R. S. Mulliken, Phys. Rev. 43, 279 (1933). R. J. Myers, E. B. Wilson, J. Chem. Phys. 33, 186 (1960).
Autism, emotion recognition and the mirror neuron system: the case of music.
Molnar-Szakacs, Istvan; Wang, Martha J; Laugeson, Elizabeth A; Overy, Katie; Wu, Wai-Ling; Piggot, Judith
2009-11-16
Understanding emotions is fundamental to our ability to navigate and thrive in a complex world of human social interaction. Individuals with Autism Spectrum Disorders (ASD) are known to experience difficulties with the communication and understanding of emotion, such as the nonverbal expression of emotion and the interpretation of emotions of others from facial expressions and body language. These deficits often lead to loneliness and isolation from peers, and social withdrawal from the environment in general. In the case of music however, there is evidence to suggest that individuals with ASD do not have difficulties recognizing simple emotions. In addition, individuals with ASD have been found to show normal and even superior abilities with specific aspects of music processing, and often show strong preferences towards music. It is possible these varying abilities with different types of expressive communication may be related to a neural system referred to as the mirror neuron system (MNS), which has been proposed as deficient in individuals with autism. Music's power to stimulate emotions and intensify our social experiences might activate the MNS in individuals with ASD, and thus provide a neural foundation for music as an effective therapeutic tool. In this review, we present literature on the ontogeny of emotion processing in typical development and in individuals with ASD, with a focus on the case of music.
Are Arabic and Verbal Numbers Processed in Different Ways?
ERIC Educational Resources Information Center
Kadosh, Roi Cohen; Henik, Avishai; Rubinsten, Orly
2008-01-01
Four experiments were conducted in order to examine effects of notation--Arabic and verbal numbers--on relevant and irrelevant numerical processing. In Experiment 1, notation interacted with the numerical distance effect, and irrelevant physical size affected numerical processing (i.e., size congruity effect) for both notations but to a lesser…
Scientific Notation Watercolor
ERIC Educational Resources Information Center
Linford, Kyle; Oltman, Kathleen; Daisey, Peggy
2016-01-01
(Purpose) The purpose of this paper is to describe visual literacy, an adapted version of Visual Thinking Strategy (VTS), and an art-integrated middle school mathematics lesson about scientific notation. The intent of this lesson was to provide students with a real life use of scientific notation and exponents, and to motivate them to apply their…
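The "real life use" in question is ordinary conversion of measured quantities into scientific notation; for example (values ours):
```python
# Writing arbitrary measured quantities in scientific notation.
for value in [299792458.0, 0.00052, 6.02e23]:
    print(f"{value:.3e}")  # e.g. 2.998e+08
```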
A Notation for Rapid Specification of Information Visualization
ERIC Educational Resources Information Center
Lee, Sang Yun
2013-01-01
This thesis describes a notation for rapid specification of information visualization, which can be used as a theoretical framework of integrating various types of information visualization, and its applications at a conceptual level. The notation is devised to codify the major characteristics of data/visual structures in conventionally-used…
Composing alarms: considering the musical aspects of auditory alarm design.
Gillard, Jessica; Schutz, Michael
2016-12-01
Short melodies are commonly linked to referents in jingles, ringtones, movie themes, and even auditory displays (i.e., sounds used in human-computer interactions). While melody associations can be quite effective, auditory alarms in medical devices are generally poorly learned and highly confused. Here, we draw on approaches and stimuli from both music cognition (melody recognition) and human factors (alarm design) to analyze the patterns of confusions in a paired-associate alarm-learning task involving both a standardized melodic alarm set (Experiment 1) and a set of novel melodies (Experiment 2). Although contour played a role in confusions (consistent with previous research), we observed several cases where melodies with similar contours were rarely confused: namely, melodies with musically distinctive features. This exploratory work suggests that salient features formed by an alarm's melodic structure (such as repeated notes, distinct contours, and easily recognizable intervals) can increase the likelihood of correct alarm identification. We conclude that the use of musical principles and features may help future efforts to improve the design of auditory alarms.
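Contour, as used in such confusion analyses, is simply the sign pattern of successive intervals; the sketch below (our toy alarm set) shows two melodies that share a contour while differing in intervals, the situation in which distinctive features matter most.
```python
# Reduce a pitch sequence (MIDI note numbers) to its contour: the sign of
# each successive interval. Alarms with identical contours are candidates
# for confusion.
def contour(pitches):
    return [(p2 > p1) - (p2 < p1) for p1, p2 in zip(pitches, pitches[1:])]

alarm_a = [60, 64, 62, 65]   # up, down, up
alarm_b = [57, 60, 59, 63]   # same contour, different intervals
print(contour(alarm_a), contour(alarm_b), contour(alarm_a) == contour(alarm_b))
```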
Symbolic, Nonsymbolic and Conceptual: An Across-Notation Study on the Space Mapping of Numerals.
Zhang, Yu; You, Xuqun; Zhu, Rongjuan
2016-07-01
Previous studies suggested that there are interconnections between two numeral modalities, symbolic notation and nonsymbolic notation (arrays of dots); both differences and similarities in the processing and representation of the two modalities have been found in previous research. However, whether the spatial representation and numeral-space mapping differ between these two modalities has not yet been investigated. The present study aims to examine whether such differences exist; in particular, how zero, as both a symbolic magnitude numeral and a nonsymbolic conceptual numeral, maps onto space, and whether the mapping happens automatically at an early stage of numeral information processing. Results of the two experiments demonstrate that low-level processing of symbolic numerals including zero, and of nonsymbolic numerals except zero, can map onto space, whereas low-level processing of nonsymbolic zero as a semantic conceptual numeral cannot, indicating the special status of zero in the numeral domain. The present study indicates that the processing of non-semantic numerals can map onto space, whereas semantic conceptual numerals cannot. © The Author(s) 2016.
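The standard way to quantify a numeral-space mapping of this kind is a SNARC-style regression of right-minus-left response-time differences on magnitude; the sketch below uses synthetic data and is not the authors' analysis.
```python
# SNARC regression sketch: per-magnitude dRT = RT(right hand) - RT(left hand)
# regressed on magnitude; a reliably negative slope indicates a
# small-left / large-right spatial mapping. Data below are synthetic.
import numpy as np

magnitudes = np.array([1, 2, 3, 4, 6, 7, 8, 9])
drt_ms = np.array([25, 18, 12, 6, -5, -12, -19, -24])

slope, intercept = np.polyfit(magnitudes, drt_ms, 1)
print(f"SNARC slope: {slope:.1f} ms per unit")  # negative -> spatial mapping
```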
A coupled duration-focused architecture for real-time music-to-score alignment.
Cont, Arshia
2010-06-01
The capacity for real-time synchronization and coordination when performing a music score is a common ability among trained musicians and presents an interesting challenge for machine intelligence. Compared to speech recognition, which has influenced many music information retrieval systems, music's temporal dynamics and complexity pose challenging problems to common approximations regarding time modeling of data streams. In this paper, we propose a design for a real-time music-to-score alignment system. Given a live recording of a musician playing a music score, the system is capable of following the musician in real time within the score and decoding the tempo (or pace) of the performance. The proposed design features two coupled audio and tempo agents within a unique probabilistic inference framework that adaptively updates its parameters based on the real-time context. Online decoding is achieved through the collaboration of the coupled agents in a Hidden Hybrid Markov/semi-Markov framework, where prediction feedback of one agent affects the behavior of the other. We perform evaluations for both real-time alignment and the proposed temporal model. An implementation of the presented system has been widely used in real concert situations worldwide, and readers are encouraged to access the actual system and experiment with it.
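A deliberately simplified sketch of probabilistic score following in this spirit (it is not the coupled agent architecture itself): a forward pass over score events in which the current tempo estimate biases how far the score position may advance per audio frame.
```python
# Toy score follower: forward algorithm over score events with a tempo-biased
# (unnormalized) transition model. Not Cont's hybrid Markov/semi-Markov system.
import numpy as np

def follow(obs_sim, tempo_frames_per_event=4, spread=1.0):
    """obs_sim: (n_frames, n_events) similarity between each audio frame and
    each score event (e.g., chroma similarity). Returns decoded event/frame."""
    n_frames, n_events = obs_sim.shape
    log_a = np.full(n_events, -np.inf)
    log_a[0] = 0.0                       # start at the first score event
    steps = np.arange(3)                 # stay, advance 1, or advance 2 events
    # transition weight peaks at the rate implied by the tempo estimate
    log_t = -0.5 * ((steps - 1.0 / tempo_frames_per_event) / spread) ** 2
    path = []
    for f in range(n_frames):
        new = np.full(n_events, -np.inf)
        for s, lt in zip(steps, log_t):
            new[s:] = np.logaddexp(new[s:], log_a[:n_events - s] + lt)
        log_a = new + np.log(obs_sim[f] + 1e-12)
        path.append(int(np.argmax(log_a)))
    return path

sim = np.random.rand(50, 10)  # stand-in for frame-to-event similarities
print(follow(sim)[:10])
```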
Using Design Principles to Consider Representation of the Hand in Some Notation Systems
ERIC Educational Resources Information Center
Hochgesang, Julie A.
2014-01-01
Linguists have long recognized the descriptive limitations of Stokoe notation, currently the most commonly used system for phonetic or phonological transcription, but continue using it because of its widespread influence (e.g., Siedlecki and Bonvillian, 2000). With the emergence of newer notation systems, the field will benefit from a discussion…
NASA Technical Reports Server (NTRS)
Sirlin, Samuel W.
1993-01-01
Eight-page report describes systems of notation used most commonly to represent tensors of various ranks, with emphasis on tensors in Cartesian coordinate systems. Serves as introductory or refresher text for scientists, engineers, and others familiar with basic concepts of coordinate systems, vectors, and partial derivatives. Indicial tensor, vector, dyadic, and matrix notations, and relationships among them described.
Skin notation in the context of workplace exposure standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scansetti, G.; Piolatto, G.; Rubino, G.F.
1988-01-01
In the establishment of workplace exposure standards, the potential for cutaneous absorption is taken into consideration through the addition of skin notation to the relevant substance. In the TLVs Documentation (ACGIH, 1986), dermal lethal dose to 50% (LD50) or human data are the bases for the assignment of skin notation to 91 of 168 substances. For the other substances, the skin attribution seems to be based on undocumented statements in 24 (14.5%), skin effects in 13 (8%), and analogy in 7 (4%), while in the remaining 33 (20%) any reference is lacking as to the basis for notation of the cutaneous route of entry. Furthermore, since the established cut-off value of 2 g/kg is sometimes bypassed when a notation is added or omitted, the use of dermal LD50 is perplexing. Given the relevance of the skin notation for the validation of threshold limit values (TLVs) in the workplace, a full examination and citation of all available scientific data are recommended when establishing the TLV of substances absorbable through the skin.
Rational-number comparison across notation: Fractions, decimals, and whole numbers.
Hurst, Michelle; Cordes, Sara
2016-02-01
Although fractions, decimals, and whole numbers can be used to represent the same rational-number values, it is unclear whether adults conceive of these rational-number magnitudes as lying along the same ordered mental continuum. In the current study, we investigated whether adults' processing of rational-number magnitudes in fraction, decimal, and whole-number notation show systematic ratio-dependent responding characteristic of an integrated mental continuum. Both reaction time (RT) and eye-tracking data from a number-magnitude comparison task revealed ratio-dependent performance when adults compared the relative magnitudes of rational numbers, both within the same notation (e.g., fractions vs. fractions) and across different notations (e.g., fractions vs. decimals), pointing to an integrated mental continuum for rational numbers across notation types. In addition, eye-tracking analyses provided evidence of an implicit whole-number bias when we compared values in fraction notation, and individual differences in this whole-number bias were related to the individual's performance on a fraction arithmetic task. Implications of our results for both cognitive development research and math education are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
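For readers unfamiliar with ratio-dependent responding, a common formalization in the numerical-cognition literature (our gloss; the authors' exact model may differ) holds that discriminability of two magnitudes depends on their ratio rather than their absolute difference:

```latex
% Discriminability d' of magnitudes x_1, x_2 under a scalar-variability model,
% with Weber fraction w as a fitted sensitivity parameter:
d'(x_1, x_2) = \frac{|x_1 - x_2|}{w\,\sqrt{x_1^2 + x_2^2}}
```

On this account, comparisons with the same ratio (e.g., 0.2 vs. 0.4 and 0.5 vs. 1.0, both 2:1) are predicted to be equally discriminable regardless of notation, which is the behavioral signature of an integrated magnitude continuum.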
The Role of the Baldwin Effect in the Evolution of Human Musicality.
Podlipniak, Piotr
2017-01-01
From the biological perspective, human musicality is a term that refers to a set of abilities enabling the recognition and production of music. Since music is a complex phenomenon consisting of features that represent different stages of the evolution of human auditory abilities, the question concerning the evolutionary origin of music must focus mainly on music-specific properties and their possible biological function or functions. What usually differentiates music from other forms of human sound expression is a syntactically organized structure based on pitch classes and rhythmic units measured in reference to musical pulse. This structure is an auditory (not acoustical) phenomenon, meaning that it is a human-specific interpretation of sounds achieved thanks to certain characteristics of the nervous system. The historical and cross-cultural diversity of this structure indicates that learning is an important part of the development of human musicality. However, the fact that there is no culture without music, the syntax of which is implicitly learned and easily recognizable, suggests that human musicality may be an adaptive phenomenon. If the use of syntactically organized structure as a communicative phenomenon were adaptive, it would be adaptive only in circumstances in which this structure is recognizable by more than one individual. It is therefore difficult to explain the adaptive value of an ability to recognize a syntactically organized structure that appeared accidentally, as the result of mutation or recombination, in an environment without such a structure. A possible solution is the Baldwin effect, in which a culturally invented trait is transformed into an instinctive trait by means of natural selection. It is proposed that in the beginning musical structure was invented and learned thanks to neural plasticity. Because structurally organized music appeared adaptive (phenotypic adaptation), e.g., as a tool of social consolidation, our predecessors started to spend a lot of time and energy on music. In such circumstances, an individual accidentally born with the genetically controlled development of new neural circuitry could learn music faster and with less expenditure of energy. PMID:29056895
Higgins, Paul; Searchfield, Grant; Coad, Gavin
2012-06-01
The aim of this study was to determine which level-dependent hearing aid digital signal-processing strategy (DSP) participants preferred when listening to music and/or performing a speech-in-noise task. Two receiver-in-the-ear hearing aids were compared: one using 32-channel adaptive dynamic range optimization (ADRO) and the other wide dynamic range compression (WDRC) incorporating dual fast (4 channel) and slow (15 channel) processing. The manufacturers' first-fit settings based on participants' audiograms were used in both cases. Results were obtained from 18 participants on a quick speech-in-noise (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) task and for 3 music listening conditions (classical, jazz, and rock). Participants preferred the quality of music and performed better at the QuickSIN task using the hearing aids with ADRO processing. A potential reason for the better performance of the ADRO hearing aids was less fluctuation in output with change in sound dynamics. ADRO processing has advantages for both music quality and speech recognition in noise over the multichannel WDRC processing that was used in the study. Further evaluations of which DSP aspects contribute to listener preference are required.
The impact of music on learning and consolidation of novel words.
Tamminen, Jakke; Rastle, Kathleen; Darby, Jess; Lucas, Rebecca; Williamson, Victoria J
2017-01-01
Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.
19 CFR 125.34 - Countersigning of documents and notation of bad order or discrepancy.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties ... Countersigning of documents and notation of bad ... and Receipt § 125.34 Countersigning of documents and notation of bad order or discrepancy. When a ... and shall note thereon any bad order or discrepancy. When available, the importing carrier's tally ...
From Depiction to Notation: How Children Use Symbols to Represent Objects and Events
ERIC Educational Resources Information Center
Eskritt, Michelle; Olson, David
2012-01-01
The purpose of the present study was to explore children's understanding of external symbols by examining the relationship between children's production and comprehension of graphic notations and verbal messages. Fifty-six children between the ages of 5 and 7 years were asked to produce both notations and a spoken message relaying to their…
Cullington, Helen E; Zeng, Fan-Gang
2011-02-01
Despite excellent performance in speech recognition in quiet, most cochlear implant users have great difficulty with speech recognition in noise, music perception, identifying tone of voice, and discriminating different talkers. This may be partly due to the pitch coding in cochlear implant speech processing. Most current speech processing strategies use only the envelope information; the temporal fine structure is discarded. One way to improve electric pitch perception is to use residual acoustic hearing via a hearing aid on the nonimplanted ear (bimodal hearing). This study aimed to test the hypothesis that bimodal users would perform better than bilateral cochlear implant users on tasks requiring good pitch perception. Four pitch-related tasks were used. 1. Hearing in Noise Test (HINT) sentences spoken by a male talker with a competing female, male, or child talker. 2. Montreal Battery of Evaluation of Amusia. This is a music test with six subtests examining pitch, rhythm and timing perception, and musical memory. 3. Aprosodia Battery. This has five subtests evaluating aspects of affective prosody and recognition of sarcasm. 4. Talker identification using vowels spoken by 10 different talkers (three men, three women, two boys, and two girls). Bilateral cochlear implant users were chosen as the comparison group. Thirteen bimodal and 13 bilateral adult cochlear implant users were recruited; all had good speech perception in quiet. There were no significant differences between the mean scores of the bimodal and bilateral groups on any of the tests, although the bimodal group did perform better than the bilateral group on almost all tests. Performance on the different pitch-related tasks was not correlated, meaning that if a subject performed one task well they would not necessarily perform well on another. The correlation between the bimodal users' hearing threshold levels in the aided ear and their performance on these tasks was weak. Although the bimodal cochlear implant group performed better than the bilateral group on most parts of the four pitch-related tests, the differences were not statistically significant. The lack of correlation between test results shows that the tasks used are not simply providing a measure of pitch ability. Even if the bimodal users have better pitch perception, the real-world tasks used are reflecting more diverse skills than pitch. This research adds to the existing speech perception, language, and localization studies that show no significant difference between bimodal and bilateral cochlear implant users.
NASA Astrophysics Data System (ADS)
Walls, Kimberly Kyle Curley
1992-01-01
Musical expression is largely dependent upon accentuation, yet there have been few attempts to study the perception of dynamic accent in music or to relate the results of psychoacoustical research on intensity to realistic musical situations. The purpose of the experiment was to estimate the relationships among (a) the intensity increment in dB(A) required to meet an 80% correct criterion in the perception of one accented tone embedded within a seven-tone isochronous series of identical 87 dB(A) snare drum timbre stimuli with 333-ms onsets (accent level, or AL), (b) the difference limen (DL) for intensity increase required to meet a 75% correct criterion in a 2AFC task for pairs of the stimuli, and (c) the age of the subjects, all of whom had normal audiograms. The 51 subjects (N = 51) were female nonmusicians ranging in age from 9 to 33 years (M = 17.98, SD = 5.21). The response tasks involved saying whether the second tone of each pair was louder or softer and circling the accented note in notated quarter notes. The stimulus production and headphone calibration processes, and their rationales, are detailed. The global regression model was significant (F(2, 48) = 5.505, p = .007, R^2 = .187); the relationship between AL and DL was not significant (F(1, 48) = 5.505, p = .197, R^2 change = .029), while the relationship between AL and age was significant (F(1, 48) = 5.732, p = .021, R^2 change = .098) at an alpha level of .05, with power calculated at .66 for a medium ES. It was concluded that accented sounds are easier to perceive in tone pairs than in a musical setting and that subject maturation improves performance on intensity judgement tasks. Suggestions for further research include shortening the length of the experimental session for younger subjects and increasing the number of intensity increments, as well as using smaller increments, to accommodate individual differences in perception.
Emotion Recognition From Singing Voices Using Contemporary Commercial Music and Classical Styles.
Hakanpää, Tua; Waaramaa, Teija; Laukkanen, Anne-Maria
2018-02-22
This study examines the recognition of emotion in contemporary commercial music (CCM) and classical styles of singing. This information may be useful in improving the training of interpretation in singing. This is an experimental comparative study. Thirteen singers (11 female, 2 male) with a minimum of 3 years' professional-level singing studies (in CCM or classical technique or both) participated. They sang at three pitches (females: a, e1, a1, males: one octave lower) expressing anger, sadness, joy, tenderness, and a neutral state. Twenty-nine listeners listened to 312 short (0.63- to 4.8-second) voice samples, 135 of which were sung using a classical singing technique and 165 of which were sung in a CCM style. The listeners were asked which emotion they heard. Activity and valence were derived from the chosen emotions. The percentage of correct recognitions out of all the answers in the listening test (N = 9048) was 30.2%. The recognition percentage for the CCM-style singing technique was higher (34.5%) than for the classical-style technique (24.5%). Valence and activation were better perceived than the emotions themselves, and activity was better recognized than valence. A higher pitch was more likely to be perceived as joy or anger, and a lower pitch as sorrow. Both valence and activation were better recognized in the female CCM samples than in the other samples. There are statistically significant differences in the recognition of emotions between classical and CCM styles of singing. Furthermore, in the singing voice, pitch affects the perception of emotions, and valence and activity are more easily recognized than emotions. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Coutinho, Eduardo; Schuller, Björn
2017-01-01
Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between the two domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a Machine Learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities for enlarging the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and Transfer Learning between these domains. We establish a comparative framework including intra-domain (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation transfer based on Denoising Auto Encoders) for reducing the gap between the feature-space distributions. Our results demonstrate excellent cross-domain generalisation performance with and without feature-representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain. PMID:28658285
Music-induced changes in functional cerebral asymmetries.
Hausmann, Markus; Hodgetts, Sophie; Eerola, Tuomas
2016-04-01
After decades of research, it remains unclear whether emotion lateralization occurs because one hemisphere is dominant for processing the emotional content of the stimuli, or whether emotional stimuli activate lateralised networks associated with the subjective emotional experience. By using emotion-induction procedures, we investigated the effect of listening to happy and sad music on three well-established lateralization tasks. In a prestudy, Mozart's piano sonata (K. 448) and Beethoven's Moonlight Sonata were rated as the most happy and sad excerpts, respectively. Participants listened to either one emotional excerpt, or sat in silence before completing an emotional chimeric faces task (Experiment 1), visual line bisection task (Experiment 2) and a dichotic listening task (Experiment 3 and 4). Listening to happy music resulted in a reduced right hemispheric bias in facial emotion recognition (Experiment 1) and visuospatial attention (Experiment 2) and increased left hemispheric bias in language lateralization (Experiments 3 and 4). Although Experiments 1-3 revealed an increased positive emotional state after listening to happy music, mediation analyses revealed that the effect on hemispheric asymmetries was not mediated by music-induced emotional changes. The direct effect of music listening on lateralization was investigated in Experiment 4 in which tempo of the happy excerpt was manipulated by controlling for other acoustic features. However, the results of Experiment 4 made it rather unlikely that tempo is the critical cue accounting for the effects. We conclude that listening to music can affect functional cerebral asymmetries in well-established emotional and cognitive laterality tasks, independent of music-induced changes in the emotion state. Copyright © 2016 Elsevier Inc. All rights reserved.
Music Therapy Practice Status and Trends Worldwide: An International Survey Study.
Kern, Petra; Tague, Daniel B
2017-11-01
The field of music therapy is growing worldwide. While there is a wealth of country-specific information available, only a few have databased workforce censuses. Currently, little to no descriptive data exists about the global development of the profession. The purpose of this study was to obtain descriptive data about current demographics, practice status, and clinical trends to inform worldwide advocacy efforts, training needs, and the sustainable development of the field. Music therapists (N = 2,495) who were professional members of organizations affiliated with the World Federation of Music Therapy (WFMT) served as a sample for this international cross-sectional survey study. A 30-item online questionnaire was designed, pilot tested by key partners, and translated into seven languages. Researchers and key partners distributed the online survey through e-mail invitations and social media announcements. Professional music therapists worldwide are well-educated, mature professionals with adequate work experience, who are confident in providing high-quality services primarily in mental health, school, and geriatric settings. Due to ongoing challenges related to recognition and government regulation of the field as an evidence-based and well-funded healthcare profession, most individuals work part-time music therapy jobs and feel underpaid. Yet, many music therapists have a positive outlook on the field's future. Continued research and advocacy efforts, as well as collaborations with lobbyists, business consultants, and credentialing/licensure experts to develop progressive strategies, will be crucial for global development and sustainability of the field. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Amusia and protolanguage impairments in schizophrenia
Kantrowitz, J. T.; Scaramello, N.; Jakubovitz, A.; Lehrfeld, J. M.; Laukka, P.; Elfenbein, H. A.; Silipo, G.; Javitt, D. C.
2017-01-01
Background: Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed.
Method: Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition.
Results: Highly significant deficits were seen between patients and controls across auditory tasks (p < 0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition.
Discussion: This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia. PMID:25066878
The effects of an early intervention music curriculum on prereading/writing.
Register, D
2001-01-01
This study evaluated the effects of music sessions using a curriculum designed to enhance the prereading and writing skills of 25 children aged 4 to 5 years who were enrolled in Early Intervention and Exceptional Student Education programs. This study was a replication of the work of Standley and Hughes (1997) and utilized a larger sample size (n = 50) in order to evaluate the efficacy of a music curriculum designed specifically to teach prereading and writing skills versus one that focuses on all developmental areas. Both the experimental (n = 25) and control (n = 25) groups received two 30-minute sessions each week for an entire school year for a minimum of 60 sessions per group. The differentiating factors between the two groups were the structure and components of the musical activities. The fall sessions for the experimental group were focused primarily on writing skills while the spring sessions taught reading/book concepts. Music sessions for the control group were based purely on the thematic material, as determined by the classroom teacher with purposeful exclusion of all preliteracy concepts. All participants were pretested at the beginning of the school year and posttested before the school year ended. Overall, results demonstrated that music sessions significantly enhanced both groups' abilities to learn prewriting and print concepts. However, the experimental group showed significantly higher results on the logo identification posttest and the word recognition test. Implications for curriculum design and academic and social applications of music in Early Intervention programs are discussed.
ERIC Educational Resources Information Center
Elyagutu, Dilek Cantekin; Hazar, Muhsin
2017-01-01
In this research, Movement Notation (Laban) and Traditional Method in Folk dance Teaching were compared in terms of learning success. Movement notation group (n = 14) and Traditional group (n = 14) consisting of students from the S.U. State Conservatory Turkish Folk Dance Department were formed. During the 14-week-long study, the symbols of the…
ERIC Educational Resources Information Center
Hewitt, Dave
2014-01-01
This article analyzes the use of the software Grid Algebra with a mixed ability class of 21 nine-to-ten-year-old students who worked with complex formal notation involving all four arithmetic operations. Unlike many other models to support learning, Grid Algebra has formal notation ever present and allows students to "look through" that…
ERIC Educational Resources Information Center
Hochgesang, Julie A.
2013-01-01
In my dissertation, I examine four notation systems used to represent hand configurations in child acquisition of signed languages. Linguists have long recognized the descriptive limitations of Stokoe notation, currently the most commonly used system for phonetic or phonological transcription, but continue using it because of its widespread…
The British Sign Language Variant of Stokoe Notation: Report on a Type-Design Project.
ERIC Educational Resources Information Center
Thoutenhoofd, Ernst
2003-01-01
Explores the outcome of a publicly-funded research project titled "Redesign of the British Sign Language (BSL) Notation System with a New Font for Use in ICT." The aim of the project was to redesign the British Sign Language variant of Stokoe notation for practical use in information technology systems and software, such as lexical…
Community of Interest Engagement Process Plan
2012-02-09
... and input from Subject Matter Experts (SMEs), as shown in the far left of Figure 2. The team may prepare a Business Process Model and Notation (BPMN) ... Business Process Modeling Notation (BPMN) is a method of illustrating business processes in the form of a ... Acronyms: BPMN, Business Process Modeling Notation; COI, Community of Interest.
The Cooperate Assistive Teamwork Environment for Software Description Languages.
Groenda, Henning; Seifermann, Stephan; Müller, Karin; Jaworek, Gerhard
2015-01-01
Versatile description languages such as the Unified Modeling Language (UML) are commonly used in software engineering across different application domains in theory and practice. They often use graphical notations and leverage visual memory for expressing complex relations. Those notations are hard to access for people with visual impairment and impede their smooth inclusion in an engineering team. Existing approaches provide textual notations but require manual synchronization between the notations. This paper presents requirements for an accessible and language-aware teamwork environment, as well as our plan for the assistive implementation of Cooperate. An industrial software engineering team consisting of people with and without visual impairment will evaluate the implementation.
BPMN, Toolsets, and Methodology: A Case Study of Business Process Management in Higher Education
NASA Astrophysics Data System (ADS)
Barn, Balbir S.; Oussena, Samia
This chapter describes ongoing action research which is exploring the use of BPMN and a specific toolset - Intalio Designer to capture the “as is” essential process model of part of an overarching large business process within higher education. The chapter contends that understanding the efficacy of the BPMN notation and the notational elements to use is not enough. Instead, the effectiveness of a notation is determined by the notation, the toolset that is being used, and methodological consideration. The chapter presents some of the challenges that are faced in attempting to develop computation independent models in BPMN using toolsets such as Intalio Designer™.
Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O
2017-08-01
Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes, using the new standards Case Management Model and Notation (CMMN) and Decision Model and Notation (DMN). First, we explain how CMMN, DMN, and BPMN can be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system to cover more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM, which makes it possible to depict complex processes with complex decisions and offers a significant advantage for modeling perioperative processes.
Measuring the Benefits from Research. Policy Resource
ERIC Educational Resources Information Center
Grant, Jonathan
2006-01-01
To date, much thinking about research measurement and evaluation has been concentrated in the biomedical and health sciences. However, there is increasing recognition that funders of public research--in areas ranging from music to microbiology or from economics to engineering--need to justify their expenditure and demonstrate added value to the…
Adjusting the Focus: Padua Hills Theatre and Latino History.
ERIC Educational Resources Information Center
Garcia, Matt
1996-01-01
Reveals an interesting and overlooked chapter in Hispanic cultural history. The Claremont, California, Padua Hills Theater presented Spanish-language, Mexican-theme musicals to a mostly white audience from 1931 to 1974. Although it presented romantic, and occasionally stereotypical views of Mexican American life, the theater deserves recognition.…
Music and Speech Perception in Children Using Sung Speech
Nie, Yingjiu; Galvin, John J.; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie
2018-01-01
This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet was significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners. PMID:29609496
Prosodic persistence in music performance and speech production
NASA Astrophysics Data System (ADS)
Jungers, Melissa K.; Palmer, Caroline; Speer, Shari R.
2002-05-01
Does the rate of melodies that listeners hear affect the rate of their performed melodies? Skilled adult pianists performed two short melodies as a measure of their preferred performance rate. Next they heard, on each trial, a computer-generated performance of a prime melody at a slow or fast rate (600 or 300 ms per quarter-note beat). Following each prime melody, the pianists performed a target melody from notation. The prime and target melodies were matched for meter and length. The rate of pianists' target melody performances was slower for performances that followed a slow prime than a fast prime, indicating that pianists' performances were influenced by the rate of the prime melody. Performance duration was predicted by a model that includes prime and preferred durations. Findings from an analogous speech production experiment show that a similar model predicts speakers' sentence rate from preferred and prime sentence rates. [Work supported by NIMH Grant 45764 and the Center for Cognitive Science.]
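The predictive model is described only in words; one simple form consistent with that description (our assumption, not the authors' reported equation) is a linear combination of the two durations:

```latex
% Target performance duration as a linear function of the performer's
% preferred duration and the prime duration (coefficients to be fitted):
D_{\mathrm{target}} = \beta_0 + \beta_1\, D_{\mathrm{preferred}} + \beta_2\, D_{\mathrm{prime}}
```

On this reading, prosodic persistence corresponds to a reliably positive weight on the prime duration.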
Multistate Memristive Tantalum Oxide Devices for Ternary Arithmetic
Kim, Wonjoo; Chattopadhyay, Anupam; Siemon, Anne; Linn, Eike; Waser, Rainer; Rana, Vikas
2016-01-01
Redox-based resistive switching random access memory (ReRAM) offers excellent properties for implementing future non-volatile memory arrays. Recently, the capability of two-state ReRAMs to implement Boolean logic functionality has gained wide interest. Here, we report on seven-state tantalum oxide devices, which enable the realization of intrinsic modular arithmetic using a ternary number system. Modular arithmetic, a fundamental system for operating on numbers within the limit of a modulus, has been known to mathematicians since the days of Euclid and finds applications in diverse areas ranging from e-commerce to musical notation. We demonstrate that multistate devices not only reduce storage area consumption drastically, but also enable novel in-memory operations, such as computing with high-radix number systems, which could not be implemented using two-state devices. The use of a high-radix number system reduces computational complexity by reducing the number of digits needed; thus the number of calculation operations in an addition, and the number of logic devices, can be reduced. PMID:27834352
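The digit-count saving from a higher radix is easy to see concretely. Below is a small, self-contained sketch (ours, not the paper's code) that converts integers to base-3 digit lists and performs digit-wise modular addition with carry, the basic operation a ternary in-memory adder would implement; all names and values are illustrative.

```python
# Illustrative sketch of radix-3 (ternary) representation and modular addition;
# shows why fewer digits are needed at a higher radix.

def to_digits(n, base):
    """Digits of a non-negative integer n in the given base, most significant first."""
    digits = []
    while True:
        n, r = divmod(n, base)
        digits.append(r)
        if n == 0:
            return digits[::-1]

def add_digits(a, b, base):
    """Digit-wise addition with carry, i.e., repeated sum-and-carry modulo `base`."""
    a, b = a[::-1], b[::-1]                      # least significant digit first
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        carry, digit = divmod(s, base)
        out.append(digit)
    if carry:
        out.append(carry)
    return out[::-1]

x, y = 1000, 729
print(len(to_digits(x, 2)), len(to_digits(x, 3)))        # 10 binary vs 7 ternary digits
print(add_digits(to_digits(x, 3), to_digits(y, 3), 3))   # 1729 in base 3
```

Each ternary digit here would map onto the state of one multistate cell, so fewer digits per number translates directly into fewer devices and fewer per-digit operations per addition.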
People, clothing, music, and arousal as contextual retrieval cues in verbal memory.
Standing, Lionel G; Bobbitt, Kristin E; Boisvert, Kathryn L; Dayholos, Kathy N; Gagnon, Anne M
2008-10-01
Four experiments (N = 164) on context-dependent memory were performed to explore the effects on verbal memory of incidental cues during the test session which replicated specific features of the learning session. These features involved (1) bystanders, (2) the clothing of the experimenter, (3) background music, and (4) the arousal level of the subject. Social contextual cues (bystanders or experimenter clothing) improved verbal recall or recognition. However, recall decreased when the contextual cue was a different stimulus taken from the same conceptual category (piano music by Chopin) that was heard during learning. Memory was unaffected by congruent internal cues, produced by the same physiological arousal level (low, moderate, or high heart rate) during the learning and test sessions. However, recall increased with the level of arousal across the three congruent conditions. The results emphasize the effectiveness as retrieval cues of stimuli which are socially salient, concrete, and external.
Temporal grouping effects in musical short-term memory.
Gorin, Simon; Mengal, Pierre; Majerus, Steve
2018-07-01
Recent theoretical accounts of verbal and visuo-spatial short-term memory (STM) have proposed the existence of domain-general mechanisms for the maintenance of serial order information. These accounts are based on the observation of similar behavioural effects across several modalities, such as temporal grouping effects. Across two experiments, the present study aimed to extend these findings by exploring a STM modality that has received little interest so far: STM for musical information. Given its inherent rhythmic, temporal, and serial organisation, the musical domain is of interest for investigating serial order STM processes such as temporal grouping. In Experiment 1, the data did not allow us to determine the presence or absence of temporal grouping effects. In Experiment 2, we observed that temporal grouping of tone sequences during encoding improves short-term recognition of serially presented probe tones. Furthermore, the serial position curves included micro-primacy and micro-recency effects, the hallmark characteristics of temporal grouping. Our results suggest that the encoding of serial order information in musical STM may be supported by temporal positional coding mechanisms similar to those reported in the verbal domain.
Doğramac, Sera N; Watsford, Mark L; Murphy, Aron J
2011-03-01
Subjective notational analysis can be used to track players and analyse movement patterns during match-play of team sports such as futsal. The purpose of this study was to establish the validity and reliability of the Event Recorder for subjective notational analysis. A course replicating ten minutes of futsal match-play movement patterns was designed, and ten participants completed it. The course allowed a comparison of data derived from subjective notational analysis with the known distances of the course and with GPS data. The study analysed six locomotor activity categories, focusing on total distance covered, total duration of activities, and total frequency of activities. The values from the known measurements and the Event Recorder were similar, whereas the majority of significant differences were found between the Event Recorder and GPS values. The reliability of subjective notational analysis was established by analysing all ten participants on two occasions, as well as by analysing five randomly selected futsal players twice during match-play. Subjective notational analysis is a valid and reliable method of tracking player movements, and may be a preferred and more effective method than GPS, particularly for indoor sports such as futsal and for field sports where short distances and changes in direction are observed.
2014-01-01
Background: Ambiscript is a graphically-designed nucleic acid notation that uses symbol symmetries to support sequence complementation, highlight biologically-relevant palindromes, and facilitate the analysis of consensus sequences. Although the original Ambiscript notation was designed to easily represent consensus sequences for multiple sequence alignments, the notation's black-on-white ambiguity characters are unable to reflect the statistical distribution of nucleotides found at each position. We now propose a color-augmented ambigraphic notation to encode the frequency of positional polymorphisms in these consensus sequences.
Results: We have implemented this color-coding approach by creating an Adobe Flash® application (http://www.ambiscript.org) that shades and colors modified Ambiscript characters according to the prevalence of the encoded nucleotide at each position in the alignment. The resulting graphic helps viewers perceive biologically-relevant patterns in multiple sequence alignments by uniquely combining color, shading, and character symmetries to highlight palindromes and inverted repeats in conserved DNA motifs.
Conclusion: Juxtaposing an intuitive color scheme over the deliberate character symmetries of an ambigraphic nucleic acid notation yields a highly-functional nucleic acid notation that maximizes information content and successfully embodies key principles of graphic excellence put forth by the statistician and graphic design theorist, Edward Tufte. PMID:24447494
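As a sketch of the underlying idea, the following minimal example (ours; Ambiscript's actual glyphs and palette are not reproduced here, and the alignment is invented) computes per-position nucleotide frequencies from a toy alignment and maps each position's consensus prevalence to a shading level:

```python
# Toy illustration of frequency-shaded consensus rendering; the alignment,
# palette, and shading rule are invented for this sketch, not Ambiscript's.
from collections import Counter

ALIGNMENT = ["ACGTT", "ACGTA", "ACGCA", "TCGTA"]  # hypothetical aligned sequences

def column_profiles(alignment):
    """Per-position nucleotide frequencies across the alignment."""
    profiles = []
    for column in zip(*alignment):
        counts = Counter(column)
        profiles.append({base: counts[base] / len(column) for base in counts})
    return profiles

for pos, profile in enumerate(column_profiles(ALIGNMENT), start=1):
    base, freq = max(profile.items(), key=lambda kv: kv[1])
    gray = int(255 * (1 - freq))   # darker ink for more prevalent consensus bases
    print(f"pos {pos}: consensus {base} at {freq:.2f} -> gray level {gray}")
```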
A systematic investigation of the link between rational number processing and algebra ability.
Hurst, Michelle; Cordes, Sara
2018-02-01
Recent research suggests that fraction understanding is predictive of algebra ability; however, the relative contributions of various aspects of rational number knowledge are unclear. Furthermore, whether this relationship is notation-dependent or rather relies upon a general understanding of rational numbers (independent of notation) is an open question. In this study, college students completed a rational number magnitude task, procedural arithmetic tasks in fraction and decimal notation, and an algebra assessment. Using these tasks, we measured three different aspects of rational number ability in both fraction and decimal notation: (1) acuity of underlying magnitude representations, (2) fluency with which symbols are mapped to the underlying magnitudes, and (3) fluency with arithmetic procedures. Analyses reveal that when looking at the measures of magnitude understanding, the relationship between adults' rational number magnitude performance and algebra ability is dependent upon notation. However, once performance on arithmetic measures is included in the relationship, individual measures of magnitude understanding are no longer unique predictors of algebra performance. Furthermore, when including all measures simultaneously, results revealed that arithmetic fluency in both fraction and decimal notation each uniquely predicted algebra ability. Findings are the first to demonstrate a relationship between rational number understanding and algebra ability in adults while providing a clearer picture of the nature of this relationship. © 2017 The British Psychological Society.
Dynamics of brain activity underlying working memory for music in a naturalistic condition.
Burunat, Iballa; Alluri, Vinoo; Toiviainen, Petri; Numminen, Jussi; Brattico, Elvira
2014-08-01
We aimed at determining the functional neuroanatomy of working memory (WM) recognition of musical motifs that occurs while listening to music by adopting a non-standard procedure. Western tonal music provides naturally occurring repetition and variation of motifs. These serve as WM triggers, thus allowing us to study the phenomenon of motif tracking within real music. Adopting a modern tango as stimulus, a behavioural test helped to identify the stimulus motifs and build a time-course regressor of WM neural responses. This regressor was then correlated with the participants' (musicians') functional magnetic resonance imaging (fMRI) signal obtained during a continuous listening condition. In order to fine-tune the identification of WM processes in the brain, the variance accounted for by the sensory processing of a set of the stimulus' acoustic features was pruned from participants' neurovascular responses to music. Motivic repetitions activated prefrontal and motor cortical areas, basal ganglia, medial temporal lobe (MTL) structures, and cerebellum. The findings suggest that WM processing of motifs while listening to music emerges from the integration of neural activity distributed over cognitive, motor and limbic subsystems. The recruitment of the hippocampus stands as a novel finding in auditory WM. Effective connectivity and agglomerative hierarchical clustering analyses indicate that the hippocampal connectivity is modulated by motif repetitions, showing strong connections with WM-relevant areas (dorsolateral prefrontal cortex - dlPFC, supplementary motor area - SMA, and cerebellum), which supports the role of the hippocampus in the encoding of the musical motifs in WM, and may evidence long-term memory (LTM) formation, enabled by the use of a realistic listening condition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Effects of musical expertise on oscillatory brain activity in response to emotional sounds.
Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L
2017-08-01
Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, having found that musicians process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in the recognition of emotion-conveying music, which seems to generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on the processing of musical and vocal sounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gap-minimal systems of notations and the constructible hierarchy
NASA Technical Reports Server (NTRS)
Lucian, M. L.
1972-01-01
If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.
ProForma: A Standard Proteoform Notation
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeDuc, Richard D.; Schwämmle, Veit; Shortreed, Michael R.
The Consortium for Top-Down Proteomics (CTDP) proposes a standardized notation, ProForma, for writing the sequence of fully characterized proteoforms. ProForma provides a means to communicate any proteoform by writing the amino acid sequence using standard one-letter notation and specifying modifications or unidentified mass shifts within brackets following certain amino acids. The notation is unambiguous, human readable, and can easily be parsed and written by bioinformatic tools. This system uses seven rules and supports a wide range of possible use cases, ensuring compatibility and reproducibility of proteoform annotations. Standardizing proteoform sequences will simplify storage, comparison, and reanalysis of proteomic studies, and the Consortium welcomes input and contributions from the research community on the continued design and maintenance of this standard.
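To make the bracket convention concrete, here is a minimal Python sketch that splits a ProForma-style string into residues and optional bracketed modifications. It illustrates only the core pattern described in the abstract, not the standard's full seven rules, and the example sequence and mass shifts are illustrative, not taken from the specification.

    import re

    def parse_proforma(seq):
        # each residue is a one-letter code, optionally followed by a
        # bracketed modification name or mass shift (core pattern only)
        return [(aa, mod[1:-1] if mod else None)
                for aa, mod in re.findall(r"([A-Z])(\[[^\]]*\])?", seq)]

    # illustrative proteoform with two hypothetical mass shifts
    print(parse_proforma("EM[+15.9949]EVEES[-79.9663]PEK"))
    # [('E', None), ('M', '+15.9949'), ('E', None), ..., ('K', None)]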
Musicians' working memory for tones, words, and pseudowords.
Benassi-Werke, Mariana E; Queiroz, Marcelo; Araújo, Rúben S; Bueno, Orlando F A; Oliveira, Maria Gabriela M
2012-01-01
Studies investigating factors that influence tone recognition generally use recognition tests, whereas the majority of studies on verbal material use self-generated responses in the form of serial recall tests. In the present study, we investigated whether tonal and verbal materials share the same cognitive mechanisms by presenting an experimental instrument that evaluates short-term and working memory for tones, using self-generated sung responses that can be compared with verbal tests. This paradigm was designed with the same structure as the forward and backward digit span tests, but using digits, pseudowords, and tones as stimuli. The profiles of amateur and professional singers were compared on forward and backward digit, pseudoword, tone, and contour spans. In addition, an absolute pitch experimental group was included in order to observe the possible use of verbal labels in tone memorization tasks. In general, we observed that musical schooling has a slight positive influence on the recall of tones, as opposed to verbal material, which is not influenced by musical schooling. Furthermore, the ability to reproduce melodic contours (up and down patterns) is generally higher than the ability to reproduce exact tone sequences. However, backward spans were lower than forward spans for all stimuli (digits, pseudowords, tones, contour). Curiously, backward spans were disproportionately lower for tones than for verbal material; that is, the requirement to recall sequences in backward rather than forward order seems to differentially affect tonal stimuli. This difference does not vary according to musical expertise.
Alteration of complex negative emotions induced by music in euthymic patients with bipolar disorder.
Choppin, Sabine; Trost, Wiebke; Dondaine, Thibaut; Millet, Bruno; Drapier, Dominique; Vérin, Marc; Robert, Gabriel; Grandjean, Didier
2016-02-01
Research has shown bipolar disorder to be characterized by dysregulation of emotion processing, including biases in facial expression recognition that are most prevalent during depressive and manic states. Very few studies have examined induced emotions when patients are in a euthymic phase, and there has been no research on complex emotions. We therefore set out to test emotional hyperreactivity in response to musical excerpts inducing complex emotions in bipolar disorder during euthymia. We recruited 21 patients with bipolar disorder (BD) in a euthymic phase and 21 matched healthy controls. Participants first rated their emotional reactivity on two validated self-report scales (ERS and MAThyS). They then rated their music-induced emotions on nine continuous scales. The targeted emotions were wonder, power, melancholy and tension. We used a generalized linear mixed model to analyze the behavioral data. We found that participants in the euthymic bipolar group experienced more intense complex negative emotions than controls when the musical excerpts induced wonder. Moreover, patients exhibited greater emotional reactivity in daily life (ERS). Finally, a greater experience of tension while listening to positive music seemed to be mediated by greater emotional reactivity and a deficit in executive functions. The heterogeneity of the BD group in terms of clinical characteristics may have influenced the results. Euthymic patients with bipolar disorder exhibit more complex negative emotions than controls in response to positive music. Copyright © 2015 Elsevier B.V. All rights reserved.
Know thy sound: perceiving self and others in musical contexts.
Sevdalis, Vassilis; Keller, Peter E
2014-10-01
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory-motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual-motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice. Copyright © 2014 Elsevier B.V. All rights reserved.
Songbirds use spectral shape, not pitch, for sound pattern recognition
Bregman, Micah R.; Patel, Aniruddh D.; Gentner, Timothy Q.
2016-01-01
Humans easily recognize “transposed” musical melodies shifted up or down in log frequency. Surprisingly, songbirds seem to lack this capacity, although they can learn to recognize human melodies and use complex acoustic sequences for communication. Decades of research have led to the widespread belief that songbirds, unlike humans, are strongly biased to use absolute pitch (AP) in melody recognition. This work relies almost exclusively on acoustically simple stimuli that may belie sensitivities to more complex spectral features. Here, we investigate melody recognition in a species of songbird, the European Starling (Sturnus vulgaris), using tone sequences that vary in both pitch and timbre. We find that small manipulations altering either pitch or timbre independently can drive melody recognition to chance, suggesting that both percepts are poor descriptors of the perceptual cues used by birds for this task. Instead we show that melody recognition can generalize even in the absence of pitch, as long as the spectral shapes of the constituent tones are preserved. These results challenge conventional views regarding the use of pitch cues in nonhuman auditory sequence recognition. PMID:26811447
Children's note taking as a mnemonic tool.
Eskritt, Michelle; McLeod, Kellie
2008-09-01
When given the opportunity to take notes in memory tasks, children sometimes make notes that are not useful. The current study examined the role that task constraints might play in the production of nonmnemonic notes. In Experiment 1, children played one easy and one difficult memory game twice, once with the opportunity to make notes and once without that opportunity. More children produced functional notations for the easier task than for the more difficult task, and their notations were beneficial to memory performance. Experiment 2 found that the majority of children who at first made nonmnemonic notations were able to produce functional notations with minimal training, and there was no significant difference in notation quality or memory performance between spontaneous and trained note takers. Experiment 3 revealed that the majority of children could transfer their training to a novel task. The results suggest that children's production of nonmnemonic notes may be due in part to a lack of knowledge regarding what task information is important to represent or how to represent it in their notes rather than to an inability to make functional notes in general.
The Effect of Pattern Recognition and Tonal Predictability on Sight-Singing Ability
ERIC Educational Resources Information Center
Fine, Philip; Berry, Anna; Rosner, Burton
2006-01-01
This study investigated the role of concurrent musical parts in pitching ability in sight-singing, concentrating on the effects of melodic and harmonic coherence. Twenty-two experienced singers sang their part twice in each of four novel chorales. The chorales contained either original or altered melody and original (tonal) or altered (atonal)…
None
2018-05-18
Celebration of CERN's 25th birthday with a speech by L. Van Hove and J.B. Adams, musical interludes by Ms. Mey and her colleagues (starting with Beethoven). The general managers then proceed with the presentation of souvenirs to members of the personnel who have 25 years of service in the organization. A gesture of recognition is also given to Zwerner.
A cancelable biometric scheme based on multi-lead ECGs.
Peng-Tzu Chen; Shun-Chi Wu; Jui-Hsuan Hsieh
2017-07-01
Biometric technologies offer great advantages over other recognition methods, but there are concerns that they may compromise the privacy of individuals. In this paper, an electrocardiogram (ECG)-based cancelable biometric scheme is proposed to relieve such concerns. In this scheme, distinct biometric templates for a given beat bundle are constructed via "subspace collapsing." To determine the identity of any unknown beat bundle, the multiple signal classification (MUSIC) algorithm, incorporating a "suppression and poll" strategy, is adopted. Unlike the existing cancelable biometric schemes, knowledge of the distortion transform is not required for recognition. Experiments with real ECGs from 285 subjects are presented to illustrate the efficacy of the proposed scheme. The best recognition rate of 97.58% was achieved under the test condition N_train = 10 and N_test = 10.
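The subspace-based identification idea can be pictured with a generic sketch: enroll each identity as an SVD basis of its beat bundle and score unknown beats by a MUSIC-style inverse residual. This is a hedged illustration of generic subspace matching, not the paper's "subspace collapsing" or "suppression and poll" procedures, and all data below are synthetic.

    import numpy as np

    def enroll(beats, k=3):
        # beats: (n_samples, n_beats) for one identity; keep the k
        # leading left singular vectors as the template subspace
        U, _, _ = np.linalg.svd(beats, full_matrices=False)
        return U[:, :k]

    def score(x, U):
        # MUSIC-style score: a small residual outside the template
        # subspace yields a large score for the candidate identity
        resid = x - U @ (U.T @ x)
        return 1.0 / (resid @ resid + 1e-12)

    def identify(x, templates):
        # pick the enrolled identity whose subspace explains the beat best
        return max(templates, key=lambda name: score(x, templates[name]))

    rng = np.random.default_rng(0)
    templates = {name: enroll(rng.standard_normal((200, 20)))
                 for name in ("A", "B")}
    print(identify(rng.standard_normal(200), templates))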
Musical expertise has minimal impact on dual task performance.
Cocchini, Gianna; Filardi, Maria Serena; Crhonkova, Marcela; Halpern, Andrea R
2017-05-01
Studies investigating the effect of practice on dual-task performance have yielded conflicting findings, supporting different theoretical accounts of how attentional resources are organised when tasks are performed simultaneously. Because practice has been shown to reduce the attentional demand of the trained task, the impact of long-lasting training on one task is an ideal way to better understand the mechanisms underlying the dual-task decline in performance. Our study compared performance during dual-task execution in expert musicians and in controls with little if any musical experience. Participants performed a music recognition task and a visuo-spatial task separately (single task) or simultaneously (dual task). Both groups showed a significant but similar performance decline during dual tasks. In addition, the two groups showed a similar decline in dual-task performance during encoding and retrieval of the musical information, mainly attributable to a decline in sensitivity. Our results suggest that attention during dual tasks is distributed similarly by experts and non-experts. These findings are in line with previous studies showing a lack of sensitivity to difficulty and a lack of practice effect during dual tasks, supporting the idea that different tasks may rely on different, non-shareable attentional resources.
Sound Richness of Music Might Be Mediated by Color Perception: A PET Study.
Satoh, Masayuki; Nagata, Ken; Tomimoto, Hidekazu
2015-01-01
We investigated the role of the fusiform cortex in music processing with the use of PET, focusing on the perception of sound richness. Musically naïve subjects listened to familiar melodies with three kinds of accompaniment: (i) an accompaniment composed of only three basic chords (chord condition), (ii) a simple accompaniment typical of traditional music textbooks in elementary school (simple condition), and (iii) an accompaniment with rich and flowery sounds composed by a professional composer (complex condition). Using a PET subtraction technique, we studied changes in regional cerebral blood flow (rCBF) in the simple minus chord, complex minus simple, and complex minus chord conditions. All three subtractions consistently showed increases in rCBF at the posterior portion of the inferior temporal gyrus, including the lateral occipital complex (LOC) and the fusiform gyrus. We may conclude that certain association cortices such as the LOC and the fusiform cortex represent centers of multisensory integration, with foreground and background segregation occurring at the LOC level and the recognition of richness and floweriness of stimuli occurring in the fusiform cortex, in both vision and audition.
Developing a benchmark for emotional analysis of music.
Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad
2017-01-01
The music emotion recognition (MER) field has expanded rapidly in the last decade, and many new methods and audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER.
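Dynamic MER systems built on DEAM-style data are typically scored per song against the 2 Hz valence/arousal traces. The sketch below computes RMSE and Pearson correlation for one song's predictions; the trace contents are synthetic, and only the 2 Hz framing comes from the abstract.

    import numpy as np

    def dynamic_mer_scores(pred, truth):
        # pred, truth: one song's arousal (or valence) trace at 2 Hz
        rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
        r = float(np.corrcoef(pred, truth)[0, 1])
        return rmse, r

    # synthetic 30-second song: 60 frames at 2 Hz
    t = np.arange(60) / 2.0
    truth = np.sin(2 * np.pi * t / 30.0)        # made-up annotation trace
    pred = truth + 0.1 * np.random.randn(60)    # made-up predictions
    print(dynamic_mer_scores(pred, truth))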
The IMUTUS interactive music tuition system
NASA Astrophysics Data System (ADS)
Tambouratzis, George; Bakamidis, Stelios; Dologlou, Ioannis; Carayannis, George; Dendrinos, Markos
2002-05-01
This presentation focuses on the IMUTUS project, which concerns the creation of an innovative method for training users on traditional musical instruments with no MIDI (Musical Instrument Digital Interface) output. The entities collaborating in IMUTUS are ILSP (coordinator), EXODUS, SYSTEMA, DSI, SMF, GRAME, and KTH. The effectiveness of IMUTUS is enhanced via an advanced user interface incorporating multimedia techniques. The Internet plays a pivotal role during training, with the student receiving guidance over the net from a specially created teacher group. Interactivity is emphasized via automatic scoring tools, which provide fast yet accurate feedback to the user, while virtual reality methods assist students in perfecting their technique. IMUTUS incorporates specialized recognition technology for transforming acoustic signals and music scores into MIDI format for incorporation in the training process. This process is enhanced by periodically enriching the score database, while customization to each user's requirements is supported. This work is partially supported by the European Community under the Information Society Technology (IST) RTD programme. The authors are solely responsible for the content of this communication. It does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of data appearing therein.
Development of a Mandarin-English Bilingual Speech Recognition System for Real World Music Retrieval
NASA Astrophysics Data System (ADS)
Zhang, Qingqing; Pan, Jielin; Lin, Yang; Shao, Jian; Yan, Yonghong
In recent decades, there has been a great deal of research into the problem of bilingual speech recognition: developing a recognizer that can handle inter- and intra-sentential language switching between two languages. This paper presents our recent work on the development of a grammar-constrained, Mandarin-English bilingual Speech Recognition System (MESRS) for real-world music retrieval. Two of the main difficult issues in building bilingual speech recognition systems for real-world applications are tackled in this paper. One is to balance the performance and the complexity of the bilingual speech recognition system; the other is to deal effectively with matrix-language accents in the embedded language. In order to process intra-sentential language switching and reduce the amount of data required to robustly estimate statistical models, a compact single set of bilingual acoustic models derived by phone set merging and clustering is developed instead of using two separate monolingual models for each language. In our study, a novel two-pass phone clustering method based on a confusion matrix (TCM) is presented and compared with the log-likelihood measure method. Experiments show that TCM achieves better performance. Since potential system users' native language is Mandarin, which is regarded as the matrix language in our application, their pronunciations of English, the embedded language, usually contain Mandarin accents. In order to deal with matrix-language accents in the embedded language, different non-native adaptation approaches are investigated. Experiments show that the model retraining method outperforms other common adaptation methods such as Maximum A Posteriori (MAP). With the effective incorporation of the phone clustering and non-native adaptation approaches, the Phrase Error Rate (PER) of MESRS for English utterances was reduced by 24.47% relative to the baseline monolingual English system, while the PER on Mandarin utterances was comparable to that of the baseline monolingual Mandarin system. Bilingual utterances achieved a 22.37% relative PER reduction.
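Confusion-driven phone clustering can be sketched generically: rank phone pairs by their symmetric confusion rate and treat the most confusable pairs as merge candidates for the shared bilingual phone set. This is an illustrative single-pass greedy version, not the paper's two-pass TCM method; the phone labels and rates below are invented.

    import numpy as np

    def most_confusable_pairs(C, labels, top=3):
        # C[i, j]: rate at which phone i is recognized as phone j;
        # rank pairs by symmetric confusion as merge candidates
        pairs = [(labels[i], labels[j], (C[i, j] + C[j, i]) / 2.0)
                 for i in range(len(labels))
                 for j in range(i + 1, len(labels))]
        return sorted(pairs, key=lambda p: -p[2])[:top]

    # invented confusion rates between two Mandarin and two English phones
    labels = ["m:sh", "m:x", "e:sh", "e:s"]
    C = np.array([[0.00, 0.20, 0.35, 0.05],
                  [0.15, 0.00, 0.10, 0.02],
                  [0.30, 0.08, 0.00, 0.12],
                  [0.04, 0.03, 0.10, 0.00]])
    print(most_confusable_pairs(C, labels))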
Mental Imagery for Musical Changes in Loudness
Bailes, Freya; Bishop, Laura; Stevens, Catherine J.; Dean, Roger T.
2012-01-01
Musicians imagine music during mental rehearsal, when reading from a score, and while composing. An important characteristic of music is its temporality. Among the parameters that vary through time is sound intensity, perceived as patterns of loudness. Studies of mental imagery for melodies (i.e., pitch and rhythm) show interference from concurrent musical pitch and verbal tasks, but how we represent musical changes in loudness is unclear. Theories suggest that our perception of loudness change relates to our perception of force or effort, implying a motor representation. An experiment was conducted to investigate the modalities that contribute to imagery for loudness change. Musicians performed a within-subjects loudness change recall task comprising 48 trials. First, participants heard a musical scale played with varying patterns of loudness, which they were asked to remember. This was followed by an empty 8-s interval (nil-distractor control) or by the presentation of a series of four sine tones, four visual letters, or three conductor gestures, also to be remembered. Participants then saw an unfolding score of the notes of the scale, during which they were to imagine the corresponding scale in their mind while adjusting a slider to indicate the imagined changes in loudness. Finally, participants performed a recognition task on the tone, letter, or gesture sequence. Based on the motor hypothesis, we predicted that observing and remembering conductor gestures would impair recall of loudness change in the scale, while observing and remembering tone or letter sequences would not. Results support this prediction, with loudness change recalled less accurately in the gestures condition than in the control condition. An effect of musical training suggests that auditory and motor imagery ability may be closely related to domain expertise. PMID:23227014
Post-stroke acquired amusia: A comparison between right- and left-brain hemispheric damages.
Jafari, Zahra; Esmaili, Mahdiye; Delbari, Ahmad; Mehrpour, Masoud; Mohajerani, Majid H
2017-01-01
Although extensive research has been published about the emotional consequences of stroke, most studies have focused on emotional words, speech prosody, voices, or facial expressions. The emotional processing of musical excerpts following stroke has been relatively unexplored. The present study was conducted to investigate the effects of chronic stroke on the recognition of basic emotions in music. Seventy persons were studied, including 25 normal controls (NC), 25 persons with right brain damage (RBD) from stroke, and 20 persons with left brain damage (LBD) from stroke, between the ages of 31 and 71 years. The Musical Emotional Bursts (MEB) test, which consists of a set of short musical pieces expressing basic emotional states (happiness, sadness, and fear) and neutrality, was used to test musical emotional perception. Both stroke groups scored significantly more poorly than normal controls on the MEB total score and its subtests (p < 0.001). The RBD group was significantly less able than the LBD group to recognize sadness (p = 0.047) and neutrality (p = 0.015). Negative correlations were found between age and MEB scores for all groups, particularly the NC and RBD groups. Our findings indicate that stroke affecting the auditory cerebrum can cause acquired amusia, with greater severity in RBD than in LBD. These results support the "valence hypothesis" of right-hemisphere dominance in processing negative emotions.
[Cognitive rehabilitation of amusia].
Weill-Chounlamountry, A; Soyez-Gayout, L; Tessier, C; Pradat-Diehl, P
2008-06-01
The cognitive model of music processing has a modular architecture with two main pathways (a melody pathway and a time pathway) for processing the musical "message" and thus enabling music recognition. It also features a music-specific module for tonal encoding of pitch which stands apart from all other known cognitive systems (including language processing). To the best of our knowledge, rehabilitation therapy for amusia has not yet been reported. We developed a therapeutic method (inspired by work on word deafness) in order to determine whether specific rehabilitation based on melody discrimination could prompt the regression of amusia. We report the case of a patient having developed receptive, acquired amusia four years previously. His tone deafness disorder was assessed using the Montreal Battery of Evaluation of Amusia (MBEA), which revealed impairment of the melody pathway but no deficiency in the time pathway. A computer-assisted rehabilitation method was implemented; it used melody discrimination tasks and an errorless learning paradigm with progressively fading visual cues. After therapy, we noted an improvement in the overall MBEA score and its component subscores which could not be explained by spontaneous recovery (in view of the number of years since the neurological accident). The improvement was maintained at seven months post-therapy. Although post-therapy improvement in daily life was not systematically assessed, the patient started listening to his favourite music again. Specific amusia therapy has shown efficacy.
Thompson, Grace A
2018-01-13
Parents of children on the autism spectrum have consistently reported feeling uncertain in their parenting role and a desire for more practical advice from service providers about how to support their child in the home. There is growing recognition of the need for interventions to support the family as well as fostering child development outcomes. This study explores mothers' follow-up perspectives on family-centered music therapy (FCMT) four years after participating in a 16-week home-based program, and therefore provides a unique long-term viewpoint on FCMT outcomes. Eight mothers who previously participated in FCMT sessions with their young children on the autism spectrum were interviewed to explore their perceptions of any long-term outcomes. A descriptive phenomenological analysis revealed five global themes: improvement in mothers' confidence to engage their child; rare opportunities for mutual mother-child enjoyment; improved child social communication and quality of life; mothers' new understanding of the child's interests and strengths; and more opportunities for continuing the child's interest in music. Mothers perceived long-term benefits to social relationships within the family, leading to perceived enrichment in child and family quality of life following music therapy sessions. © American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Effects of timbre and tempo change on memory for music.
Halpern, Andrea R; Müllensiefen, Daniel
2008-09-01
We investigated the effects of different encoding tasks and of manipulations of two supposedly surface parameters of music on implicit and explicit memory for tunes. In two experiments, participants were first asked either to categorize the instrument or to judge the familiarity of 40 unfamiliar short tunes. Subsequently, participants were asked to give explicit and implicit memory ratings for a list of 80 tunes, which included the 40 previously heard. Half of the 40 previously heard tunes differed in timbre (Experiment 1) or tempo (Experiment 2) from the first exposure. A third experiment compared similarity ratings of the tunes that varied in timbre or tempo. Analysis of variance (ANOVA) results suggest, first, that the encoding task made no difference for either memory mode. Second, timbre and tempo change both impaired explicit memory, whereas tempo change additionally made implicit tune recognition worse. Results are discussed in the context of implicit memory for nonsemantic materials and possible differences between timbre and tempo in musical representations.
Integrated mobility measurement and notation system
NASA Technical Reports Server (NTRS)
Roebuck, J. A., Jr.
1967-01-01
System for description of movements and positions facilitates design of space suits with more mobility. This measurement and notation system gives concise and unequivocal descriptions, compatible with engineering analysis and applicable to specific needs.
Layered Systems Engineering Engines
NASA Technical Reports Server (NTRS)
Breidenthal, Julian C.; Overman, Marvin J.
2009-01-01
A notation is described for depicting the relationships between multiple, contemporaneous systems engineering efforts undertaken within a multi-layer system-of-systems hierarchy. We combined the concepts of remoteness of activity from the end customer, depiction of activity on a timeline, and data flow to create a new kind of diagram which we call a "Layered Vee Diagram." This notation is an advance over previous notations because it is able to be simultaneously precise about activity, level of granularity, product exchanges, and timing; these advances provide systems engineering managers a significantly improved ability to express and understand the relationships between many systems engineering efforts. Using the new notation, we obtain a key insight into the relationship between project duration and the strategy selected for chaining the systems engineering effort between layers, as well as insights into the costs, opportunities, and risks associated with alternate chaining strategies.
A 3D generic inverse dynamic method using wrench notation and quaternion algebra.
Dumas, R; Aissaoui, R; de Guise, J A
2004-06-01
In the literature, conventional 3D inverse dynamic models are limited in three aspects related to inverse dynamic notation, body segment parameters and kinematic formalism. First, conventional notation yields separate computations of the forces and moments with successive coordinate system transformations. Secondly, the way conventional body segment parameters are defined is based on the assumption that the inertia tensor is principal and the centre of mass is located between the proximal and distal ends. Thirdly, the conventional kinematic formalism uses Euler or Cardanic angles that are sequence-dependent and suffer from singularities. In order to overcome these limitations, this paper presents a new generic method for inverse dynamics. This generic method is based on wrench notation for inverse dynamics, a general definition of body segment parameters and quaternion algebra for the kinematic formalism.
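For readers unfamiliar with the two formalisms named above, the following are the standard textbook forms (assumed here, since the abstract gives no equations). A wrench gathers the resultant force and the moment about a point P into a single 6-component object, moving the reduction point changes only the moment term, and a unit quaternion rotates a vector embedded as a pure quaternion:

\[
\mathbf{W}_P = \begin{bmatrix} \mathbf{F} \\ \mathbf{M}_P \end{bmatrix},
\qquad
\mathbf{M}_Q = \mathbf{M}_P + \overrightarrow{QP} \times \mathbf{F},
\qquad
(0,\mathbf{v}') = q \otimes (0,\mathbf{v}) \otimes q^{*}, \quad \lVert q \rVert = 1 .
\]

Unlike Euler or Cardan angles, the quaternion parametrization is sequence-independent and free of singularities, which is the motivation given in the abstract.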
Yagahara, Ayako; Tsuji, Shintaro; Hukuda, Akihisa; Nishimoto, Naoki; Ogasawara, Katsuhiko
2016-03-01
The purpose of this study was to investigate the differences in the notation of technical terms and in their meanings among three terminologies published by Japanese radiology-related societies. The three terminologies compared were the "radiological technology terminology" and its supplement published by the Japan Society of Radiological Technology, the "medical physics terminology" published by the Japan Society of Medical Physics, and the "electric radiation terminology" published by the Japan Radiological Society. Terms were entered into spreadsheets and classified into three categories: Japanese notation, English notation, and meaning. Terms whose English character strings matched across terminologies were extracted and compared, their Japanese notations were compared among the three terminologies, and the meanings given in two of the terminologies (radiological technology terminology and electric radiation terminology) were compared. There were a total of 14,982 terms in the three terminologies. By English character string, 2,735 terms matched more than two terminologies, and 801 of these matched all three. Of the terms whose English character strings matched all three terminologies, 752 also matched in their Japanese character strings; of the terms matching two terminologies in English, 1,240 also matched in Japanese. With regard to meaning, eight terms had mismatched meanings between the two terminologies. For these terms, the two differing meanings shared a common concept, and the derived concepts appear to have been described according to each domain.
Perceptual Plasticity for Auditory Object Recognition
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
2017-01-01
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524
Landwehr, Markus; Fürstenberg, Dirk; Walger, Martin; von Wedel, Hasso; Meister, Hartmut
2014-01-01
Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users, electrodes were selectively deactivated in order to simulate different insertion depths and inter-electrode distances when using the high-definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined, and music quality rating was assessed. For intonation identification, HDCIS was robust against the different electrode configurations, whereas FSP showed significantly worse results when a shallow insertion depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though all addressed the reception of F0 cues. Rapid changes in F0, as in intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies did not show large effects when F0 information was available over a longer time period, as with speaker gender. Music quality relies on additional spectral cues beyond F0, and was poorest when a shallow insertion was simulated.
A Diagrammatic Language for Biochemical Networks
NASA Astrophysics Data System (ADS)
Maimon, Ron
2002-03-01
I present a diagrammatic language for representing the structure of biochemical networks. The language is designed to represent modular structure in a computational fashion, with composition of reactions replacing functional composition. This notation can represent arbitrarily large networks efficiently. It finds its most natural use in representing biological interaction networks, but it is a general computing language appropriate to any naturally occurring computation. Unlike lambda calculus or text-derived languages, it does not impose a tree structure on the diagrams, and so is more effective at representing biological function than competing notations.
ANTLR Tree Grammar Generator and Extensions
NASA Technical Reports Server (NTRS)
Craymer, Loring
2005-01-01
A computer program implements two extensions of ANTLR (Another Tool for Language Recognition), a set of software tools for translating source code between computing languages. ANTLR supports predicated-LL(k) lexer and parser grammars, a notation for annotating parser grammars to direct tree construction, and predicated tree grammars. [LL(k) signifies left-to-right, leftmost derivation with k tokens of look-ahead, referring to certain characteristics of a grammar.] One of the extensions is a syntax for tree transformations. The other is the generation of tree grammars from annotated parser or input tree grammars. These extensions can simplify the process of generating source-to-source language translators, and they make possible an approach, called "polyphase parsing," to translation between computing languages. The typical approach to translator development is to identify high-level semantic constructs such as "expressions," "declarations," and "definitions" as fundamental building blocks in the grammar specification used for language recognition. The polyphase approach is to lump ambiguous syntactic constructs together during parsing and then disambiguate the alternatives in subsequent tree-transformation passes. Polyphase parsing is believed to be useful for generating efficient recognizers for C++ and other languages that, like C++, have significant ambiguities.
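The polyphase idea can be illustrated with the classic C++ ambiguity "T * x" (pointer declaration versus multiplication): the parser lumps the construct into an ambiguous node, and a later tree pass resolves it once type information is available. The toy Python sketch below stands in for such a tree-transformation pass; it is not ANTLR syntax.

    def resolve(node, type_names):
        # second-pass disambiguation of a node lumped during parsing:
        # node is ('ambig', lhs, rhs) produced for source text 'lhs * rhs';
        # it resolves to a pointer declaration if lhs names a type,
        # otherwise to a multiplication
        tag, lhs, rhs = node
        assert tag == "ambig"
        return ("ptr_decl", lhs, rhs) if lhs in type_names else ("mul", lhs, rhs)

    print(resolve(("ambig", "T", "x"), {"T"}))  # ('ptr_decl', 'T', 'x')
    print(resolve(("ambig", "y", "x"), {"T"}))  # ('mul', 'y', 'x')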
Prickett, C A; Bridges, M S
2000-01-01
This study examined whether a basic song repertoire of folk-type melodies which can be accompanied with principal triads exists in the senior citizen population and compared this repertoire with that of music therapy students. An audiotape of the tunes of 25 standard songs, assumed in previous research to be known by everyone who has finished 6th grade, was played for undergraduate music therapy students (N = 78) and for healthy, active senior citizens (N = 78). None of the senior citizens had received any music therapy services, although many were involved in music activities such as the senior choir at church. Music therapy majors identified significantly more tunes than did the older listeners. Further analysis indicated that there is a good deal of overlap in the repertoires of these two groups. Sixteen tunes were recognized by 80% of therapy students; 10 songs were recognized by 80% of the seniors; the 10 songs identified by these seniors were 10 of the top 11 identified by the college students ("Kumbaya" was not known by the older listeners). Six songs could not be named by 50% of the students; 7 songs could not be named by 50% of the seniors; these two lists contained five common selections ("Oh Shenandoah," "Kookaburra," "Down in the Valley," "Shalom Chaverim," and "Tinga Layo"). Given the growth of the senior segment of the American population, the expansion of services for them, and the popularity of including music activities among these services, it would appear that music therapy students' basic knowledge of a repertoire of songs which are known to older people and which can easily be accompanied with principal triads is adequate, even though the range of songs which could be identified was broad (11-24) and the mean correctly named was merely 70.82% of a set which other investigators, teachers, and professional organizations have said represent a minimal repertoire for all citizens beyond the 6th grade.
Safety Case Notations: Alternatives for the Non-Graphically Inclined?
NASA Technical Reports Server (NTRS)
Holloway, C. M.
2008-01-01
This working paper presents preliminary ideas of five possible text-based notations for representing safety cases, which may be easier for non-graphically inclined people to use and understand than the currently popular graphics-based representations.
A comparison of BPMN 2.0 with other notations for manufacturing processes
NASA Astrophysics Data System (ADS)
García-Domínguez, A.; Marcos, Mariano; Medina, I.
2012-04-01
In order to study their current practices and improve on them, manufacturing firms need to view their processes from several viewpoints and at various abstraction levels. Several notations have been developed for this purpose, such as Value Stream Mapping or IDEF models. More recently, the BPMN 2.0 standard from the Object Management Group has been proposed for modeling business processes. A process organizes several activities (manual or automatic) into a single higher-level entity, which can be reused elsewhere in the organization. BPMN 2.0's potential for standardizing business interactions is well known, but there is little work on using it to model manufacturing processes. In this work, some of the earlier notations are outlined, BPMN 2.0 is discussed in more depth and positioned among them, guidelines on using BPMN 2.0 for manufacturing are offered, and its advantages and disadvantages in comparison with the other notations are presented.
Structural Features of Algebraic Quantum Notations
ERIC Educational Resources Information Center
Gire, Elizabeth; Price, Edward
2015-01-01
The formalism of quantum mechanics includes a rich collection of representations for describing quantum systems, including functions, graphs, matrices, histograms of probabilities, and Dirac notation. The varied features of these representations affect how computations are performed. For example, identifying probabilities of measurement outcomes…
40 CFR 60.431 - Definitions and notations.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 40, Protection of Environment; Environmental Protection Agency, Air Programs; § 60.431, Definitions and notations. The section's definitions cover printed products such as package inserts, book jackets, market circulars, magazine inserts, shopping news, newspapers, and magazines.
Assurance Arguments for the Non-Graphically-Inclined: Two Approaches
NASA Technical Reports Server (NTRS)
Heavner, Emily; Holloway, C. Michael
2017-01-01
We introduce and discuss two approaches to presenting assurance arguments. One approach is based on a monograph structure, while the other is based on a tabular structure. In today's research and academic setting, assurance cases often use a graphical notation; however, for people who are not graphically inclined, these notations can be difficult to read. This document proposes, outlines, explains, and presents examples of two non-graphical assurance argument notations that may be appropriate for non-graphically-inclined readers and that also provide argument writers with the freedom to add details and manipulate an argument in multiple ways.
Löwenkamp, Christian; Eloka, Owino; Schiller, Florian; Kao, Chung-Shan; Wu, Chaohua; Gao, Xiaorong; Franz, Volker H.
2016-01-01
The SNARC effect refers to an association of numbers and spatial properties of responses that is commonly thought to be amodal and independent of stimulus notation. We tested for a horizontal SNARC effect using Arabic digits, simple-form Chinese characters and Chinese hand signs in participants from Mainland China. We found a horizontal SNARC effect in all notations. This is the first time that a horizontal SNARC effect has been demonstrated in Chinese characters and Chinese hand signs. We tested for the SNARC effect in two experiments (parity judgement and magnitude judgement). The parity judgement task yielded clear, consistent SNARC effects in all notations, whereas results were more mixed in magnitude judgement. Both Chinese characters and Chinese hand signs are represented non-symbolically for low numbers and symbolically for higher numbers, allowing us to contrast within the same notation the effects of heavily learned non-symbolic vs. symbolic representation on the processing of numbers. In addition to finding a horizontal SNARC effect, we also found a robust numerical distance effect in all notations. This is particularly interesting as it persisted when participants reported using purely visual features to solve the task, thereby suggesting that numbers were processed semantically even when the task could be solved without the semantic information. PMID:27684956
Representation of pitch chroma by multi-peak spectral tuning in human auditory cortex
Moerel, Michelle; De Martino, Federico; Santoro, Roberta; Yacoub, Essa; Formisano, Elia
2015-01-01
Musical notes played at octave intervals (i.e., having the same pitch chroma) are perceived as similar. This well-known perceptual phenomenon lays at the foundation of melody recognition and music perception, yet its neural underpinnings remain largely unknown to date. Using fMRI with high sensitivity and spatial resolution, we examined the contribution of multi-peak spectral tuning to the neural representation of pitch chroma in human auditory cortex in two experiments. In experiment 1, our estimation of population spectral tuning curves from the responses to natural sounds confirmed—with new data—our recent results on the existence of cortical ensemble responses finely tuned to multiple frequencies at one octave distance (Moerel et al., 2013). In experiment 2, we fitted a mathematical model consisting of a pitch chroma and height component to explain the measured fMRI responses to piano notes. This analysis revealed that the octave-tuned populations—but not other cortical populations—harbored a neural representation of musical notes according to their pitch chroma. These results indicate that responses of auditory cortical populations selectively tuned to multiple frequencies at one octave distance predict well the perceptual similarity of musical notes with the same chroma, beyond the physical (frequency) distance of notes. PMID:25479020
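The chroma/height decomposition behind the model component in experiment 2 can be written compactly: a frequency maps to its log2 distance above a reference, whose fractional part is the chroma and whose integer part is the height. A minimal Python sketch follows; the 27.5 Hz reference and the example frequencies are assumptions for illustration.

    import numpy as np

    def chroma_height(freqs, f_ref=27.5):
        # position in log-frequency: integer part = pitch height (octave),
        # fractional part = pitch chroma (position within the octave)
        x = np.log2(np.asarray(freqs, dtype=float) / f_ref)
        height = np.floor(x)
        return x - height, height

    chroma, height = chroma_height([220.0, 440.0, 880.0])
    print(chroma)   # identical chroma: the three As share a pitch class
    print(height)   # heights differ by one octave each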
A neurally inspired musical instrument classification system based upon the sound onset.
Newton, Michael J; Smith, Leslie S
2012-06-01
Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
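One stage of the described pipeline, the spiking onset detector, can be sketched generically as a leaky integrate-and-fire unit driven by the rectified derivative of one band's energy envelope. This is a hedged, single-band illustration with an invented time constant and threshold, not the paper's dynamic-synapse model or the full onset-fingerprint pipeline.

    import numpy as np

    def lif_onset_spikes(envelope, fs, tau=0.02, threshold=5.0):
        # drive the neuron with the rectified envelope derivative so that
        # rising energy (a sound onset) pushes the membrane over threshold
        drive = np.maximum(np.diff(envelope, prepend=envelope[0]) * fs, 0.0)
        alpha = np.exp(-1.0 / (tau * fs))
        v, spikes = 0.0, []
        for n, d in enumerate(drive):
            v = alpha * v + (1.0 - alpha) * d   # leaky integration
            if v > threshold:
                spikes.append(n / fs)           # spike time marks an onset
                v = 0.0                         # reset membrane
        return spikes

    # toy band envelope: silence, then an abrupt note onset at 0.5 s
    fs = 1000
    env = np.concatenate([np.zeros(500), np.ones(500)])
    print(lif_onset_spikes(env, fs))  # ~[0.5]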
Kumarapeli, Pushpa; de Lusignan, Simon; Koczan, Phil; Jones, Beryl; Sheeler, Ian
2007-01-01
UK general practice is universally computerised, with computers used in the consulting room at the point of care. Practices use a range of different brands of computer system, which have developed organically to meet the needs of general practitioners and health service managers. Unified Modelling Language (UML) is a standard modelling and specification notation widely used in software engineering. Our aim was to examine the feasibility of using UML notation to compare the impact of different brands of general practice computer system on the clinical consultation. Multi-channel video recordings were made of simulated consultation sessions on three clinical computer systems in common use (EMIS, iSOFT Synergy, and IPS Vision). User-action recorder software logged keyboard and mouse use, and pattern-recognition software captured non-verbal communication. The outputs were used to create UML class and sequence diagrams for each consultation. We compared 'definition of the presenting problem' and 'prescribing', as these tasks were present in all the consultations analysed. Class diagrams identified the entities involved in the clinical consultation. Sequence diagrams identified common elements of the consultation (such as prescribing) and enabled comparisons between the different brands of computer system. Clinician-computer interaction varied greatly between brands. UML sequence diagrams are useful for identifying common tasks in the clinical consultation and for contrasting the impact of different brands of computer system on it. Further research is needed to see if the patterns demonstrated in this pilot study are consistently displayed.
A Survey of Logic Formalisms to Support Mishap Analysis
NASA Technical Reports Server (NTRS)
Johnson, Chris; Holloway, C. M.
2003-01-01
Mishap investigations provide important information about adverse events and near-miss incidents. They are intended to help avoid any recurrence of previous failures. Over time, they can also yield statistical information about incident frequencies that helps to detect patterns of failure and can validate risk assessments. However, the increasing complexity of many safety-critical systems is posing new challenges for mishap analysis. Similarly, the recognition that many failures have complex, systemic causes has helped to widen the scope of many mishap investigations. These two factors have combined to pose new challenges for the analysis of adverse events. A new generation of formal and semi-formal techniques has been proposed to help investigators address these problems. We introduce the term mishap logics to describe collectively the notations that might be applied to support the analysis of mishaps. The proponents of these notations have argued that they can be used to formally prove that certain events created the necessary and sufficient causes for a mishap to occur. Such proofs could reduce the bias that is often perceived to affect the interpretation of adverse events. Others have argued that one cannot use logical formalisms to prove causes in the same way that one might prove propositions or theorems: such mechanisms cannot accurately capture the wealth of inductive, deductive, and statistical forms of inference that investigators must use in their analysis of adverse events. This paper provides an overview of these mishap logics. It also identifies several additional classes of logic that might be used to support mishap analysis.
The Misuse of the Circle Notation to Represent Aromatic Rings.
ERIC Educational Resources Information Center
Belloli, Robert C.
1983-01-01
Discusses the confusion and erroneous conclusions that can result from the overuse and misuse of the circle notation to represent aromaticity in polycylic aromatic hydrocarbons. Includes nature of the problem, textbook treatment, and a possible compromise method of representation. (Author/JN)
Biased emotional recognition in depression: perception of emotions in music by depressed patients.
Punkanen, Marko; Eerola, Tuomas; Erkkilä, Jaakko
2011-04-01
Depression is a highly prevalent mood disorder that impairs a person's social skills and quality of life, and populations affected by depression suffer a higher mortality rate. Depression also affects a person's ability to recognize emotions. We designed a novel experiment to test the hypothesis that depressed patients show a judgement bias towards negative emotions. To investigate how depressed patients differ in their perception of emotions conveyed by musical examples, both healthy (n=30) and depressed (n=79) participants were presented with a set of 30 musical excerpts, each representing one of five basic target emotions, and asked to rate each excerpt on five Likert scales representing the amount of each of those same emotions perceived in the example. Depressed patients showed moderate but consistent negative self-report biases, both in the overall use of the scales and in their application to certain target emotions, when compared to healthy controls. In addition, the severity of the clinical state (depression, anxiety, and alexithymia) had an effect on the self-report biases for both positive and negative emotion ratings, particularly depression and alexithymia. Only musical stimuli were used, and all were clear examples of one of the basic emotions of happiness, sadness, fear, anger, and tenderness; no neutral or ambiguous excerpts were included. Depressed patients' negative emotional bias was thus demonstrated using musical stimuli. This suggests that the evaluation of emotional qualities in music could become a means to discriminate between depressed and non-depressed subjects. The practical implications of the present study relate both to diagnostic uses of such perceptual evaluations and to a better understanding of patients' emotional regulation strategies. Copyright © 2010 Elsevier B.V. All rights reserved.
Abstract numeric relations and the visual structure of algebra.
Landy, David; Brookes, David; Smout, Ryan
2014-09-01
Formal algebras are among the most powerful and general mechanisms for expressing quantitative relational statements; yet, even university engineering students, who are relatively proficient with algebraic manipulation, struggle with and often fail to correctly deploy basic aspects of algebraic notation (Clement, 1982). In the cognitive tradition, it has often been assumed that skilled users of these formalisms treat situations in terms of semantic properties encoded in an abstract syntax that governs the use of notation without particular regard to the details of the physical structure of the equation itself (Anderson, 2005; Hegarty, Mayer, & Monk, 1995). We explore how the notational structure of verbal descriptions or algebraic equations (e.g., the spatial proximity of certain words or the visual alignment of numbers and symbols in an equation) plays a role in the process of interpreting or constructing symbolic equations. We propose in particular that construction processes involve an alignment of notational structures across representation systems, biasing reasoners toward the selection of formal notations that maintain the visuospatial structure of source representations. For example, in the statement "There are 5 elephants for every 3 rhinoceroses," the spatial proximity of 5 and elephants and of 3 and rhinoceroses will bias reasoners to write the incorrect expression 5E = 3R, because that expression maintains the spatial relationships encoded in the source representation. In three experiments, participants constructed equations with a given structure, based on story problems with a variety of phrasings. We demonstrate how the notational alignment approach accounts naturally for a variety of previously reported phenomena in equation construction and successfully predicts error patterns that are not accounted for by prior explanations, such as the left-to-right transcription heuristic.
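The abstract's running example can be stated compactly. The rendering below is ours, with E and R standing for the numbers of elephants and rhinoceroses:

```latex
% "There are 5 elephants for every 3 rhinoceroses."
\frac{E}{R} = \frac{5}{3}
\;\Longrightarrow\;
3E = 5R \quad \text{(correct)}
\qquad \text{vs.} \qquad
5E = 3R \quad \text{(aligned with the sentence's word order, but wrong)}
```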
Silverman, Michael J
2015-01-01
Treatment motivation is a key component in the early rehabilitative stages for people with substance use disorders. To date, no music therapy researcher has studied how lyric analysis interventions might affect motivation in a randomized controlled design. The primary purpose of this study was to determine the effect of lyric analysis interventions on treatment motivation in patients on a detoxification unit using a single-session wait-list control design. A secondary purpose was to determine if there were between-group differences concerning two contrasting songs used for the lyric analyses. Participants (N=104) were cluster randomized to a group lyric analysis condition or a wait-list control condition. Participants received either a "Hurt" or a "How to Save a Life" lyric analysis treatment. The Texas Christian University Treatment Motivation Scale-Client Evaluation of Self at Intake (CESI) (Simpson, 2008[2005]) was used to measure aspects of treatment motivation: problem recognition, desire for help, treatment readiness, pressures for treatment, and total motivation. Results indicated significant between-group differences in measures of problem recognition, desire for help, treatment readiness, and total motivation, with experimental participants having higher treatment motivation means than control participants. There was no difference between the two lyric analysis interventions. Although the song used for lyric analysis interventions did not affect outcome, a single group-based music therapy lyric analysis session can be an effective psychosocial treatment intervention to enhance treatment motivation in patients on a detoxification unit. Limitations, implications for clinical practice, and suggestions for future research are provided. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Interplay between affect and arousal in recognition memory.
Greene, Ciara M; Bahri, Pooja; Soto, David
2010-07-23
Emotional states linked to arousal and mood are known to affect the efficiency of cognitive performance. However, the extent to which memory processes may be affected by arousal, mood or their interaction is poorly understood. Following a study phase of abstract shapes, we altered the emotional state of participants by means of exposure to music that varied in both mood and arousal dimensions, leading to four different emotional states: (i) positive mood-high arousal; (ii) positive mood-low arousal; (iii) negative mood-high arousal; (iv) negative mood-low arousal. Following the emotional induction, participants performed a memory recognition test. Critically, there was an interaction between mood and arousal on recognition performance. Memory was enhanced in the positive mood-high arousal and in the negative mood-low arousal states, relative to the other emotional conditions. Neither mood nor arousal alone but their interaction appears most critical to understanding the emotional enhancement of memory.
19 CFR 141.90 - Notation of tariff classification and value on invoice.
Code of Federal Regulations, 2013 CFR
2013-04-01
.... (d) Importer's notations in blue or black ink. Except when invoice line data are linked to an entry... the invoice by the importer or customs broker must be in blue or black ink. [T.D. 73-175, 38 FR 17447...
19 CFR 141.90 - Notation of tariff classification and value on invoice.
Code of Federal Regulations, 2012 CFR
2012-04-01
.... (d) Importer's notations in blue or black ink. Except when invoice line data are linked to an entry... the invoice by the importer or customs broker must be in blue or black ink. [T.D. 73-175, 38 FR 17447...
19 CFR 141.90 - Notation of tariff classification and value on invoice.
Code of Federal Regulations, 2011 CFR
2011-04-01
.... (d) Importer's notations in blue or black ink. Except when invoice line data are linked to an entry... the invoice by the importer or customs broker must be in blue or black ink. [T.D. 73-175, 38 FR 17447...
19 CFR 141.90 - Notation of tariff classification and value on invoice.
Code of Federal Regulations, 2014 CFR
2014-04-01
.... (d) Importer's notations in blue or black ink. Except when invoice line data are linked to an entry... the invoice by the importer or customs broker must be in blue or black ink. [T.D. 73-175, 38 FR 17447...
7 CFR 27.69 - Classification review; notations on certificate.
Code of Federal Regulations, 2011 CFR
2011-01-01
... review of classification is made after the issuance of a cotton class certificate, the results of the... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification review; notations on certificate. 27.69... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification...
Japanese Children's Understanding of Notational Systems
ERIC Educational Resources Information Center
Takahashi, Noboru
2012-01-01
This study examined Japanese children's understanding of two Japanese notational systems: "hiragana" and "kanji". In three experiments, 126 3- to 6-year-olds were asked to name words written in hiragana or kanji as they appeared with different pictures. Consistent with Bialystok ("Journal of Experimental Child…
Solving Constraint-Satisfaction Problems In Prolog Language
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1991-01-01
Technique for solution of constraint-satisfaction problems uses definite-clause grammars of Prolog computer language. Exploits fact that grammar-rule notation viewed as "state-change notation". Facilitates development of dynamic representation performing informed as well as blind searches. Applicable to design, scheduling, and planning problems.
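The abstract's central idea, that grammar rules can be read as state-change rules driving a search, can be sketched outside Prolog. The following minimal Python analogue is our construction, not the NTRS technique itself: each "rule" maps a state to candidate successor states, and a blind depth-first search threads the state through rule applications.

```python
# Minimal sketch: grammar-style rules as state changers in a blind
# depth-first search. Each rule maps a state to a list of successor
# states; a solution is any state satisfying `goal`. Illustrative only.

def solve(state, rules, goal, depth=10):
    if goal(state):
        return state
    if depth == 0:                        # bound the blind search
        return None
    for rule in rules:
        for nxt in rule(state):           # a rule may offer several successors
            found = solve(nxt, rules, goal, depth - 1)
            if found is not None:
                return found
    return None

# Toy constraint problem: reach the value 7 from 0 using +3 and +2 steps.
rules = [lambda s: [s + 3], lambda s: [s + 2]]
print(solve(0, rules, lambda s: s == 7))  # -> 7 (found via the chain 0, 3, 5, 7)
```

An informed search would replace the fixed rule order with a heuristic ordering over successor states.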
A Formal Messaging Notation for Alaskan Aviation Data
NASA Technical Reports Server (NTRS)
Rios, Joseph L.
2015-01-01
Data exchange is an increasingly important aspect of the National Airspace System. While many data communication channels have become more capable of sending and receiving data at higher throughput rates, there is still a need to use channels with limited throughput efficiently. The limitation can be based on technological issues, financial considerations, or both. This paper provides a complete description of several important aviation weather data products in Abstract Syntax Notation. By doing so, data providers can take advantage of Abstract Syntax Notation's ability to encode data in a highly compressed format. When data such as pilot weather reports, surface weather observations, and various weather predictions are compressed in this manner, throughput-limited communication channels can be used efficiently. This paper provides details on the Abstract Syntax Notation One (ASN.1) implementation for Alaskan aviation data and demonstrates its use on real-world aviation weather data samples; this matters in Alaska, which has sparse terrestrial data infrastructure, so data are often sent via relatively costly satellite channels.
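As a rough illustration of the compactness argument, a pilot-report-like record can be defined and encoded with unaligned PER using the Python asn1tools package. The schema, type, and field names below are invented for this sketch; the paper's Alaskan schemas are far more elaborate.

```python
# Sketch of ASN.1-based encoding for an aviation-style record.
# Requires: pip install asn1tools. Schema and field names are invented.
import asn1tools

SCHEMA = """
Aviation DEFINITIONS AUTOMATIC TAGS ::= BEGIN
    Pirep ::= SEQUENCE {
        station      IA5String (SIZE (4)),
        altitudeFt   INTEGER (0..60000),
        temperatureC INTEGER (-80..60)
    }
END
"""

spec = asn1tools.compile_string(SCHEMA, "uper")  # unaligned PER: densest codec
report = {"station": "PAJN", "altitudeFt": 9500, "temperatureC": -12}

encoded = spec.encode("Pirep", report)
print(len(encoded), "bytes")                     # 7 bytes here, vs ~60 as JSON
print(spec.decode("Pirep", encoded) == report)   # True: round-trips losslessly
```

Because the constraints bound every field, unaligned PER needs only 52 bits for this record: 28 for the 4-character station, 16 for the altitude range, and 8 for the temperature range.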
Boundary-layer equations in generalized curvilinear coordinates
NASA Technical Reports Server (NTRS)
Panaras, Argyris G.
1987-01-01
A set of higher-order boundary-layer equations is derived, valid for three-dimensional compressible flows. The equations are written in a generalized curvilinear coordinate system in which the surface coordinates are nonorthogonal; the third axis is restricted to be normal to the surface. The higher-order viscous terms that are retained depend on the surface curvature of the body. Thus, the equations are suitable for the calculation of the boundary layer about arbitrary vehicles. As a starting point, the Navier-Stokes equations are derived in tensor notation. Then, by means of an order-of-magnitude analysis, the boundary-layer equations are developed. To provide an interface between the analytical partial-differentiation notation and the compact tensor notation, a brief review is given of the most essential theorems of tensor analysis related to the equations of fluid dynamics. Many useful quantities, such as the contravariant and covariant metrics and the physical velocity components, are written in both notations.
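For readers unused to the machinery the abstract reviews, two standard bookkeeping identities of tensor notation (general results, not specific to this report) connect the covariant and contravariant metrics and, in the simplest orthogonal case, recover physical velocity components from contravariant ones:

```latex
% g_{ij}: covariant metric; g^{ij}: contravariant metric;
% u^i: contravariant velocity component; u(i): physical component.
g^{ik} g_{kj} = \delta^{i}_{\;j},
\qquad
u(i) = \sqrt{g_{ii}}\, u^{i} \quad \text{(orthogonal coordinates, no sum on } i\text{)}
```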
A clocking discipline for two-phase digital integrated circuits
NASA Astrophysics Data System (ADS)
Noice, D. C.
1983-09-01
Sooner or later, a designer of digital circuits must face the problem of timing verification in order to avoid errors caused by clock skew, critical races, and hazards. Unlike previous verification methods, such as timing simulation and timing analysis, the approach presented here guarantees correct operation despite uncertainty about delays in the circuit. The result is a clocking discipline that deals only with timing abstractions. It is not based on delay calculations; it is concerned only with correct, synchronous operation at some clock rate. Accordingly, it may be used earlier in the design cycle, which is particularly important for integrated circuit designs. The clocking discipline consists of a notation of clocking types and composition rules for using those types. Together, the notation and rules define a formal theory of two-phase clocking. The notation defines the names and exact characteristics of the different signals used in a two-phase digital system, and it makes it possible to develop rules for propagating the clocking types through particular circuits.
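A flavor of such a type discipline can be given as a checker that propagates clocking types through circuit elements and rejects compositions that mix phases. The rules and type names below are a simplified toy reconstruction of ours, not Noice's actual discipline:

```python
# Toy two-phase clocking-type checker. Signals carry a type naming the
# phase during which they are stable; composition rules reject mixing.

STABLE_PHI1 = "stable-phi1"   # settled while clock phase 1 is high
STABLE_PHI2 = "stable-phi2"   # settled while clock phase 2 is high

def combinational(*input_types):
    """Combinational logic preserves a single clocking type."""
    kinds = set(input_types)
    if len(kinds) != 1:
        raise TypeError(f"phase mixing at combinational gate: {kinds}")
    return kinds.pop()

def latch(input_type, clock_phase):
    """A latch clocked on one phase accepts data stable on the other."""
    expected = STABLE_PHI2 if clock_phase == 1 else STABLE_PHI1
    if input_type != expected:
        raise TypeError(f"latch on phase {clock_phase} fed {input_type}")
    return STABLE_PHI1 if clock_phase == 1 else STABLE_PHI2

a = STABLE_PHI2
b = combinational(a, a)          # still stable-phi2
q = latch(b, clock_phase=1)      # becomes stable-phi1
try:
    combinational(q, a)          # illegal: mixes phi1- and phi2-stable signals
except TypeError as err:
    print("rejected:", err)
```

The point of such rules is exactly the one the abstract makes: correctness is established without any delay arithmetic.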
Prosody perception and musical pitch discrimination in adults using cochlear implants.
Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine
2015-07-01
This study investigated prosodic perception and musical pitch discrimination in adults using cochlear implants (CI), and examined the relationship between prosody perception scores and non-linguistic auditory measures, demographic variables, and speech recognition scores. Participants were given four subtests of the PEPS-C (Profiling Elements of Prosody in Speech-Communication), the adult paralanguage subtest of the DANVA 2 (Diagnostic Analysis of Nonverbal Accuracy 2), and the contour and interval subtests of the MBEA (Montreal Battery of Evaluation of Amusia). Twelve CI users aged 25;5 to 78;0 years participated. CI participants performed significantly more poorly than normative values for New Zealand adults on the PEPS-C turn-end, affect, and contrastive stress reception subtests, but were not different from the norm on the chunking reception subtest. Performance on the DANVA 2 adult paralanguage subtest was lower than the normative mean reported by Saindon (2010). Most of the CI participants performed at chance level on both MBEA subtests. CI users have difficulty perceiving prosodic information accurately. Difficulty in understanding different aspects of prosody and music may be associated with reduced pitch perception ability.
Quantifying tone deafness in the general population.
Sloboda, John A; Wise, Karen J; Peretz, Isabelle
2005-12-01
Many people reach adulthood without acquiring significant music performance skills (singing or instrumental playing). A substantial proportion of these adults consider that this has come about because they are "not musical." Some of these people may be "true" congenital amusics, characterized by specific and substantial anomalies in the processing of musical pitch and rhythm sequences, while at the same time displaying normal processing of speech and language. It is likely, however, that many adults who believe that they are unmusical are neurologically normal. We could call these adults "false" amusics. Acquisition of musical competence has multiple personal, social, and environmental precursors. Deficiencies in these areas may lead to lack of musical achievement, despite the fact that an individual possesses the necessary underlying capacities. Adults may therefore self-define as "unmusical" or "tone-deaf" for reasons unconnected to any underlying anomaly. This paper reports on two linked research studies. The first is an interview study with adults defining themselves as tone-deaf or unmusical. The interview schedule was designed to discover what criteria are being used in their self-definitions. Preliminary results suggest that performance criteria (e.g., judging oneself as unable to sing) play a major role, even for people who claim and demonstrate no perceptual deficits. The second study reports progress on the development of new subtests for a revised version of the Montreal Battery for the Evaluation of Amusia (MBEA, Peretz et al., 2003). This currently contains six tests that allow for the assessment of melodic perception: contour, intervals, scale, rhythm, meter, and recognition memory. The MBEA does not assess two capacities that are generally accepted as central to normal music cognition: harmony and emotion. The development and norming of the emotion subtest will be described. When completed, the MBEA(R) will form a robust screening device for use with the general population, whose purpose is to discriminate "true" from "false" amusics. Such discrimination is essential to achieve a better understanding of the variety of causes of low musical achievement.
Notation for human immunoglobulin subclasses.
Kunkel, H G; Fahey, J L; Franklin, E C; Osserman, E F; Terry, W D
1966-01-01
After consultation between immunologists from a number of countries a nomenclature for human immunoglobulins was proposed in 1964 and was published in the Bulletin of the World Health Organization.(1) However, that proposed scheme of notation, which has already gained wide acceptance, left several specialized areas of nomenclature still to be resolved; one of these was the subclasses of immunoglobulins. Some of the research workers most closely concerned with the problem have now agreed upon a unified scheme for the notation of the human immunoglobulin subclasses, and, in particular, of the immunoglobulin G subclass, for which two different nomenclatorial schemes have been followed in recent years. Their proposals are given below.
Stagnation and herd mentality in the biomedical sciences.
Brody, Jonathan R; Kern, Scott E
2004-09-01
Academic biomedical science is like music, painting, or other fashionable arts and politics. Concepts that are perceived to be 'in' can become widely accepted and then stagnate, remaining unchallenged for many years, independent of their scientific validity. Fads in biomedical science have been observed to last for years or decades. The reasons for herd mentality and stagnation are manifold, but their recognition allows opportunities for constructive awareness and perhaps effective countermeasures.
Freedom of Expression and Rhetorical Art: The Problems of Avant-Garde Jazz.
ERIC Educational Resources Information Center
Francesconi, Robert
Although the success of black jazz has been limited by its lack of recognition in the white-controlled music industry, its rhetorical development as an expression of black consciousness can be traced from the bebop of the 1940s and early 1950s, through the hard bop and free jazz of the 1960s, to the jazz orientation of the disco circuit in the…
ACORNS: A Tool for the Visualisation and Modelling of Atypical Development
ERIC Educational Resources Information Center
Moore, D. G.; George, R.
2011-01-01
Across many academic disciplines visualisation and notation systems are used for modelling data and developing theory, but in child development visual models are not widely used; yet researchers and students of developmental difficulties may benefit from a visualisation and notation system which can clearly map developmental outcomes and…
Orderedness and Stratificational "and" Nodes.
ERIC Educational Resources Information Center
Herrick, Earl M.
It is possible to apply Lamb's stratificational theory and analysis to English graphonomy, but additional notation devices must be used to explain particular graphemes and their characteristics. The author presents cases where Lamb's notation is inadequate. In those cases, he devises new means for performing the analysis. The result of this…
Evaluation of the U.S. Army’s Aids Education Program
1992-03-25
[Rotated factor-analysis table residue (garbled): Initial Factor Method; Rotation Method: Varimax Rotation; Factor Pattern; Rotated Factor Pattern; Factor Structure.] ...material, the graphics contained in this appendix may be reproduced in the form of slides, or enlarged, laminated and bound together for use as a flip
18 CFR 3a.31 - Classification markings and special notations.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Classification markings and special notations. 3a.31 Section 3a.31 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES NATIONAL SECURITY INFORMATION Classification...
18 CFR 3a.31 - Classification markings and special notations.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Classification markings and special notations. 3a.31 Section 3a.31 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES NATIONAL SECURITY INFORMATION Classification...
18 CFR 3a.31 - Classification markings and special notations.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Classification markings and special notations. 3a.31 Section 3a.31 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES NATIONAL SECURITY INFORMATION Classification...
2009-12-01
[Acronym-list residue: Business Process Modeling; BPMN Business Process Modeling Notation; SoA Service-oriented Architecture; UML Unified Modeling Language; CSP ...] ...system developers. Supporting technologies include Business Process Modeling Notation (BPMN), Unified Modeling Language (UML), and model-driven architecture
Akrami, Haleh; Moghimi, Sahar
2017-01-01
We investigated the role of culture in processing hierarchical syntactic structures in music. We examined whether violations of non-local dependencies manifest in event-related potentials (ERPs) for Western and Iranian excerpts by recording EEG while participants passively listened to sequences of modified/original excerpts. We also investigated oscillatory and synchronization properties of brain responses during processing of hierarchical structures. For the Western excerpt, subjective ratings of conclusiveness were marginally significant and the difference in the ERP components fell short of significance. However, ERP and behavioral results showed that, while listening to culturally familiar music, subjects comprehended whether or not the hierarchical syntactic structure was fulfilled. Irregularities in the hierarchical structures of the Iranian excerpt elicited an early negativity in the central regions bilaterally, followed by two later negativities, from 450 to 700 and from 750 to 950 ms, the latter manifesting throughout the scalp. Moreover, violations of hierarchical structure in the Iranian excerpt were associated with (i) an early decrease in long-range alpha phase synchronization, (ii) an early increase in oscillatory activity in the beta band over the central areas, and (iii) a late decrease in theta-band phase synchrony between left anterior and right posterior regions. The results suggest that rhythmic structures and melodic fragments representative of Iranian music created a familiar context in which recognition of complex non-local syntactic structures was feasible for Iranian listeners. Analysis of the neural responses to the Iranian excerpt indicated neural mechanisms for processing hierarchical syntactic structures in music at different levels of cortical integration.
Rhythm synchronization performance and auditory working memory in early- and late-trained musicians.
Bailey, Jennifer A; Penhune, Virginia B
2010-07-01
Behavioural and neuroimaging studies provide evidence for a possible "sensitive" period in childhood development during which musical training results in long-lasting changes in brain structure and auditory and motor performance. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 (early-trained; ET) perform better on a visuomotor task than those who begin after the age of 7 (late-trained; LT), even when matched on total years of musical training and experience. Two questions were raised regarding the findings from this experiment. First, would this group performance difference be observed using a more familiar, musically relevant task such as auditory rhythms? Second, would cognitive abilities mediate this difference in task performance? To address these questions, ET and LT musicians, matched on years of musical training, hours of current practice and experience, were tested on an auditory rhythm synchronization task. The task consisted of six woodblock rhythms of varying levels of metrical complexity. In addition, participants were tested on cognitive subtests measuring vocabulary, working memory and pattern recognition. The two groups of musicians differed in their performance of the rhythm task, such that the ET musicians were better at reproducing the temporal structure of the rhythms. There were no group differences on the cognitive measures. Interestingly, across both groups, individual task performance correlated with auditory working memory abilities and years of formal training. These results support the idea of a sensitive period during the early years of childhood for developing sensorimotor synchronization abilities via musical training.
A single dual-stream framework for syntactic computations in music and language.
Musso, Mariacristina; Weiller, Cornelius; Horn, Andreas; Glauche, Volkmer; Umarova, Roza; Hennig, Jürgen; Schneider, Albrecht; Rijntjes, Michel
2015-08-15
This study is the first to compare in the same subjects the specific spatial distribution and the functional and anatomical connectivity of the neuronal resources that activate and integrate syntactic representations during music and language processing. Combining functional magnetic resonance imaging with functional connectivity and diffusion tensor imaging-based probabilistic tractography, we examined the brain network involved in the recognition and integration of words and chords that were not hierarchically related to the preceding syntax; that is, those deviating from the universal principles of grammar and tonal relatedness. This kind of syntactic processing in both domains was found to rely on a shared network in the left hemisphere centered on the inferior part of the inferior frontal gyrus (IFG), including pars opercularis and pars triangularis, and on dorsal and ventral long association tracts connecting this brain area with temporo-parietal regions. Language processing utilized some adjacent left hemispheric IFG and middle temporal regions more than music processing, and music processing also involved right hemisphere regions not activated in language processing. Our data indicate that a dual-stream system with dorsal and ventral long association tracts centered on a functionally and structurally highly differentiated left IFG is pivotal for domain-general syntactic competence over a broad range of elements including words and chords. Copyright © 2015 Elsevier Inc. All rights reserved.
Knowledge representation for commonality
NASA Technical Reports Server (NTRS)
Yeager, Dorian P.
1990-01-01
Domain-specific knowledge necessary for commonality analysis falls into two general classes: commonality constraints and costing information. Notations for encoding such knowledge should be powerful and flexible and should appeal to the domain expert. The notations employed by the Commonality Analysis Problem Solver (CAPS) analysis tool are described. Examples are given to illustrate the main concepts.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-22
... used for substances identified as causing or contributing to allergic contact dermatitis (ACD) or other..., 4676 Columbia Parkway, Cincinnati, Ohio 45226. FOR FURTHER INFORMATION CONTACT: Naomi Hudson, NIOSH... professionals, employers, and other interested parties in protecting workers from chemical contact with the skin...
Developing Systems of Notation as a Trace of Reasoning
ERIC Educational Resources Information Center
Tillema, Erik; Hackenberg, Amy
2011-01-01
In this paper, we engage in a thought experiment about how students might notate their reasoning for composing fractions multiplicatively (taking a fraction of a fraction and determining its size in relation to the whole). In the thought experiment we differentiate between two levels of a fraction composition scheme, which have been identified in…
Symbolic Notations and Students' Achievements in Algebra
ERIC Educational Resources Information Center
Peter, Ebiendele E.; Olaoye, Adetunji A.
2013-01-01
This study focuses on symbolic notations and its impact on students' achievement in Algebra. The main reason for this study rests on the observation from personal and professional experiences on students' increasing hatred for Algebra. One hundred and fifty (150) Senior Secondary School Students (SSS) from Ojo Local Education District, Ojo, Lagos,…
Avoiding Communication in Dense Linear Algebra
2013-08-16
[Front-matter residue: contents entries for 2.1 Notation and Definitions, 2.1.1 Asymptotic Notation, and parallelizing Strassen's matrix multiplication algorithm (Chapter 11).] From Chapter 2 (Preliminaries), Section 2.1 Notation and Definitions: In this section we... (between computations and algorithms). The following definition is based on [56]: Definition 2.1. A classical algorithm in linear algebra is one that
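The distinction drawn in Definition 2.1 matters because communication lower bounds differ by algorithm class. For classical O(n^3) matrix multiplication on a two-level memory with fast-memory size M, the well-known bandwidth bound (Hong and Kung, 1981; stated here from the general literature, not quoted from the thesis) is:

```latex
% Words moved between slow and fast memory for classical n x n matmul:
W = \Omega\!\left( \frac{n^{3}}{\sqrt{M}} \right)
% Strassen-like algorithms satisfy the analogous bound with the exponent 3
% replaced by \log_2 7, i.e. W = \Omega( n^{\log_2 7} / M^{(\log_2 7)/2 - 1} ).
```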
Articulated Multimedia Physics, Lesson 3, The Arithmetic of Scientific Notation.
ERIC Educational Resources Information Center
New York Inst. of Tech., Old Westbury.
As the third lesson of the Articulated Multimedia Physics Course, instructional materials are presented in this study guide. An introductory description is given for scientific notation methods. The subject content is provided in scrambled form, and the use of matrix transparencies is required for students to control their learning process.…
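The arithmetic the lesson drills reduces to one rule: multiply the mantissas, add the exponents, then renormalize so the mantissa's magnitude lies in [1, 10). A small helper of ours (not from the lesson materials) makes the rule concrete:

```python
# Multiplying numbers in scientific notation: multiply the mantissas,
# add the exponents, then renormalize so 1 <= |mantissa| < 10.

def sci_multiply(m1, e1, m2, e2):
    mantissa, exponent = m1 * m2, e1 + e2
    while abs(mantissa) >= 10:        # renormalize upward
        mantissa /= 10
        exponent += 1
    while 0 < abs(mantissa) < 1:      # renormalize downward
        mantissa *= 10
        exponent -= 1
    return mantissa, exponent

# (3.0 x 10^8) * (2.5 x 10^-3) = 7.5 x 10^5
print(sci_multiply(3.0, 8, 2.5, -3))  # -> (7.5, 5)
```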
Ichi, Ni, 3, 4: Neural Representation of Kana, Kanji, and Arabic Numbers in Native Japanese Speakers
ERIC Educational Resources Information Center
Coderre, Emily L.; Filippi, Christopher G.; Newhouse, Paul A.; Dumas, Julie A.
2009-01-01
The Japanese language represents numbers in kana digit words (a syllabic notation), kanji numbers and Arabic numbers (logographic notations). Kanji and Arabic numbers have previously shown similar patterns of numerical processing, and because of their shared logographic properties may exhibit similar brain areas of numerical representation. Kana…
New Bouncing Curved Arrow Technique for the Depiction of Organic Mechanisms
ERIC Educational Resources Information Center
Straumanis, Andrei R.; Ruder, Suzanne M.
2009-01-01
Many students fail to develop a conceptual understanding of organic chemistry. Evidence suggests this failure goes hand-in-hand with a failure to grasp the techniques, meaning, and usefulness of curved arrow notation. Use of curved arrow notation to illustrate electrophilic addition appears to be a critical juncture in student understanding.…
ERIC Educational Resources Information Center
Williams, Donald F.; Glasser, David
1991-01-01
Introduces and develops mathematical notation to assist undergraduate students in overcoming conceptual difficulties involving the underlying mathematics of state functions, which tend to be different from functions encountered by students in previous mathematical courses, because of the need to manipulate special types of partial derivatives and…
Semantic Processing in the Production of Numerals across Notations
ERIC Educational Resources Information Center
Herrera, Amparo; Macizo, Pedro
2012-01-01
In the present work, we conducted a series of experiments to explore the processing stages required to name numerals presented in different notations. To this end, we used the semantic blocking paradigm previously used in psycholinguistic studies. We found a facilitative effect of the semantically blocked context relative to the mixed context for Arabic…
Raising the Degree of Service-Orientation of a SOA-based Software System: A Case Study
2009-12-01
...protocols, as well as executable processes that can be compiled into runtime scripts" [2]. The Business Process Modeling Notation (BPMN) provides a... [Reference-list residue: Business Process Modeling Notation (BPMN) 1.2, Jan. 2009, URL: http://www.omg.org/spec/BPMN/1.2/; [25] .NET Framework Developer Center, .NET Remoting Overview, 2003, URL: http...]
Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering
2014-06-01
[Acronym-list residue: BOM Base Object Model; BPMN Business Process Model & Notation; DOD...] SysML. There are many variants such as the Unified Profile for DODAF/MODAF (UPDM) and Business Process Model & Notation (BPMN) that have origins in
43 CFR 3815.8 - Notation required in application for patent; conditions required in patent.
Code of Federal Regulations, 2012 CFR
2012-10-01
... patent; conditions required in patent. 3815.8 Section 3815.8 Public Lands: Interior Regulations Relating... Notation required in application for patent; conditions required in patent. (a) Every application for patent for any minerals located subject to this Act must bear on its face, before being executed by the...
43 CFR 3815.8 - Notation required in application for patent; conditions required in patent.
Code of Federal Regulations, 2011 CFR
2011-10-01
... patent; conditions required in patent. 3815.8 Section 3815.8 Public Lands: Interior Regulations Relating... Notation required in application for patent; conditions required in patent. (a) Every application for patent for any minerals located subject to this Act must bear on its face, before being executed by the...
43 CFR 3815.8 - Notation required in application for patent; conditions required in patent.
Code of Federal Regulations, 2014 CFR
2014-10-01
... patent; conditions required in patent. 3815.8 Section 3815.8 Public Lands: Interior Regulations Relating... Notation required in application for patent; conditions required in patent. (a) Every application for patent for any minerals located subject to this Act must bear on its face, before being executed by the...
43 CFR 3815.8 - Notation required in application for patent; conditions required in patent.
Code of Federal Regulations, 2013 CFR
2013-10-01
... patent; conditions required in patent. 3815.8 Section 3815.8 Public Lands: Interior Regulations Relating... Notation required in application for patent; conditions required in patent. (a) Every application for patent for any minerals located subject to this Act must bear on its face, before being executed by the...
NASA Astrophysics Data System (ADS)
Leng, Xiaodan
The trion model was developed using the Mountcastle organizational principle for the column as the basic neuronal network in the cortex, together with the physical-system analogy of Fisher's ANNNI spin model. An essential feature is that it is highly structured in time and in spatial connections. Simulations of a network of trions have shown that large numbers of quasi-stable, periodic spatial-temporal firing patterns can be excited. Characteristics of these patterns include being readily enhanced by only a small change in connection strengths, and evolving in certain natural sequences from one pattern to another. With only somewhat different parameters than those used for studying memory and pattern recognition, much more flowing and intriguing patterns emerged from the simulations. The results were striking when these probabilistic evolutions were mapped onto pitches and instruments to produce music: for example, different simple mappings of the same evolution give music having the "flavor" of a minuet, a waltz, folk music, or styles of specific periods. A theme can be learned so that evolutions have this theme and its variations recurring more often. This suggests that the trion model is a viable model for the coding of musical structure in human composition and perception. It is further proposed that the model is relevant for examining creativity in the higher cognitive functions of mathematics and chess, which are similar to music. An even higher level of cortical organization was modeled by coupling together several trion networks. Further, one of the crucial features of higher brain function, especially in music composition or appreciation, is the role of emotion and mood as controlled by the many neuromodulators or neuropeptides. The MILA model, whose underlying basis is a zero-level representation of Kac-Moody algebra, is used to modulate periodically the firing threshold of each network. Our preliminary results show that the introduction of "neuromodulation" into the dynamics of a few coupled trion networks greatly enhanced the richness of the music. Neuromodulation plays a very important role in cognitive processes. I discuss many aspects of cognitive processes, such as learning and memory, innervation of cortical functions, and coordination between music and emotions. The implications of my work are discussed.
Mincarone, Pierpaolo; Leo, Carlo Giacomo; Trujillo-Martín, Maria Del Mar; Manson, Jan; Guarino, Roberto; Ponzini, Giuseppe; Sabina, Saverio
2018-04-01
The importance of working toward quality improvement in healthcare implies an increasing interest in analysing, understanding and optimizing process logic and the sequences of activities embedded in healthcare processes. Their graphical representation promotes faster learning, higher retention and better compliance. The study identifies standardized graphical languages and notations applied to patient care processes and investigates their usefulness in the healthcare setting. Peer-reviewed literature up to 19 May 2016 was searched, and the information was complemented by a questionnaire sent to the authors of selected studies. The systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. Five authors extracted the results of the selected studies. Ten articles met the inclusion criteria. One notation and one language for healthcare process modelling were identified with an application to patient care processes: Business Process Model and Notation and Unified Modeling Language™. One of the authors of every selected study completed the questionnaire. In the completed questionnaires, comprehensibility for users and facilitation of inter-professional analysis of processes were recognized as major strengths of process modelling in healthcare. Both the notation and the language can increase the clarity of presentation thanks to their visual properties, their capacity for easily managing macro and micro scenarios, and their ability to represent process logic clearly and precisely. Both can increase the applicability of guidelines/pathways by representing complex scenarios through charts and algorithms, hence contributing to reducing unjustified practice variations, which negatively impact quality of care and patient safety.
Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir
2010-09-01
Our objectives included calculating the permeability coefficient and dermal penetration rates (flux values) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model, developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. Many studies are available in which formulae for coefficients of permeability from saturated aqueous solutions (K(p)) have been related to the physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen-bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database, were used to calculate permeability coefficients. The dermal penetration rate (flux) was then estimated from the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, the estimated dermal penetration rates were used to assign skin notations to the chemicals. Critical fluxes defined in the literature were recommended as reference values for skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis to the calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of calculated K(p) values with data obtained earlier from other models showed that LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone: both lipophilic and polar pathways of permeation exist across the stratum corneum. It is feasible to predict skin notation on the basis of the LFER and other published models; of the 112 chemicals, 94 (84%) should have the skin notation in the OEL list based on the LFER calculations. The skin notation had been estimated by other published models for almost 94% of the chemicals. Twenty-nine (25.8%) chemicals were identified as having significant absorption and 65 (58%) as having the potential for dermal toxicity. We found major differences between alternative published analytical models and their ability to determine whether particular chemicals were potentially dermotoxic. Copyright © 2010 Elsevier B.V. All rights reserved.
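The solvation equation referenced here has the standard Abraham LFER form, shown below with the descriptors the abstract names; the lowercase coefficients are fitted to skin-permeation data, and the specific fitted values used in the paper are not reproduced here:

```latex
% Abraham-type LFER for the skin permeability coefficient K_p.
% E: excess molar refractivity, S: dipolarity/polarisability,
% A, B: summation hydrogen-bond acidity and basicity,
% V: McGowan characteristic volume; c, e, s, a, b, v: fitted coefficients.
\log K_p = c + eE + sS + aA + bB + vV
% The dermal penetration rate (flux) from a saturated aqueous solution
% then follows from K_p and the saturation concentration C_sat.
```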
Clark, Callie A M; Sacrey, Lori-Ann R; Whishaw, Ian Q
2009-09-15
External cues, including familiar music, can release Parkinson's disease patients from catalepsy, but the neural basis of the effect is not well understood. In the present study, posturography, the study of posture and its allied reflexes, was used to develop an animal model that could be used to investigate the underlying neural mechanisms of this sound-induced behavioral activation. In the rat, akinetic catalepsy induced by a dopamine D2 receptor antagonist (haloperidol, 5 mg/kg) can model human catalepsy. Using this model, two experiments examined whether novel versus familiar sound stimuli could interrupt haloperidol-induced catalepsy in the rat. Rats were placed on a variably inclined grid and novel or familiar auditory cues (a single key jingle or multiple key jingles) were presented. The dependent variable was movement by the rats to regain equilibrium, as assessed with a movement notation score. The sound cues enhanced the movements used to regain postural stability, and familiar sound stimuli were more effective than unfamiliar sound stimuli. The results are discussed in relation to the idea that nonlemniscal and lemniscal auditory pathways contribute differentially to behavioral activation versus tonotopic processing of sound.
ERIC Educational Resources Information Center
Blanton, Maria; Brizuela, Bárbara M.; Gardiner, Angela Murphy; Sawrey, Katie; Newman-Owens, Ashley
2017-01-01
Recent research suggests that children in elementary grades have some facility with variable and variable notation in ways that warrant closer attention. We report here on an empirically developed progression in first-grade children's thinking about these concepts in functional relationships. Using learning trajectories research as a framework for…
Glossing for Improved Comprehension: Progress and Prospect.
ERIC Educational Resources Information Center
Otto, Wayne; Hayes, Bernie
The terms gloss and glossing are being used to designate and describe the systematic use of marginal notes and other extra-text notations to direct readers' attention while they read. Gloss notations may serve as an aid to direct students to content areas of text and to levels of understanding that make optimal use of their current--and sometimes…
2017-06-01
[List-of-tables residue: Table 1, Notation for fabric and ensemble resistances; Table 2, Weight reduction of CB garment; a thermal manikin is also referenced.] ...samples were tested on a Sweating Guarded Hot Plate (SGHP) to measure fabric thermal and evaporative resistance, respectively. The ensembles were tested
Reading a Note, Reading a Mind: Children's Notating Skills and Understanding of Mind
ERIC Educational Resources Information Center
Leyva, Diana; Hopson, Sarah; Nichols, Ashley
2012-01-01
Are children's understanding of mental states (understanding of mind) related to their notating skills, that is, their ability to produce and read written marks to convey information about objects and number? Fifty-three preschoolers and kindergarteners were presented with a dictation task where they produced some written marks and were later…
Diagrams and Math Notation in E-Learning: Growing Pains of a New Generation
ERIC Educational Resources Information Center
Smith, Glenn Gordon; Ferguson, David
2004-01-01
Current e-learning environments are ill-suited to college mathematics. Instructors/students struggle to post diagrams and math notation. A new generation of math-friendly e-learning tools, including WebEQ, bundled with Blackboard 6, and NetTutor's Whiteboard, address these problems. This paper compares these two systems using criteria for ideal…
ERIC Educational Resources Information Center
Kell, Clare; Sweet, John
2017-01-01
This paper shows how peer observation of learning and teaching (POLT) discussions can be augmented through the use of a dynamic visual notation that makes visible, for interpretation, elements of teacher-learner and learner-learner nonverbal interactions. Making visible the nonverbal, physical, spatial and kinesics (eye-based) elements of…
The Use of Force Notation to Detect Students' Misconceptions: Mutual Interactions Case
ERIC Educational Resources Information Center
Serhane, Ahcene; Zeghdaoui, Abdelhamid; Debiache, Mehdi
2017-01-01
Using a conventional notation for representing forces on diagrams, students were presented with questions on the interaction between two objects. The results show that complete understanding of Newton's Third Law of Motion is quite rare, and that some problems relate to misunderstanding which force acts on each body. The use of the terms…
19 CFR 141.90 - Notation of tariff classification and value on invoice.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Notation of tariff classification and value on... classification and value on invoice. (a) [Reserved] (b) Classification and rate of duty. The importer or customs... invoice value which have been made to arrive at the aggregate entered value. In addition, the entered unit...
Elmarakeby, Haitham; Arefiyan, Mostafa; Myers, Elijah; Li, Song; Grene, Ruth; Heath, Lenwood S
2017-12-01
The Beacon Editor is a cross-platform desktop application for the creation and modification of signal transduction pathways using the Systems Biology Graphical Notation Activity Flow (SBGN-AF) language. Prompted by biologists' requests for enhancements, the Beacon Editor includes numerous powerful features supporting pathway creation and presentation.
Children's Use of Variables and Variable Notation to Represent Their Algebraic Ideas
ERIC Educational Resources Information Center
Brizuela, Bárbara M.; Blanton, Maria; Sawrey, Katharine; Newman-Owens, Ashley; Murphy Gardiner, Angela
2015-01-01
In this article, we analyze a first grade classroom episode and individual interviews with students who participated in that classroom event to provide evidence of the variety of understandings about variable and variable notation held by first grade children approximately six years of age. Our findings illustrate that given the opportunity,…
NASA Astrophysics Data System (ADS)
Nordström, Jan; Ghasemi, Fatemeh
2018-05-01
A few notational errors were recently discovered in the above publication. The notation used in the note is valid for fluxes of the form $f_L(u) = A_L u$ and $f_R(v) = A_R v$, where $A_L = A_R$ is an $m \times m$ constant symmetric matrix.
Developing a benchmark for emotional analysis of music
Yang, Yi-Hsuan; Soleymani, Mohammad
2017-01-01
The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER. PMID:28282400
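Dynamic MER systems of the kind benchmarked here are conventionally scored per song against the 2 Hz valence/arousal traces. The sketch below is our illustration of that scoring shape, not the official DEAM evaluation script, whose particulars (averaging, confidence handling) differ:

```python
# Minimal per-song scoring for dynamic music emotion recognition:
# compare predicted traces against 2 Hz annotation traces using RMSE
# and Pearson correlation, then average over songs. Toy data below.
import numpy as np

def score_song(pred, truth):
    pred, truth = np.asarray(pred), np.asarray(truth)
    rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
    corr = float(np.corrcoef(pred, truth)[0, 1])
    return rmse, corr

songs = {
    "song_0001": ([0.1, 0.3, 0.5, 0.4], [0.2, 0.35, 0.45, 0.5]),
    "song_0002": ([-0.2, -0.1, 0.0, 0.2], [-0.3, -0.1, 0.1, 0.2]),
}
results = [score_song(pred, truth) for pred, truth in songs.values()]
print("mean RMSE:", np.mean([r for r, _ in results]))
print("mean r:   ", np.mean([c for _, c in results]))
```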
Memory for musical tones: the impact of tonality and the creation of false memories.
Vuvan, Dominique T; Podolak, Olivia M; Schmuckler, Mark A
2014-01-01
Although the relation between tonality and musical memory has been fairly well-studied, less is known regarding the contribution of tonal-schematic expectancies to this relation. Three experiments investigated the influence of tonal expectancies on memory for single tones in a tonal melodic context. In the first experiment, listener responses indicated superior recognition of both expected and unexpected targets in a major tonal context than for moderately expected targets. Importantly, and in support of previous work on false memories, listener responses also revealed a higher false alarm rate for expected than unexpected targets. These results indicate roles for tonal schematic congruency as well as distinctiveness in memory for melodic tones. The second experiment utilized minor melodies, which weakened tonal expectancies since the minor tonality can be represented in three forms simultaneously. Finally, tonal expectancies were abolished entirely in the third experiment through the use of atonal melodies. Accordingly, the expectancy-based results observed in the first experiment were disrupted in the second experiment, and disappeared in the third experiment. These results are discussed in light of schema theory, musical expectancy, and classic memory work on the availability and distinctiveness heuristics.
Correlation of vocals and lyrics with left temporal musicogenic epilepsy.
Tseng, Wei-En J; Lim, Siew-Na; Chen, Lu-An; Jou, Shuo-Bin; Hsieh, Hsiang-Yao; Cheng, Mei-Yun; Chang, Chun-Wei; Li, Han-Tao; Chiang, Hsing-I; Wu, Tony
2018-03-15
Whether the cognitive processing of music and speech relies on shared or distinct neuronal mechanisms remains unclear. Music and language processing in the brain are right and left temporal functions, respectively. We studied patients with musicogenic epilepsy (ME) that was specifically triggered by popular songs to analyze brain hyperexcitability triggered by specific stimuli. The study included two men and one woman (all right-handed, aged 35-55 years). The patients had sound-triggered left temporal ME in response to popular songs with vocals, but not to instrumental, classical, or nonvocal piano solo versions of the same song. Sentimental lyrics, high-pitched singing, specificity/familiarity, and singing in the native language were the most significant triggering factors. We found that recognition of the human voice and analysis of lyrics are important causal factors in left temporal ME and provide observational evidence that sounds with speech structure are predominantly processed in the left temporal lobe. A literature review indicated that language-associated stimuli triggered ME in the left temporal epileptogenic zone at a nearly twofold higher rate compared with the right temporal region. Further research on ME may enhance understanding of the cognitive neuroscience of music. © 2018 New York Academy of Sciences.
An integrative review of the enjoyment of sadness associated with music.
Eerola, Tuomas; Vuoskoski, Jonna K; Peltola, Henna-Riikka; Putkinen, Vesa; Schäfer, Katharina
2017-11-23
The recent surge of interest towards the paradoxical pleasure produced by sad music has generated a handful of theories and an array of empirical explorations on the topic. However, none of these have attempted to weigh the existing evidence in a systematic fashion. The present work puts forward an integrative framework laid out over three levels of explanation - biological, psycho-social, and cultural - to compare and integrate the existing findings in a meaningful way. First, we review the evidence pertinent to experiences of pleasure associated with sad music from the fields of neuroscience, psychophysiology, and endocrinology. Then, the psychological and interpersonal mechanisms underlying the recognition and induction of sadness in the context of music are combined with putative explanations ranging from social surrogacy and nostalgia to feelings of being moved. Finally, we address the cultural aspects of the paradox - the extent to which it is embedded in the Western notion of music as an aesthetic, contemplative object - by synthesising findings from history, ethnography, and empirical studies. Furthermore, we complement these explanations by considering the particularly significant meanings that sadness portrayed in art can evoke in some perceivers. Our central claim is that one cannot attribute the enjoyment of sadness fully to any one of these levels, but to a chain of functionalities afforded by each level. Each explanatory level has several putative explanations and its own shift towards positive valence, but none of them deliver the full transformation from a highly negative experience to a fully enjoyable experience alone. The current evidence within this framework ranges from weak to non-existent at the biological level, moderate at the psychological level, and suggestive at the cultural level. We propose a series of focussed topics for future investigation that would make it possible to deconstruct the drivers and constraints of the processes leading to pleasurable music-related sadness. Copyright © 2017 Elsevier B.V. All rights reserved.
Alcohol brand appearances in US popular music.
Primack, Brian A; Nuzzo, Erin; Rice, Kristen R; Sargent, James D
2012-03-01
The average US adolescent is exposed to 34 references to alcohol in popular music daily. Although brand recognition is an independent, potent risk factor for alcohol outcomes among adolescents, alcohol brand appearances in popular music have not been assessed systematically. We aimed to determine the prevalence of and contextual elements associated with alcohol brand appearances in US popular music. Qualitative content analysis. We used Billboard Magazine to identify songs to which US adolescents were most exposed in 2005-07. For each of the 793 songs, two trained coders analyzed independently the lyrics of each song for references to alcohol and alcohol brand appearances. Subsequent in-depth assessments utilized Atlas.ti to determine contextual factors associated with each of the alcohol brand appearances. Our final code book contained 27 relevant codes representing six categories: alcohol types, consequences, emotional states, activities, status and objects. Average inter-rater reliability was high (κ = 0.80), and all differences were easily adjudicated. Of the 793 songs in our sample, 169 (21.3%) referred explicitly to alcohol, and of those, 41 (24.3%) contained an alcohol brand appearance. Consequences associated with alcohol were more often positive than negative (41.5% versus 17.1%, P < 0.001). Alcohol brand appearances were associated commonly with wealth (63.4%), sex (58.5%), luxury objects (51.2%), partying (48.8%), other drugs (43.9%) and vehicles (39.0%). One in five songs sampled from US popular music had explicit references to alcohol, and one-quarter of these mentioned a specific alcohol brand. These alcohol brand appearances are associated commonly with a luxury life-style characterized by wealth, sex, partying and other drugs. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.
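The agreement statistic reported above (κ = 0.80) is, for two coders, presumably Cohen's kappa over the coded labels; it is a one-line computation. The labels below are toy values chosen for illustration (they happen to yield 0.8), not the study's data:

```python
# Cohen's kappa for two coders' binary judgments on ten toy items.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohen_kappa_score(coder_a, coder_b), 2))  # -> 0.8
```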
Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers
2016-07-01
to go somewhere but you did not say where”) (Kennedy et al. 2007; Perzanowski et al. 2000a, 2000b). Many efforts are currently focused on developing...start/end of a gesture. They reported a 98% accuracy using a modified handwriting recognition statistical algorithm. The same algorithm was tested...to the device (light switch, music player) and saying “lights on” or “volume up” (Wilson and Shafer 2003). The Nintendo Wii remote controller has
Huijgen, Josefien; Dellacherie, Delphine; Tillmann, Barbara; Clément, Sylvain; Bigand, Emmanuel; Dupont, Sophie; Samson, Séverine
2015-10-01
Previous research has indicated that the medial temporal lobe (MTL), and more specifically the perirhinal cortex, plays a role in the feeling of familiarity for non-musical stimuli. Here, we examined the contribution of the MTL to the feeling of familiarity for music by testing patients with unilateral MTL lesions. We used a gating paradigm: segments of familiar and unfamiliar musical excerpts were played with increasing durations (250, 500, 1000, 2000, 4000 ms and complete excerpts), and participants provided familiarity judgments for each segment. Based on the hypothesis that patients might need longer segments than healthy controls (HC) to identify excerpts as familiar, we examined the onset of the emergence of familiarity in HC, patients with a right MTL resection (RTR), and patients with a left MTL resection (LTR). In contrast to our hypothesis, we found that the feeling of familiarity was relatively spared in patients with a right or left MTL lesion, even for short excerpts. All participants were able to differentiate familiar from unfamiliar excerpts as early as 500 ms, although the difference between familiar and unfamiliar judgments was greater in HC than in patients. These findings suggest that a unilateral MTL lesion does not impair the emergence of the feeling of familiarity. We also assessed whether the dynamics of the musical excerpts (linked to the type and amount of information they contain) modulated the onset of the feeling of familiarity in the three groups. The difference between familiar and unfamiliar judgments was greater for high- than for low-dynamic excerpts for HC and RTR patients, but not for LTR patients, indicating that the LTR group did not benefit from dynamics in the same way. Overall, our results imply that the recognition of previously well-learned musical excerpts does not depend on the integrity of either the right or the left MTL structures. Patients with a unilateral MTL resection may compensate for the effects of unilateral damage by using the intact contralateral temporal lobe. Moreover, we suggest that remote semantic memory for music might depend more strongly on neocortical structures than on the MTL. Copyright © 2015. Published by Elsevier Ltd.
Hearing, listening, action: Enhancing nursing practice through aural awareness education.
Collins, Anita; Vanderheide, Rebecca; McKenna, Lisa
2014-01-01
Noise overload within the clinical environment has been found to interfere with the healing process for patients, as well as with nurses' ability to assess patients effectively. Awareness of and responsibility for noise production begin during initial nursing training, and consequently a program to enhance aural awareness skills was designed for graduate-entry nursing students at an Australian university. The program utilized an innovative combination of music education activities to develop the students' ability to distinguish individual sounds (hearing), appreciate patients' experience of sounds (listening), and improve their auscultation skills and reduce the negative effects of noise on patients (action). Using a mixed methods approach, students reported heightened auscultation skills and greater recognition of both patients' and clinicians' aural overload. Results of this pilot suggest that music education activities can assist nursing students to develop their aural awareness and to action changes within the clinical environment to improve the patient's experience of noise.
Development of a Notational Analysis System for Selected Soccer Skills of a Women's College Team
ERIC Educational Resources Information Center
Thomas, Camille; Fellingham, Gilbert; Vehrs, Pat
2009-01-01
The purposes of this study were to develop a notational system to evaluate passing, dribbling, first touch, and individual defensive skills as they relate to success during women's soccer games, and to develop a statistical model to weigh the importance of each skill in creating scoring opportunities. Sequences of skills in ten games of a National…
ERIC Educational Resources Information Center
Watson, Kevin E.
2010-01-01
The purpose of the present study was to investigate the effects of aural versus notated pedagogical materials on achievement and self-efficacy in instrumental jazz improvisation performance. A secondary purpose of this study was to investigate how achievement and self-efficacy may be related to selected experience variables. The sample for the…
ERIC Educational Resources Information Center
Al-Dor, Nira
2006-01-01
The objective of this study is to present "The Spiral Model for the Development of Coordination" (SMDC), a learning model that reflects the complexity and possibilities embodied in the learning of Eshkol-Wachman Movement Notation (EWMN), an Israeli invention. This model constituted the infrastructure for a comprehensive study that examined the…
Assessing Resource Value and Relationships Between Objectives in Effects-Based Operations
2006-03-01
terms of a set of desired end states for the campaign's system of systems. Value theory was used to identify the resource's value in terms of the direct... [Table-of-contents excerpt: 2.3 System of Systems Analysis (SoSA) Definitions and Notation; 2.4 Mathematical Notation to Describe an Enemy System; 2.5 Weighting Techniques]
Code of Federal Regulations, 2010 CFR
2010-04-01
...) any third-party communication notations required to be placed pursuant to § 301.6110-4(a) on the face... a written determination on which a third-party communication notation has been placed pursuant to... [26 CFR (Internal Revenue), vol. 18, 2010-04-01; section heading: "Notice and time requirements; actions to..."]
Cognitive Development of Applying the Chain Rule through Three Worlds of Mathematics
ERIC Educational Resources Information Center
Kabael, Tangul Uygur
2010-01-01
The derivative of a composite function, computed with the chain rule, is one of the important notions in calculus. This paper describes a study conducted in Turkey showing that the chain rule was presented as a formula in function notation and/or Leibniz notation, without relating these formulas to life-related problem situations in the…
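For readers unfamiliar with the two notations mentioned, the rule can be written side by side, followed by a small "life-related" example of the kind the study advocates (the example is illustrative and not drawn from the paper):

```latex
% Chain rule in function notation and in Leibniz notation.
\[
  (f \circ g)'(x) = f'\bigl(g(x)\bigr)\, g'(x)
  \qquad\Longleftrightarrow\qquad
  \frac{dy}{dx} = \frac{dy}{du}\cdot\frac{du}{dx},
  \quad y = f(u),\; u = g(x).
\]
% Illustrative applied example: a circular oil slick of area A = \pi r^2
% whose radius grows as r(t) = 2t (meters after t minutes):
\[
  \frac{dA}{dt} = \frac{dA}{dr}\cdot\frac{dr}{dt}
                = (2\pi r)(2) = 4\pi r = 8\pi t .
\]
```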
A Non-technical User-Oriented Display Notation for XACML Conditions
NASA Astrophysics Data System (ADS)
Stepien, Bernard; Felty, Amy; Matwin, Stan
Ideally, access control to resources in complex IT systems ought to be handled by the business decision makers who own a given resource (e.g., the pay and benefits section of an organization should decide and manage the access rules to the payroll system). To make this happen, the security and database communities need to develop vendor-independent access management tools usable by decision makers rather than by technical personnel detached from a given business function. We have developed and implemented such a tool, based on XACML. XACML is an important emerging standard for managing complex access control applications. As a formal notation based on an XML schema representing the grammar of a given application, XACML is precise and unambiguous, but this very property puts it out of reach of non-technical users. We propose a new notation for displaying and editing XACML rules that is independent of XML, and we develop an editor for it. Our notation combines a tree representation of logical expressions with an accessible natural-language layer. Our early experience indicates that such rules can be grasped by non-technical users wishing to develop and control rules for accessing their own resources.
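The authors' actual notation is not reproduced in this record, but the core idea of pairing a tree of logical connectives with a natural-language layer can be sketched in a few lines of Python; the rule structure, attribute names, and wording below are assumptions for illustration only, not the published tool or the XACML schema.

```python
# Hypothetical sketch: render a nested access-control condition as an
# indented natural-language tree, in the spirit of the notation described.

def render(node, depth=0):
    pad = "  " * depth
    if node[0] in ("ALL OF", "ANY OF"):        # logical connective node
        head = ("all of the following:" if node[0] == "ALL OF"
                else "any of the following:")
        lines = [pad + head]
        lines += [render(child, depth + 1) for child in node[1]]
        return "\n".join(lines)
    attribute, operator, value = node          # leaf comparison node
    return f"{pad}{attribute} {operator} {value}"

# Example rule (hypothetical): payroll officers may access the payroll
# system after 09:00, or at any time for read-only requests.
rule = ("ALL OF", [
    ("role", "is", "payroll-officer"),
    ("ANY OF", [
        ("time", "is after", "09:00"),
        ("request-type", "is", "read-only"),
    ]),
])
print(render(rule))
```

The rendered output takes the indented, sentence-like form that the abstract argues is accessible to non-technical users.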
NASA Astrophysics Data System (ADS)
Secmen, Mustafa
2011-10-01
This paper introduces the performance of an electromagnetic target recognition method in the resonance scattering region, which combines the pseudospectrum Multiple Signal Classification (MUSIC) algorithm with the principal component analysis (PCA) technique. The aim of this method is to classify an "unknown" target as one of the "known" targets in an aspect-independent manner. The suggested method initially collects the late-time portion of noise-free time-scattered signals obtained from different reference aspect angles of known targets. Afterward, these signals are used to obtain MUSIC spectra in the real frequency domain, which offer super-resolution and resistance to noise. In the final step, the PCA technique is applied to these spectra in order to reduce dimensionality and obtain only one feature vector per known target. In the decision stage, a noise-free or noisy scattered signal of an unknown (test) target from an unknown aspect angle is first obtained. Subsequently, the MUSIC algorithm is applied to this test signal, and the resulting test vector is compared with the feature vectors of the known targets one by one; the highest correlation gives the type of the test target. The method is applied to wire models of airplane targets, and it is shown to tolerate considerable noise levels even though it uses only a few reference aspect angles. Moreover, the runtime of the method for a test target is sufficiently low to make it suitable for real-time applications.
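A rough Python sketch of this pipeline may clarify the steps. It makes several simplifying assumptions not stated in the abstract: 1-D late-time signals, a windowed covariance estimate for MUSIC, the first principal component as the single per-target feature, and plain correlation matching in the decision stage; all names, dimensions, and parameter values are illustrative, not taken from the paper.

```python
# Illustrative sketch of MUSIC-pseudospectrum features + PCA + correlation
# matching for aspect-independent target classification. Parameters are
# assumptions, not values from the paper.
import numpy as np

def music_spectrum(signal, freqs, fs, model_order=6, win=32):
    """Noise-subspace MUSIC pseudospectrum of a 1-D late-time signal."""
    # Overlapping-window data matrix for a sample covariance estimate.
    X = np.array([signal[i:i + win] for i in range(len(signal) - win)])
    R = X.T @ X / X.shape[0]
    _, eigvecs = np.linalg.eigh(R)             # eigenvalues ascending
    En = eigvecs[:, :win - model_order]        # noise subspace
    t = np.arange(win) / fs
    spec = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * t)         # frequency steering vector
        spec[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return spec

def target_feature(aspect_spectra):
    """PCA step: dominant principal component of one target's spectra."""
    X = np.asarray(aspect_spectra)
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]                               # one feature vector per target

def classify(test_signal, features, freqs, fs):
    """Match a test spectrum to the known target with highest correlation."""
    v = music_spectrum(test_signal, freqs, fs)
    scores = {name: abs(np.corrcoef(v, f)[0, 1])
              for name, f in features.items()}
    return max(scores, key=scores.get)
```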
Music Perception with Cochlear Implants: A Review
McDermott, Hugh J.
2004-01-01
The acceptance of cochlear implantation as an effective and safe treatment for deafness has increased steadily over the past quarter century. The earliest devices were the first implanted prostheses found to be successful in compensating partially for lost sensory function by direct electrical stimulation of nerves. Initially, the main intention was to provide limited auditory sensations to people with profound or total sensorineural hearing impairment in both ears. Although the first cochlear implants aimed to provide patients with little more than awareness of environmental sounds and some cues to assist visual speech-reading, the technology has advanced rapidly. Currently, most people with modern cochlear implant systems can understand speech using the device alone, at least in favorable listening conditions. In recent years, an increasing research effort has been directed towards implant users’ perception of nonspeech sounds, especially music. This paper reviews that research, discusses the published experimental results in terms of both psychophysical observations and device function, and concludes with some practical suggestions about how perception of music might be enhanced for implant recipients in the future. The most significant findings of past research are: (1) On average, implant users perceive rhythm about as well as listeners with normal hearing; (2) Even with technically sophisticated multiple-channel sound processors, recognition of melodies, especially without rhythmic or verbal cues, is poor, with performance at little better than chance levels for many implant users; (3) Perception of timbre, which is usually evaluated by experimental procedures that require subjects to identify musical instrument sounds, is generally unsatisfactory; (4) Implant users tend to rate the quality of musical sounds as less pleasant than listeners with normal hearing; (5) Auditory training programs that have been devised specifically to provide implant users with structured musical listening experience may improve the subjective acceptability of music that is heard through a prosthesis; (6) Pitch perception might be improved by designing innovative sound processors that use both temporal and spatial patterns of electric stimulation more effectively and precisely to overcome the inherent limitations of signal coding in existing implant systems; (7) For the growing population of implant recipients who have usable acoustic hearing, at least for low-frequency sounds, perception of music is likely to be much better with combined acoustic and electric stimulation than is typical for deaf people who rely solely on the hearing provided by their prostheses. PMID:15497033
ERIC Educational Resources Information Center
Ruder, Suzanne M.; Straumanis, Andrei R.
2009-01-01
A critical stage in the process of developing a conceptual understanding of organic chemistry is learning to use curved arrow notation. From this stems the ability to predict reaction products and mechanisms beyond the realm of memorization. Since evaluation (i.e., testing) is known to be a key driver of student learning, it follows that a new…
ERIC Educational Resources Information Center
Dania, Aspasia; Tyrovola, Vasiliki; Koutsouba, Maria
2017-01-01
The aim of this paper is to present the design and evaluate the impact of a Laban Notation-based method for Teaching Dance (LANTD) on novice dancers' performance, in the case of Greek traditional dance. In this research, traditional dance is conceived in its "second existence" as a kind of presentational activity performed outside its…
Goal Structured Notation in a Radiation Hardening Safety Case for COTS-Based Spacecraft
NASA Technical Reports Server (NTRS)
Witulski, Arthur; Austin, Rebekah; Reed, Robert; Karsai, Gabor; Mahadevan, Nag; Sierawski, Brian; Evans, John; LaBel, Ken
2016-01-01
A systematic approach is presented to constructing a radiation assurance case using Goal Structured Notation (GSN) for spacecraft containing COTS parts. The GSN paradigm is applied to an SRAM single-event upset experiment board designed to fly on a CubeSat in November 2016. Construction of a radiation assurance case without the use of hardened parts or extensive radiation testing is discussed.
M&S Journal. Volume 8, Issue 2, Summer 2013
2013-01-01
Modeling Notation (BPMN) [White and Miers, 2008], and the integration of the modeling notation with executable simulation engines [Anupindi 2005...activities and the supporting IT in BPMN and use that to compute MOE for a mission instance. Requirements for Modeling Missions To understand the...representation versus impact computation tradeoffs, we selected BPMN, along with some proposed extensions to represent information dependencies, as the