Science.gov

Sample records for order audiovisual learning

  1. Audiovisuals.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents information on a variety of audiovisual materials from government and nongovernment sources. Topics include aerodynamics and conditions of flight, airports, navigation, careers, history, medical factors, weather, films for classroom use, and others. (Author/SA)

  2. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  3. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    PubMed

    Yamamoto, Shinya; Miyazaki, Makoto; Iwano, Takayuki; Kitazawa, Shigeru

    2012-01-01

After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.

  4. The Role of Audiovisual Mass Media News in Language Learning

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

The present paper focuses on the role of audiovisual mass media news in language learning. In this regard, two important issues in selecting and preparing TV news for language learning are the content of the news and its linguistic difficulty. Content is characterized as either specialized or universal. Universal…

  5. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    ERIC Educational Resources Information Center

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  6. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    ERIC Educational Resources Information Center

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  7. Effect on Intended and Incidental Learning from the Use of Learning Objectives with an Audiovisual Presentation.

    ERIC Educational Resources Information Center

    Main, Robert

    This paper reports a controlled field experiment conducted to determine the effects and interaction of five independent variables with an audiovisual slide-tape program: presence of learning objectives, location of learning objectives, type of knowledge, sex of learner, and retention of learning. Participants were university students in a general…

  8. A Comparative Study of Organizational Characteristics Used in Learning Resources Centers and Traditionally Organized Library and Audio-Visual Service Facilities in Four Minnesota and Wisconsin Senior Colleges.

    ERIC Educational Resources Information Center

    Burlingame, Dwight Francis

    An investigation was made of the organizational characteristics of two college learning resource centers as compared with two traditionally organized college libraries with separate audiovisual units in order to determine the advantages of each organizational type. Interviews, observation, and examination of relevant documents were used to…

  9. Enhanced multisensory integration and motor reactivation after active motor learning of audiovisual associations.

    PubMed

    Butler, Andrew J; James, Thomas W; James, Karin Harman

    2011-11-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.

  10. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    ERIC Educational Resources Information Center

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  11. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    ERIC Educational Resources Information Center

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response on the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  12. Audiovisual synchrony perception for speech and music assessed using a temporal order judgment task.

    PubMed

    Vatakis, Argiro; Spence, Charles

    2006-01-23

This study investigated people's sensitivity to audiovisual asynchrony in briefly presented speech and musical videos. A series of speech (letters and syllables) and guitar and piano music (single and double notes) video clips were presented randomly at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which stream (auditory or visual) appeared to have been presented first. The accuracy of participants' TOJ performance (measured in terms of the just noticeable difference; JND) was significantly better for the speech than for either the guitar or piano music video clips, suggesting that people are more sensitive to asynchrony for speech than for music stimuli. The visual stream had to lead the auditory stream for the point of subjective simultaneity (PSS) to be achieved in the piano music clips, while auditory leads were typically required for the guitar music clips. The PSS values obtained for the speech stimuli varied substantially as a function of the particular speech sound presented. These results provide the first empirical evidence regarding people's sensitivity to audiovisual asynchrony for musical stimuli. Our results also demonstrate that people's sensitivity to asynchrony in speech stimuli is better than has been suggested on the basis of previous research using continuous speech streams as stimuli.
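The PSS and JND measures used in this record come from fitting a psychometric function to the proportion of "visual first" responses across SOAs. A minimal sketch of that analysis, assuming a cumulative-Gaussian fit and invented illustrative data (the SOA values and proportions below are not from the study):

```python
# Hedged sketch: estimating PSS and JND from temporal order judgment (TOJ)
# data collected with the method of constant stimuli. The data are invented
# for illustration only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(soa, pss, sigma):
    """Probability of a 'visual first' response as a function of SOA (ms)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# SOAs in ms (negative = auditory leads) and proportion of "visual first" responses
soas = np.array([-200, -133, -66, 0, 66, 133, 200], dtype=float)
p_visual_first = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.97])

(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_visual_first, p0=(0.0, 80.0))

# PSS: the SOA at which the two orders are reported equally often (50% point).
# JND: one common convention is half the 25%-75% interval of the fitted curve,
# which equals sigma * norm.ppf(0.75).
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

A smaller JND indicates better sensitivity to asynchrony, which is the sense in which the speech clips outperformed the music clips above.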

  13. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  14. Time-dependent changes in learning audiovisual associations: a single-trial fMRI study.

    PubMed

    Gonzalo, D; Shallice, T; Dolan, R

    2000-03-01

    Functional imaging studies of learning and memory have primarily focused on stimulus material presented within a single modality (see review by Gabrieli, 1998, Annu. Rev. Psychol. 49: 87-115). In the present study we investigated mechanisms for learning material presented in visual and auditory modalities, using single-trial functional magnetic resonance imaging. We evaluated time-dependent learning effects under two conditions involving presentation of consistent (repeatedly paired in the same combination) or inconsistent (items presented randomly paired) pairs. We also evaluated time-dependent changes for bimodal (auditory and visual) presentations relative to a condition in which auditory stimuli were repeatedly presented alone. Using a time by condition analysis to compare neural responses to consistent versus inconsistent audiovisual pairs, we found significant time-dependent learning effects in medial parietal and right dorsolateral prefrontal cortices. In contrast, time-dependent effects were seen in left angular gyrus, bilateral anterior cingulate gyrus, and occipital areas bilaterally. A comparison of paired (bimodal) versus unpaired (unimodal) conditions was associated with time-dependent changes in posterior hippocampal and superior frontal regions for both consistent and inconsistent pairs. The results provide evidence that associative learning for stimuli presented in different sensory modalities is supported by neural mechanisms similar to those described for other kinds of memory processes. The involvement of posterior hippocampus and superior frontal gyrus in bimodal learning for both consistent and inconsistent pairs supports a putative function for these regions in associative learning independent of sensory modality.

  15. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    PubMed

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes.

  16. Cataloging audiovisual materials: a new dimension.

    PubMed Central

    Knotts, M A; Mueller, D

    1975-01-01

A new, more comprehensive system for cataloging audiovisual materials is described. Existing audiovisual cataloging systems contain mostly descriptive information, publishers' or producers' summaries, and order information. This paper discusses the addition of measurable learning objectives to this standard information, thereby enabling the potential user to determine what can be learned from a particular audiovisual unit. The project included media in nursing only. A committee of faculty and students from the University of Alabama in Birmingham School of Nursing reviewed the materials. The system was field-tested at nursing schools throughout Alabama; the schools offered four different types of programs. The system and its sample product, the AVLOC catalog, were also evaluated by medical librarians, media specialists, and other nursing instructors throughout the United States. PMID:50106

  17. Online dissection audio-visual resources for human anatomy: Undergraduate medical students' usage and learning outcomes.

    PubMed

    Choi-Lundberg, Derek L; Cuellar, William A; Williams, Anne-Marie M

    2016-11-01

    In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection sessions, representing at most 58% ± 20 of assigned dissectors. Approximately 50% of students accessed all available DAVR by the end of semester, while 10% accessed none. Ninety percent of survey respondents (response rate 58%) generally agreed that DAVR improved their preparation for and learning from dissection when used. Of several learning resources, only DAVR usage had a significant positive correlation (P = 0.002) with feeling prepared for dissection. Results on cadaveric anatomy practical examination questions in year 2 (Y2) and year 3 (Y3) cohorts were 3.9% (P < 0.001, effect size d = -0.32) and 0.3% lower, respectively, with DAVR available compared to previous years. However, there were positive correlations between students' cadaveric anatomy question scores with the number and total time of DAVR viewed (Y2, r = 0.171, 0.090, P = 0.002, n.s., respectively; and Y3, r = 0.257, 0.253, both P < 0.001). Students accessing all DAVR scored 7.2% and 11.8% higher than those accessing none (Y2, P = 0.015, d = 0.48; and Y3, P = 0.005, d = 0.77, respectively). Further development and promotion of DAVR are needed to improve engagement and learning outcomes of more students. Anat Sci Educ 9: 545-554. © 2016 American Association of Anatomists.

  18. Use of High-Definition Audiovisual Technology in a Gross Anatomy Laboratory: Effect on Dental Students' Learning Outcomes and Satisfaction.

    PubMed

    Ahmad, Maha; Sleiman, Naama H; Thomas, Maureen; Kashani, Nahid; Ditmyer, Marcia M

    2016-02-01

    Laboratory cadaver dissection is essential for three-dimensional understanding of anatomical structures and variability, but there are many challenges to teaching gross anatomy in medical and dental schools, including a lack of available space and qualified anatomy faculty. The aim of this study was to determine the efficacy of high-definition audiovisual educational technology in the gross anatomy laboratory in improving dental students' learning outcomes and satisfaction. Exam scores were compared for two classes of first-year students at one U.S. dental school: 2012-13 (no audiovisual technology) and 2013-14 (audiovisual technology), and section exams were used to compare differences between semesters. Additionally, an online survey was used to assess the satisfaction of students who used the technology. All 284 first-year students in the two years (2012-13 N=144; 2013-14 N=140) participated in the exams. Of the 140 students in the 2013-14 class, 63 completed the survey (45% response rate). The results showed that those students who used the technology had higher scores on the laboratory exams than those who did not use it, and students in the winter semester scored higher (90.17±0.56) than in the fall semester (82.10±0.68). More than 87% of those surveyed strongly agreed or agreed that the audiovisual devices represented anatomical structures clearly in the gross anatomy laboratory. These students reported an improved experience in learning and understanding anatomical structures, found the laboratory to be less overwhelming, and said they were better able to follow dissection instructions and understand details of anatomical structures with the new technology. Based on these results, the study concluded that the ability to provide the students a clear view of anatomical structures and high-quality imaging had improved their learning experience.

  19. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    PubMed

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices, in LLI children after training.

  20. Problem Order Implications for Learning

    ERIC Educational Resources Information Center

    Li, Nan; Cohen, William W.; Koedinger, Kenneth R.

    2013-01-01

    The order of problems presented to students is an important variable that affects learning effectiveness. Previous studies have shown that solving problems in a blocked order, in which all problems of one type are completed before the student is switched to the next problem type, results in less effective performance than does solving the problems…

  1. The Audio-Visual Man.

    ERIC Educational Resources Information Center

    Babin, Pierre, Ed.

    A series of twelve essays discuss the use of audiovisuals in religious education. The essays are divided into three sections: one which draws on the ideas of Marshall McLuhan and other educators to explore the newest ideas about audiovisual language and faith, one that describes how to learn and use the new language of audio and visual images, and…

  2. The Planning and Management of Audio-Visual Media in Distance Learning Institutions. Final Report of an IIEP Workshop (Paris, France, September 30-October 3, 1980).

    ERIC Educational Resources Information Center

    Bates, A. W.

    Resulting from a 1980 workshop and a survey of 12 selected distance learning systems (or correspondence study programs), this paper had four aims: (1) to provide a framework to describe distance learning systems using audiovisual media and to locate the 12 surveyed institutions within that framework, (2) to identify common problem areas in the…

  3. The Impact of Audiovisual Feedback on the Learning Outcomes of a Remote and Virtual Laboratory Class

    ERIC Educational Resources Information Center

    Lindsay, E.; Good, M.

    2009-01-01

    Remote and virtual laboratory classes are an increasingly prevalent alternative to traditional hands-on laboratory experiences. One of the key issues with these modes of access is the provision of adequate audiovisual (AV) feedback to the user, which can be a complicated and resource-intensive challenge. This paper reports on a comparison of two…

  4. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiologist, 1976

    1976-01-01

    Reviewed is an eight module course in respiratory physiology that utilizes audiovisual cassettes and tapes. The topics include the lung, ventilation, blood flow, and breathing. It is rated excellent in content and quality. (SL)

  5. Adult Learning Strategies and Approaches (ALSA). Resources for Teachers of Adults. A Handbook of Practical Advice on Audio-Visual Aids and Educational Technology for Tutors and Organisers.

    ERIC Educational Resources Information Center

    Cummins, John; And Others

    This handbook is part of a British series of publications written for part-time tutors, volunteers, organizers, and trainers in the adult continuing education and training sectors. It offers practical advice on audiovisual aids and educational technology for tutors and organizers. The first chapter discusses how one learns. Chapter 2 addresses how…

  6. Audiovisual Materials.

    ERIC Educational Resources Information Center

    American Council on Education, Washington, DC. HEATH/Closer Look Resource Center.

    The fact sheet presents a suggested evaluation framework for use in previewing audiovisual materials, a list of selected resources, and an annotated list of films which were shown at the AHSSPPE '83 Media Fair as part of the national conference of the Association on Handicapped Student Service Programs in Postsecondary Education. Evaluation…

  7. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  8. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  9. Enhanced Multisensory Integration and Motor Reactivation after Active Motor Learning of Audiovisual Associations

    ERIC Educational Resources Information Center

    Butler, Andrew J.; James, Thomas W.; James, Karin Harman

    2011-01-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent…

  10. Audiovisual Interaction

    NASA Astrophysics Data System (ADS)

    Möttönen, Riikka; Sams, Mikko

    Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.

  11. Aspects of Audio-Visual Training in a College of Education, with Special Reference to Radio Learning and Teaching

    ERIC Educational Resources Information Center

    Spires, Norman S.

    1974-01-01

    Article comments on the present needs of teachers in training where audiovisual matters, including radio broadcasting, are concerned and outlines the way in which such training takes place at Southlands College and the objectives sought. (Author)

  12. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC nonsense words and nonsense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We…

  13. Learning with Hyperlinked Videos--Design Criteria and Efficient Strategies for Using Audiovisual Hypermedia

    ERIC Educational Resources Information Center

    Zahn, Carmen; Barquero, Beatriz; Schwan, Stephan

    2004-01-01

    In this article, we discuss the results of an experiment in which we studied two apparently conflicting classes of design principles for instructional hypervideos: (1) those principles derived from work on multimedia learning that emphasize spatio-temporal contiguity and (2) those originating from work on hypermedia learning that favour…

  14. Effects of Audiovisual Stimuli on Learning through Microcomputer-Based Class Presentation.

    ERIC Educational Resources Information Center

    Hativa, Nira; Reingold, Aliza

    1987-01-01

    Effectiveness of two versions of computer software used as an electronic blackboard to present geometric concepts to ninth grade students was compared. The experimental version incorporated color, animation, and nonverbal sounds as stimuli; the no-stimulus version was monochrome. Both immediate and delayed learning were significantly better for…

  15. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

Nonlinear dimensionality reduction is essential for the analysis and interpretation of high-dimensional data sets. In this manuscript, we propose a distance-order-preserving manifold learning algorithm that extends the basic mean-squared-error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original space and the low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms on synthetic datasets, using the commonly used residual variance metric and a proposed percentage-of-violated-distance-orders metric. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
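A percentage-of-violated-distance-orders metric of the kind the abstract proposes can be estimated by sampling pairs of point pairs and checking whether the embedding reverses their distance ranking. A minimal sketch (the function name and the Monte Carlo sampling scheme are my own; the paper's exact definition may differ):

```python
import numpy as np

def pct_violated_distance_orders(X_high, X_low, n_samples=2000, seed=0):
    """Monte Carlo estimate of the percentage of distance orders that a
    low-dimensional embedding violates: sample two point pairs and count
    how often their distance ranking flips between the two spaces."""
    rng = np.random.default_rng(seed)
    n = len(X_high)
    violated = 0
    for _ in range(n_samples):
        i, j, k, l = rng.choice(n, size=4, replace=False)
        diff_high = (np.linalg.norm(X_high[i] - X_high[j])
                     - np.linalg.norm(X_high[k] - X_high[l]))
        diff_low = (np.linalg.norm(X_low[i] - X_low[j])
                    - np.linalg.norm(X_low[k] - X_low[l]))
        if diff_high * diff_low < 0:  # ranking of the two distances flipped
            violated += 1
    return 100.0 * violated / n_samples
```

An embedding that merely rescales all distances scores 0 under this metric; projections that fold the manifold score higher.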

  16. Role of Audio and Audio-Visual Materials in Enhancing the Learning Process of Health Science Personnel.

    ERIC Educational Resources Information Center

    Cooper, William

    The material presented here is the result of a review of the Technical Development Plan of the National Library of Medicine, made with the object of describing the role of audiovisual materials in medical education, research and service, and particularly in the continuing education of physicians and allied health personnel. A historical background…

  17. Virtual Attendance: Analysis of an Audiovisual over IP System for Distance Learning in the Spanish Open University (UNED)

    ERIC Educational Resources Information Center

    Vazquez-Cano, Esteban; Fombona, Javier; Fernandez, Alberto

    2013-01-01

    This article analyzes a system of virtual attendance, called "AVIP" (AudioVisual over Internet Protocol), at the Spanish Open University (UNED) in Spain. UNED, the largest open university in Europe, is the pioneer in distance education in Spain. It currently has more than 300,000 students, 1,300 teachers, and 6,000 tutors all over the…

  18. Making and Using Audiovisuals.

    ERIC Educational Resources Information Center

    Kernan, Margaret; And Others

    1991-01-01

    Includes nine articles that discuss audiovisuals in junior and senior high school libraries. Highlights include skills that various media require and foster; teaching students how to make effective audiovisuals; film production; state media contests; library orientation videos; slide-tape shows; photographic skills; and the use of audiovisuals to…

  19. Audiovisual Speech Recalibration in Children

    ERIC Educational Resources Information Center

    van Linden, Sabine; Vroomen, Jean

    2008-01-01

    In order to examine whether children adjust their phonetic speech categories, children of two age groups, five-year-olds and eight-year-olds, were exposed to a video of a face saying /aba/ or /ada/ accompanied by an auditory ambiguous speech sound halfway between /b/ and /d/. The effect of exposure to these audiovisual stimuli was measured on…

  20. Application and Operation of Audiovisual Equipment in Education.

    ERIC Educational Resources Information Center

    Pula, Fred John

    Interest in audiovisual aids in education has been increased by the shortage of classrooms and good teachers and by the modern predisposition toward learning by visual concepts. Effective utilization of audiovisual materials and equipment depends most importantly, on adequate preparation of the teacher in operating equipment and in coordinating…

  1. Principles of Managing Audiovisual Materials and Equipment. Second Revised Edition.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Biomedical Library.

    This manual offers information on a wide variety of health-related audiovisual materials (AVs) in many formats: video, motion picture, slide, filmstrip, audiocassette, transparencies, microfilm, and computer assisted instruction. Intended for individuals who are just learning about audiovisual materials and equipment management, the manual covers…

  2. Use of Audiovisual Texts in University Education Process

    ERIC Educational Resources Information Center

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  3. Audiovisual Mass Media and Education. TTW 27/28.

    ERIC Educational Resources Information Center

    van Stapele, Peter, Ed.; Sutton, Clifford C., Ed.

    1989-01-01

    The 15 articles in this special issue focus on learning about the audiovisual mass media and education, especially television and film, in relation to various pedagogical and didactical questions. Individual articles are: (1) "Audiovisual Mass Media for Education in Pakistan: Problems and Prospects" (Ahmed Noor Kahn); (2) "The Role of the…

  4. Implicit learning of fifth- and sixth-order sequential probabilities.

    PubMed

    Remillard, Gilbert

    2010-10-01

    Serial reaction time (SRT) task studies have established that people can implicitly learn sequential contingencies as complex as fourth-order probabilities. The present study examined people's ability to learn fifth-order (Experiment 1) and sixth-order (Experiment 2) probabilities. Remarkably, people learned fifth- and sixth-order probabilities. This suggests that the implicit sequence learning mechanism can operate over a range of at least seven sequence elements.
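For illustration, a fifth-order contingency of the kind studied in SRT tasks can be generated by making each element predictable from the previous five with a fixed probability. This generator is hypothetical (the `probable_next` rule is invented for the sketch; the study's actual stimulus construction differs):

```python
import random

def make_sequence(length, order=5, n_elems=4, p=0.8, seed=42):
    """Generate an SRT-style stimulus stream with an nth-order
    contingency: after any run of `order` elements, one particular
    continuation occurs with probability p, and some other element
    otherwise."""
    rng = random.Random(seed)

    def probable_next(context):
        # Hypothetical deterministic rule mapping each context of
        # `order` elements to its probable continuation.
        return sum(context) % n_elems

    seq = [rng.randrange(n_elems) for _ in range(order)]
    while len(seq) < length:
        likely = probable_next(tuple(seq[-order:]))
        if rng.random() < p:
            seq.append(likely)
        else:
            seq.append(rng.choice([e for e in range(n_elems) if e != likely]))
    return seq
```

Implicit learning of the contingency would show up as faster responses to the probable continuations than to the improbable ones.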

  5. Audiovisual Equipment Self Instruction Manual. Third Edition.

    ERIC Educational Resources Information Center

    Oates, Stanton C.

    An audiovisual equipment manual provides both the means of learning how to operate equipment and information needed to adjust equipment that is not performing properly. The manual covers the basic principles of operation for filmstrip-slide projectors, motion picture projectors, opaque projectors, overhead projectors, portable screens, record…

  6. Audio/Visual Ratios in Commercial Filmstrips.

    ERIC Educational Resources Information Center

    Gulliford, Nancy L.

    Developed by the Westinghouse Electric Corporation, Video Audio Compressed (VIDAC) is a compressed time, variable rate, still picture television system. This technology made it possible for a centralized library of audiovisual materials to be transmitted over a television channel in very short periods of time. In order to establish specifications…

  7. Promoting Higher Order Thinking Skills Using Inquiry-Based Learning

    ERIC Educational Resources Information Center

Madhuri, G. V.; Kantamreddi, V. S. S. N.; Prakash Goteti, L. N. S.

    2012-01-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in…

  8. An audiovisual emotion recognition system

    NASA Astrophysics Data System (ADS)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

Human emotions can be expressed through many bio-symbols; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system was developed and is presented in this paper. The system is designed for real-time practice and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected when speech and video are fused together, owing to their synchronization. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also suggest that multimodule fused recognition will become the trend in emotion recognition in the future.

  9. Variable Affix Order: Grammar and Learning

    ERIC Educational Resources Information Center

    Ryan, Kevin M.

    2010-01-01

    While affix ordering often reflects general syntactic or semantic principles, it can also be arbitrary or variable. This article develops a theory of morpheme ordering based on local morphotactic restrictions encoded as weighted bigram constraints. I examine the formal properties of morphotactic systems, including arbitrariness, nontransitivity,…

  10. Time and Order Effects on Causal Learning

    ERIC Educational Resources Information Center

    Alvarado, Angelica; Jara, Elvia; Vila, Javier; Rosas, Juan M.

    2006-01-01

    Five experiments were conducted to explore trial order and retention interval effects upon causal predictive judgments. Experiment 1 found that participants show a strong effect of trial order when a stimulus was sequentially paired with two different outcomes compared to a condition where both outcomes were presented intermixed. Experiment 2…

  11. AUDIOVISUAL SERVICES CATALOG.

    ERIC Educational Resources Information Center

    Stockton Unified School District, CA.

    A CATALOG HAS BEEN PREPARED TO HELP TEACHERS SELECT AUDIOVISUAL MATERIALS WHICH MIGHT BE HELPFUL IN ELEMENTARY CLASSROOMS. INCLUDED ARE FILMSTRIPS, SLIDES, RECORDS, STUDY PRINTS, FILMS, TAPE RECORDINGS, AND SCIENCE EQUIPMENT. TEACHERS ARE REMINDED THAT THEY ARE NOT LIMITED TO USE OF THE SUGGESTED MATERIALS. APPROPRIATE GRADE LEVELS HAVE BEEN…

  12. Utilizing New Audiovisual Resources

    ERIC Educational Resources Information Center

    Miller, Glen

    1975-01-01

    The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids to classroom instruction at the high school level in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)

  13. [Appraisal of Audiovisual Materials].

    ERIC Educational Resources Information Center

    Johnson, Steve

    This document consists of four separate handouts all related to the appraisal of audiovisual (AV) materials: "How to Work with an Appraiser of AV Media: A Convenient Check List for Clients and Their Advisors," helps a client prepare for an appraisal, explaining what is necessary before the appraisal, the appraisal process and its costs,…

  14. Selected Mental Health Audiovisuals.

    ERIC Educational Resources Information Center

    National Inst. of Mental Health (DHEW), Rockville, MD.

    Presented are approximately 2,300 abstracts on audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…

  15. AUDIOVISUAL EQUIPMENT STANDARDS.

    ERIC Educational Resources Information Center

    PATTERSON, PIERCE E.; AND OTHERS

    RECOMMENDED STANDARDS FOR AUDIOVISUAL EQUIPMENT WERE PRESENTED SEPARATELY FOR GRADES KINDERGARTEN THROUGH SIX, AND FOR JUNIOR AND SENIOR HIGH SCHOOLS. THE ELEMENTARY SCHOOL EQUIPMENT CONSIDERED WAS THE FOLLOWING--CLASSROOM LIGHT CONTROL, MOTION PICTURE PROJECTOR WITH MOBILE STAND AND SPARE REELS, COMBINATION 2 INCH X 2 INCH SLIDE AND FILMSTRIP…

  16. Audiovisuals in Mental Health.

    ERIC Educational Resources Information Center

    Kenney, Brigitte L.

    1982-01-01

    Describes major uses of film, television, and video in mental health field and discusses problems in selection, acquisition, cataloging, indexing, storage, transfer, care of tapes, patients' rights, and copyright. A sample patient consent form for media recording, borrower's evaluation sheet, sources of audiovisuals and reviews, and 35 references…

  17. Order or Disorder? Impaired Hebb Learning in Dyslexia

    ERIC Educational Resources Information Center

    Szmalec, Arnaud; Loncke, Maaike; Page, Mike P. A.; Duyck, Wouter

    2011-01-01

    The present study offers an integrative account proposing that dyslexia and its various associated cognitive impairments reflect an underlying deficit in the long-term learning of serial-order information, here operationalized as Hebb repetition learning. In nondyslexic individuals, improved immediate serial recall is typically observed when one…

  18. Mobilising Concepts: Intellectual Technologies in the Ordering of Learning Societies

    ERIC Educational Resources Information Center

    Edwards, Richard

    2004-01-01

    Lifelong learning and a learning society are important planks of European Union (EU) policy. Drawing upon the work of Foucault and Rose, this article examines some of the intellectual technologies that are deployed in the ordering of these policy goals. It argues that research is one such technology and examines EU Framework Projects to explore…

  19. Researching Embodied Learning by Using Videographic Participation for Data Collection and Audiovisual Narratives for Dissemination--Illustrated by the Encounter between Two Acrobats

    ERIC Educational Resources Information Center

    Degerbøl, Stine; Nielsen, Charlotte Svendler

    2015-01-01

    The article concerns doing ethnography in education and it reflects upon using "videographic participation" for data collection and the concept of "audiovisual narratives" for dissemination, which is inspired by the idea of developing academic video. The article takes a narrative approach to qualitative research and presents a…

  20. Audiovisual Materials for Teaching Economics.

    ERIC Educational Resources Information Center

    Kronish, Sidney J.

    The Audiovisual Materials Evaluation Committee prepared this report to guide elementary and secondary teachers in their selection of supplementary economic education audiovisual materials. It updates a 1969 publication by adding 107 items to the original guide. Materials included in this report: (1) contain elements of economic analysis--facts,…

  1. Learning in higher order Boltzmann machines using linear response.

    PubMed

    Leisink, M A; Kappen, H J

    2000-04-01

We introduce an efficient method for learning and inference in higher order Boltzmann machines. The method is based on mean field theory with the linear response correction. We compute the correlations using the exact and the approximated method for a fully connected third order network of ten neurons. In addition, we compare the results of the exact and approximate learning algorithms. Finally, we use the presented method to solve the shifter problem. We conclude that the linear response approximation gives good results as long as the couplings are not too large.
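The mean-field step that such a method builds on can be sketched for a third-order network: each magnetization m_i satisfies m_i = tanh(theta_i + sum_{j,k} W[i,j,k] m_j m_k). A damped fixed-point iteration, as a minimal illustration (the paper's contribution, the linear response correction, would be applied on top of such a solution):

```python
import numpy as np

def mean_field_third_order(W, theta, n_iter=200, damping=0.5):
    """Damped fixed-point iteration for the mean-field equations of a
    third-order Boltzmann machine with couplings W[i, j, k] and biases
    theta[i]: m_i = tanh(theta_i + sum_{j,k} W[i,j,k] * m_j * m_k)."""
    m = np.zeros_like(theta, dtype=float)
    for _ in range(n_iter):
        field = theta + np.einsum('ijk,j,k->i', W, m, m)
        m = (1.0 - damping) * m + damping * np.tanh(field)
    return m
```

With all couplings zero this reduces to m = tanh(theta); the linear response correction would then estimate pair correlations from the derivatives of m_i with respect to theta_j.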

  2. Improving physician practice efficiency by learning lab test ordering pattern.

    PubMed

    Cai, Peng; Cao, Feng; Ni, Yuan; Shen, Weijia; Zheng, Tao

    2013-01-01

Electronic medical record (EMR) systems are widely used in physician practice. In China, physicians are under time pressure to provide care to many patients in a short period, so improving practice efficiency is a promising way to mitigate this predicament. During an encounter, ordering lab tests is one of the most frequent actions in an EMR system. In this paper, our motivation is to save physicians' time by providing a lab test ordering list that facilitates physician practice. To this end, we developed a weight-based multi-label classification framework that learns to order lab tests for the current encounter from historical EMR data. In particular, we propose to learn physician-specific lab test ordering patterns, as different physicians may practice differently on the same patient population. Experimental results on a real data set demonstrate that physician-specific models can outperform the baseline.
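As a rough illustration of physician-specific ordering patterns, one can rank lab tests by how often a given physician ordered them for encounters with the same diagnosis. This frequency-count baseline is my own sketch, not the paper's weight-based multi-label classifier:

```python
from collections import Counter, defaultdict

class LabTestRecommender:
    """Rank lab tests by how often a given physician ordered them for
    encounters with the same diagnosis code (a simple frequency-count
    sketch of physician-specific ordering patterns)."""

    def __init__(self):
        # (physician_id, dx_code) -> Counter of ordered test names
        self.counts = defaultdict(Counter)

    def fit(self, encounters):
        """encounters: iterable of (physician_id, dx_code, [tests ordered])."""
        for phys, dx, tests in encounters:
            self.counts[(phys, dx)].update(tests)
        return self

    def recommend(self, phys, dx, top_k=3):
        """Return the top_k tests this physician most often ordered for dx."""
        return [t for t, _ in self.counts[(phys, dx)].most_common(top_k)]
```

A learned multi-label classifier generalizes this idea by conditioning on richer encounter features rather than the diagnosis code alone.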

  3. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted more and more attention of researchers from different disciplines, which will significantly contribute to a new paradigm for human computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance the research in the affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables human to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  4. Promoting higher order thinking skills using inquiry-based learning

    NASA Astrophysics Data System (ADS)

Madhuri, G. V.; Kantamreddi, V. S. S. N.; Prakash Goteti, L. N. S.

    2012-05-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in chemistry. Laboratory exercises are designed based on Bloom's taxonomy and a just-in-time facilitation approach is used. A pre-laboratory discussion outlining the theory of the experiment and its relevance is carried out to enable the students to analyse real-life problems. The performance of the students is assessed based on their ability to perform the experiment, design new experiments and correlate practical utility of the course module with real life. The novelty of the present approach lies in the fact that the learning outcomes of the existing experiments are achieved through establishing a relationship with real-world problems.

  5. Second-Order Conditioning of Human Causal Learning

    ERIC Educational Resources Information Center

    Jara, Elvia; Vila, Javier; Maldonado, Antonio

    2006-01-01

    This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…

  6. School Building Design and Audio-Visual Resources.

    ERIC Educational Resources Information Center

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  7. Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony.

    PubMed

    Petrini, Karin; Dahl, Sofia; Rocchesso, Davide; Waadeland, Carl Haakon; Avanzini, Federico; Puce, Aina; Pollick, Frank E

    2009-09-01

    We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos x three accents x nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions x nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in experts' ability to detect asynchrony, especially for slower drumming tempos. In Experiment 2 an increase in sensitivity to asynchrony was found for incongruent stimuli; this increase, however, is attributable only to the novice group. Altogether the results indicated that through musical practice we learn to ignore variations in stimulus characteristics that otherwise would affect our multisensory integration processes.

  8. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    PubMed

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration.

  9. Computerized physician order entry: lessons learned from the trenches.

    PubMed

    Ramirez, Anne; Carlson, Debra; Estes, Carey

    2010-01-01

    Implementation of computer physician order entry (CPOE) demands planning, teamwork, and a steep learning curve. The nurse-driven team at the hospital unit level is pivotal to a successful launch. This article describes the experience of one NICU in planning, building, training, and implementing CPOE. Pitfalls and lessons learned are described. Communication between the nurse team at the unit and the clinical informatics team needs to be ongoing. Self-paced training with realistic practice scenarios and one-on-one "view then practice" modules help ease the transition. Many issues are not apparent until after CPOE has been implemented, and it is vital to have a mechanism to fix problems quickly. We describe the experience of "going live" and the reality of day-to-day order entry.

  10. Predicting perceptual learning from higher-order cortical processing.

    PubMed

    Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan

    2016-01-01

Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84 ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning regardless of whether the behavioral improvement is location specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change of the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making.

  11. Machine learning using a higher order correlation network

    SciTech Connect

    Lee, Y.C.; Doolen, G.; Chen, H.H.; Sun, G.Z.; Maxwell, T.; Lee, H.Y.

    1986-01-01

A high-order correlation tensor formalism for neural networks is described. The model can simulate autoassociative, heteroassociative, and multiassociative memory. For the autoassociative model, simulation results show a drastic increase in memory capacity and speed over standard Hopfield-like correlation matrix methods. The possibility of using multiassociative memory for a learning universal inference network is also discussed. 9 refs., 5 figs.
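The higher-order correlation memory can be sketched as a third-order Hopfield-style network: patterns are stored in a correlation tensor and recalled by an iterated sign update. A minimal numpy illustration (my own simplification of the formalism described above):

```python
import numpy as np

def store_patterns(patterns):
    """Build a third-order correlation tensor memory from +/-1 pattern
    vectors: W[i, j, k] = sum over patterns of x_i * x_j * x_k."""
    n = patterns.shape[1]
    W = np.zeros((n, n, n))
    for x in patterns:
        W += np.einsum('i,j,k->ijk', x, x, x)
    return W

def recall(W, probe, n_iter=5):
    """Iterated update s_i <- sign(sum_{j,k} W[i,j,k] s_j s_k),
    pulling a noisy probe toward the nearest stored pattern."""
    s = probe.copy().astype(float)
    for _ in range(n_iter):
        s = np.sign(np.einsum('ijk,j,k->i', W, s, s))
    return s
```

Because the recall field for a stored pattern x scales with the squared overlap (x . s)^2 rather than the overlap itself, the attraction grows much more sharply with pattern similarity than in a second-order Hopfield network, which is one intuition for the capacity increase the abstract reports.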

  12. Appreciation of learning environment and development of higher-order learning skills in a problem-based learning medical curriculum.

    PubMed

    Mala-Maung; Abdullah, Azman; Abas, Zoraini W

    2011-12-01

    This cross-sectional study determined the appreciation of the learning environment and development of higher-order learning skills among students attending the Medical Curriculum at the International Medical University, Malaysia which provides traditional and e-learning resources with an emphasis on problem based learning (PBL) and self-directed learning. Of the 708 participants, the majority preferred traditional to e-resources. Students who highly appreciated PBL demonstrated a higher appreciation of e-resources. Appreciation of PBL is positively and significantly correlated with higher-order learning skills, reflecting the inculcation of self-directed learning traits. Implementers must be sensitive to the progress of learners adapting to the higher education environment and innovations, and to address limitations as relevant.

  13. Audio-visual integration through the parallel visual pathways.

    PubMed

    Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Kincses, Zsigmond Tamás

    2015-10-22

Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to drive preferably the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with the white matter integrity as measured by diffusion tensor imaging. The psychophysiological data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction.

  14. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
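The core idea, learning audiovisual categories from distributional statistics with a GMM, can be sketched with a tiny two-component EM over joint (auditory cue, visual cue) vectors. This minimal implementation is my own illustration, not the authors' model:

```python
import numpy as np

def fit_gmm(X, n_iter=50):
    """Two-component EM for a diagonal-covariance Gaussian mixture over
    joint cue vectors. Returns mixing weights, means, and variances."""
    n, d = X.shape
    # Initialize the means at the first point and the point farthest
    # from it, which keeps them apart for well-separated categories.
    mu = X[[0, int(np.argmax(((X - X[0]) ** 2).sum(axis=1)))]].astype(float)
    var = np.ones((2, d))
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        logp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                        + np.log(2.0 * np.pi * var)).sum(axis=-1)
                + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp.T @ X) / nk[:, None]
        var = (resp.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, var
```

Fitting such a mixture to pairs of, say, voice-onset time and lip-closure cues would recover two phonological categories and implicit cue weights purely from the input distribution, which is the statistical learning mechanism the paper simulates.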

  15. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
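
    The GMM-based cue combination described in this abstract can be illustrated with a minimal sketch. All category names, cue values, and distribution parameters below are invented for illustration, and supervised moment estimates stand in for the paper's full EM-trained mixtures:

```python
import math
import random

random.seed(1)

def gauss_pdf(x, mu, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Synthetic training data: two phoneme categories, each emitting an auditory
# cue (e.g. voice onset time) and a visual cue (e.g. lip aperture).  The
# visual cue is deliberately noisier than the auditory cue.
def sample(cat, n=500):
    mu_a, mu_v = (0.0, 0.0) if cat == 0 else (1.0, 1.0)
    return [(random.gauss(mu_a, 0.2), random.gauss(mu_v, 0.5)) for _ in range(n)]

data = {0: sample(0), 1: sample(1)}

# "Learn" each category's per-cue distribution from the samples
# (moment estimates as a stand-in for EM over a Gaussian mixture).
def fit(samples, dim):
    xs = [s[dim] for s in samples]
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

params = {c: (fit(d, 0), fit(d, 1)) for c, d in data.items()}

# Cue combination: multiply per-cue likelihoods under a conditional-
# independence assumption; the noisier cue automatically carries less weight.
def classify(a, v):
    scores = {}
    for c, ((mu_a, var_a), (mu_v, var_v)) in params.items():
        scores[c] = gauss_pdf(a, mu_a, var_a) * gauss_pdf(v, mu_v, var_v)
    return max(scores, key=scores.get)

print(classify(0.1, 0.2))  # both cues near category 0
print(classify(0.9, 0.8))  # both cues near category 1
```

    Because likelihoods are multiplied, the cue with the smaller learned variance dominates the decision, which is the sense in which "cue weights" fall out of the distributional statistics rather than being set by hand.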

  16. Towards Postmodernist Television: INA's Audiovisual Magazine Programmes.

    ERIC Educational Resources Information Center

    Boyd-Bowman, Susan

    Over the last 10 years, French television's Institute of Audiovisual Communication (INA) has shifted from modernist to post-modernist practice in broadcasting in a series of innovative audiovisual magazine programs about communication, and in a series of longer "compilation" documentaries. The first of INA's audiovisual magazines,…

  17. Learn locally, think globally. Exemplar variability supports higher-order generalization and word learning.

    PubMed

    Perry, Lynn K; Samuelson, Larissa K; Malloy, Lisa M; Schiffer, Ryan N

    2010-12-01

    Research suggests that variability of exemplars supports successful object categorization; however, the scope of variability's support at the level of higher-order generalization remains unexplored. Using a longitudinal study, we examined the role of exemplar variability in first- and second-order generalization in the context of nominal-category learning at an early age. Sixteen 18-month-old children were taught 12 categories. Half of the children were taught with sets of highly similar exemplars; the other half were taught with sets of dissimilar, variable exemplars. Participants' learning and generalization of trained labels and their development of more general word-learning biases were tested. All children were found to have learned labels for trained exemplars, but children trained with variable exemplars generalized to novel exemplars of these categories, developed a discriminating word-learning bias generalizing labels of novel solid objects by shape and labels of nonsolid objects by material, and accelerated in vocabulary acquisition. These findings demonstrate that object variability leads to better abstraction of individual and global category organization, which increases learning outside the laboratory.

  18. Perceived synchrony for realistic and dynamic audiovisual events

    PubMed Central

    Eg, Ragnhild; Behne, Dawn M.

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli. PMID:26082738

  19. Audiovisual Resources for Instructional Development.

    ERIC Educational Resources Information Center

    Wilds, Thomas, Comp.; And Others

    Provided is a compilation of recently annotated audiovisual materials which present techniques, models, or other specific information that can aid in providing comprehensive services to the handicapped. Entries which include a brief description, name of distributor, technical information, and cost are presented alphabetically by title in eight…

  20. Preventive Maintenance Handbook. Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Educational Products Information Exchange Inst., Stony Brook, NY.

    The preventive maintenance system for audiovisual equipment presented in this handbook was designed by specialists so that it can be used by nonspecialists at school sites. The report offers specific advice on safety factors and also lists major problems that should not be handled by nonspecialists. Other aspects of a preventive maintenance system…

  1. Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This list of audiovisual materials for environmental education was prepared by the State of Minnesota, Department of Education, Division of Instruction, to accompany the pilot curriculum in environmental education. The majority of the materials listed are available from the University of Minnesota, or from state or federal agencies. The…

  2. Encouraging Higher-Order Thinking in General Chemistry by Scaffolding Student Learning Using Marzano's Taxonomy

    ERIC Educational Resources Information Center

    Toledo, Santiago; Dubas, Justin M.

    2016-01-01

    An emphasis on higher-order thinking within the curriculum has been a subject of interest in the chemical and STEM literature due to its ability to promote meaningful, transferable learning in students. The systematic use of learning taxonomies could be a practical way to scaffold student learning in order to achieve this goal. This work proposes…

  3. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm

    ERIC Educational Resources Information Center

    Uys, P. G.

    2009-01-01

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  4. Bilingualism affects audiovisual phoneme identification.

    PubMed

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience-i.e., the exposure to a double phonological code during childhood-affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically "deaf" and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  5. Bilingualism affects audiovisual phoneme identification

    PubMed Central

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience—i.e., the exposure to a double phonological code during childhood—affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically “deaf” and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech. PMID:25374551

  6. Improved Computer-Aided Instruction by the Use of Interfaced Random-Access Audio-Visual Equipment. Report on Research Project No. P/24/1.

    ERIC Educational Resources Information Center

    Bryce, C. F. A.; Stewart, A. M.

    A brief review of the characteristics of computer assisted instruction and the attributes of audiovisual media introduces this report on a project designed to improve the effectiveness of computer assisted learning through the incorporation of audiovisual materials. A discussion of the implications of research findings on the design and layout of…

  7. Learning in Order To Teach in Chicxulub Puerto, Yucatan, Mexico.

    ERIC Educational Resources Information Center

    Wilber, Cynthia J.

    2000-01-01

    Describes a community-based computer education program for the young people (and adults) of Chicxulub Puerto, a small fishing village in Yucatan, Mexico. Notes the children learn Maya, Spanish, and English in the context of learning computer and telecommunication skills. Concludes that access to the Internet has made a profound difference in a…

  8. Flipping & Clicking Your Way to Higher-Order Learning

    ERIC Educational Resources Information Center

    Garver, Michael S.; Roberts, Brian A.

    2013-01-01

    This innovative system of teaching and learning includes the implementation of two effective learning technologies: podcasting ("flipping") and classroom response systems ("clicking"). Students watch lectures in podcast format before coming to class, which allows the "entire" class period to be devoted to active…

  9. Language learning: how much evidence does a child need in order to learn to speak grammatically?

    PubMed

    Page, Karen M

    2004-07-01

    In order to learn grammar from a finite amount of evidence, children must begin with in-built expectations of what is grammatical. They clearly are not born, however, with fully developed grammars. Thus early language development involves refinement of the grammar hypothesis until a target grammar is learnt. Here we address the question of how much evidence is required for this refinement process, by considering two standard learning algorithms and a third algorithm which is presumably as efficient as a child for some value of its memory capacity. We reformulate this algorithm in the context of Chomsky's 'principles and parameters' and show that it is possible to bound the amount of evidence required to almost certainly speak almost grammatically.

  10. Optimizing the Learning Order of Chinese Characters Using a Novel Topological Sort Algorithm

    PubMed Central

    Wang, Jinzhao

    2016-01-01

    We present a novel algorithm for optimizing the order in which Chinese characters are learned, one that incorporates the benefits of learning them in order of usage frequency and in order of their hierarchal structural relationships. We show that our work outperforms previously published orders and algorithms. Our algorithm is applicable to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order. PMID:27706234
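
    The scheduling idea in this abstract (visit nodes in topological order, preferring high-importance nodes among those currently available) can be sketched as Kahn's algorithm driven by a max-heap. The characters, component decompositions, and frequencies below are invented toy data; the paper's actual algorithm weighs structural relationships more elaborately:

```python
import heapq

# Toy data: each character lists structural components that must be learned
# first, plus a usage frequency (values invented for illustration).
components = {
    "一": [],
    "木": [],
    "林": ["木", "木"],
    "森": ["木", "木", "木"],
}
frequency = {"一": 9.0, "木": 4.0, "林": 2.0, "森": 1.0}

def learning_order(components, frequency):
    # In-degree counts distinct unlearned components per character.
    indeg = {c: len(set(deps)) for c, deps in components.items()}
    dependents = {c: [] for c in components}
    for c, deps in components.items():
        for d in set(deps):
            dependents[d].append(c)
    # Max-heap on frequency: among learnable characters, take the most used.
    ready = [(-frequency[c], c) for c, d in indeg.items() if d == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, c = heapq.heappop(ready)
        order.append(c)
        for nxt in dependents[c]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                heapq.heappush(ready, (-frequency[nxt], nxt))
    return order

print(learning_order(components, frequency))
```

    Here 一 is learned first despite having no dependents because it is the most frequent available node, and 林 precedes 森 once 木 unlocks both, illustrating how frequency breaks ties within the topological constraints.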

  11. AUDIO-VISUAL INSTRUCTION, AN ADMINISTRATIVE HANDBOOK.

    ERIC Educational Resources Information Center

    Missouri State Dept. of Education, Jefferson City.

    This handbook was designed for use by school administrators in developing a total audiovisual (AV) program. Attention is given to the importance of audiovisual media to effective instruction, administrative personnel requirements for an AV program, budgeting for AV instruction, proper utilization of AV materials, selection of AV equipment and…

  12. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. There was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases), when conditions (a) and (b) were compared. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping.

  13. Solar Energy Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC. Office of Policy Development and Research.

    This directory presents an annotated bibliography of non-print information resources dealing with solar energy. The document is divided by type of audio-visual medium, including: (1) Films, (2) Slides and Filmstrips, and (3) Videotapes. A fourth section provides addresses and telephone numbers of audiovisual aids sources, and lists the page…

  14. A Guide for Audiovisual and Newer Media.

    ERIC Educational Resources Information Center

    Carr, William D.

    One of the principal values of audiovisual materials is that they permit the teacher to depart from verbal and printed symbolism, and at the same time to provide a wider real or vicarious experience for pupils. This booklet is designed to aid the teacher in using audiovisual material effectively. It covers visual displays, non-projected materials,…

  15. Catalog of Audiovisual Materials Related to Rehabilitation.

    ERIC Educational Resources Information Center

    Mann, Joe, Ed.; Henderson, Jim, Ed.

    An annotated listing of a variety of audiovisual formats on content related to the social-rehabilitation process is provided. The materials in the listing were selected from a collection of over 200 audiovisual catalogs. The major portion of the materials has not been screened. The materials are classified alphabetically by the following subject…

  16. Audiovisual Temporal Processing and Synchrony Perception in the Rat

    PubMed Central

    Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.

    2017-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately

  17. Is order the defining feature of magnitude representation? An ERP study on learning numerical magnitude and spatial order of artificial symbols.

    PubMed

    Zhao, Hui; Chen, Chuansheng; Zhang, Hongchuan; Zhou, Xinlin; Mei, Leilei; Chen, Chunhui; Chen, Lan; Cao, Zhongyu; Dong, Qi

    2012-01-01

    Using an artificial-number learning paradigm and the ERP technique, the present study investigated neural mechanisms involved in the learning of magnitude and spatial order. 54 college students were divided into 2 groups matched in age, gender, and school major. One group was asked to learn the associations between magnitude (dot patterns) and the meaningless Gibson symbols, and the other group learned the associations between spatial order (horizontal positions on the screen) and the same set of symbols. Results revealed differentiated neural mechanisms underlying the learning processes of symbolic magnitude and spatial order. Compared to magnitude learning, spatial-order learning showed a later and reversed distance effect. Furthermore, an analysis of the order-priming effect showed that order was not inherent to the learning of magnitude. Results of this study showed a dissociation between magnitude and order, which supports the numerosity code hypothesis of mental representations of magnitude.

  18. Audio-visual gender recognition

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Xu, Xun; Huang, Thomas S.

    2007-11-01

    Combining different modalities for pattern recognition tasks is a very promising field. Humans routinely fuse information from different modalities to recognize objects and perform inference. Audio-visual gender recognition is one of the most common tasks in human social communication. Humans can identify gender by facial appearance, by speech, and also by body gait. Indeed, human gender recognition is a multi-modal data acquisition and processing procedure. However, computational multimodal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition, exploring the improvement gained by combining different modalities.

  19. Conceptual Similarity Promotes Generalization of Higher Order Fear Learning

    ERIC Educational Resources Information Center

    Dunsmoor, Joseph E.; White, Allison J.; LaBar, Kevin S.

    2011-01-01

    We tested the hypothesis that conceptual similarity promotes generalization of conditioned fear. Using a sensory preconditioning procedure, three groups of subjects learned an association between two cues that were conceptually similar, unrelated, or mismatched. Next, one of the cues was paired with a shock. The other cue was then reintroduced to…

  20. High-order feature-based mixture models of classification learning predict individual learning curves and enable personalized teaching.

    PubMed

    Cohen, Yarden; Schneidman, Elad

    2013-01-08

    Pattern classification learning tasks are commonly used to explore learning strategies in human subjects. The universal and individual traits of learning such tasks reflect our cognitive abilities and have been of interest both psychophysically and clinically. From a computational perspective, these tasks are hard, because the number of patterns and rules one could consider even in simple cases is exponentially large. Thus, when we learn to classify we must use simplifying assumptions and generalize. Studies of human behavior in probabilistic learning tasks have focused on rules in which pattern cues are independent, and also described individual behavior in terms of simple, single-cue, feature-based models. Here, we conducted psychophysical experiments in which people learned to classify binary sequences according to deterministic rules of different complexity, including high-order, multicue-dependent rules. We show that human performance on such tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of features captures individual learning behavior surprisingly well. These models reflect the important role of subjects' priors, and their reliance on high-order features even when learning a low-order rule. Further, we show that these models predict future individual answers to a high degree of accuracy. We then use these models to build personally optimized teaching sessions and boost learning.

  1. High-order feature-based mixture models of classification learning predict individual learning curves and enable personalized teaching

    PubMed Central

    Cohen, Yarden; Schneidman, Elad

    2013-01-01

    Pattern classification learning tasks are commonly used to explore learning strategies in human subjects. The universal and individual traits of learning such tasks reflect our cognitive abilities and have been of interest both psychophysically and clinically. From a computational perspective, these tasks are hard, because the number of patterns and rules one could consider even in simple cases is exponentially large. Thus, when we learn to classify we must use simplifying assumptions and generalize. Studies of human behavior in probabilistic learning tasks have focused on rules in which pattern cues are independent, and also described individual behavior in terms of simple, single-cue, feature-based models. Here, we conducted psychophysical experiments in which people learned to classify binary sequences according to deterministic rules of different complexity, including high-order, multicue-dependent rules. We show that human performance on such tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of features captures individual learning behavior surprisingly well. These models reflect the important role of subjects’ priors, and their reliance on high-order features even when learning a low-order rule. Further, we show that these models predict future individual answers to a high degree of accuracy. We then use these models to build personally optimized teaching sessions and boost learning. PMID:23269833
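
    The "mixture of features" idea from this abstract can be sketched with a weighted-majority-style learner. The feature pool (single-bit and pairwise-XOR predictors), the multiplicative update, and the hidden rule below are invented stand-ins for the paper's reinforcement-learning-like models:

```python
import itertools
import random

random.seed(0)

N = 4
true_rule = lambda p: p[0] ^ p[2]  # hidden high-order (two-cue) rule

# Feature pool: low-order (single-bit) and high-order (pairwise-XOR)
# predictors of the binary label.
features = [("bit", i) for i in range(N)] + \
           [("xor", ij) for ij in itertools.combinations(range(N), 2)]

def apply_feature(f, p):
    kind, arg = f
    return p[arg] if kind == "bit" else p[arg[0]] ^ p[arg[1]]

weights = [1.0] * len(features)

def predict(p):
    # Weighted vote of all features in the mixture.
    s = sum(w * (1 if apply_feature(f, p) else -1)
            for f, w in zip(features, weights))
    return 1 if s > 0 else 0

# Multiplicative (weighted-majority style) update: features that agree
# with the feedback are up-weighted, the rest down-weighted.
for _ in range(200):
    p = [random.randint(0, 1) for _ in range(N)]
    label = true_rule(p)
    for k, f in enumerate(features):
        weights[k] *= 1.1 if apply_feature(f, p) == label else 0.9

best = max(zip(features, weights), key=lambda fw: fw[1])[0]
print(best)  # the XOR feature over bits 0 and 2 should dominate
```

    After training, the high-order XOR feature carries almost all of the weight because it alone agrees with the feedback on every trial, echoing the paper's finding that subjects rely on high-order features even when a low-order rule would suffice elsewhere.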

  2. Attention rivalry under irrelevant audiovisual stimulation.

    PubMed

    Feng, Ting; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2008-06-13

    Audiovisual integration is known to enhance perception; nevertheless, another fundamental audiovisual interaction, attention rivalry, has not been well investigated. This paper studied attention rivalry under irrelevant audiovisual stimulation using event-related potential (ERP) and behavioral analysis, and tested the existence of a vision-dominated rivalry model. Participants had to respond to the target in a bi- or unimodal audiovisual stimulation paradigm. The enhanced amplitude of central P300 under the visual-target bimodal stimulus indicated that vision demanded more cognitive resources, and the significant amplitude of frontal P200 under the bimodal stimulus with a non-target auditory stimulus implied that the brain largely suppressed the processing of the non-target auditory information. ERP results, together with analysis of the behavioral data and the subtraction waves, indicated a vision-dominated attention rivalry model involved in audiovisual interaction. Furthermore, the latencies of the P200 and P300 components implied that audiovisual attention rivalry occurred within the first 300 ms after stimulus onset, i.e. significant differences were found in P200 latencies among the three target bimodal stimuli, while no difference existed in P300 latencies. Attention shifting and re-directing might be the cause of such early audiovisual rivalry.

  3. A quantitative dynamical systems approach to differential learning: self-organization principle and order parameter equations.

    PubMed

    Frank, T D; Michelbrink, M; Beckmann, H; Schöllhorn, W I

    2008-01-01

    Differential learning is a learning concept that assists subjects to find individual optimal performance patterns for given complex motor skills. To this end, training is provided in terms of noisy training sessions that feature a large variety of between-exercises differences. In several previous experimental studies it has been shown that performance improvement due to differential learning is higher than due to traditional learning and performance improvement due to differential learning occurs even during post-training periods. In this study we develop a quantitative dynamical systems approach to differential learning. Accordingly, differential learning is regarded as a self-organized process that results in the emergence of subject- and context-dependent attractors. These attractors emerge due to noise-induced bifurcations involving order parameters in terms of learning rates. In contrast, traditional learning is regarded as an externally driven process that results in the emergence of environmentally specified attractors. Performance improvement during post-training periods is explained as an hysteresis effect. An order parameter equation for differential learning involving a fourth-order polynomial potential is discussed explicitly. New predictions concerning the relationship between traditional and differential learning are derived.
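
    The kind of order parameter equation the abstract alludes to can be sketched in the standard synergetics form with a fourth-order polynomial potential (a generic illustration, not the paper's exact equation):

```latex
\dot{\xi} = -\frac{\partial V}{\partial \xi} + \sqrt{Q}\,\Gamma(t),
\qquad
V(\xi) = -\frac{a}{2}\,\xi^{2} + \frac{b}{4}\,\xi^{4}
```

    Here $\xi$ is an order parameter (e.g. a learning rate), $\Gamma(t)$ a fluctuating force of strength $Q$, and $a$, $b$ control parameters. For $a > 0$ the potential develops two minima at $\xi = \pm\sqrt{a/b}$, so noise can drive the system across the bifurcation into a subject-dependent attractor; the attractor's persistence after training ends corresponds to the hysteresis effect described in the abstract.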

  4. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    ERIC Educational Resources Information Center

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  5. Audiovisual Materials and Techniques for Teaching Foreign Languages: Recent Trends and Activities.

    ERIC Educational Resources Information Center

    Parks, Carolyn

    Recent experimentation with audio-visual (A-V) materials has provided insight into the language learning process. Researchers and teachers alike have recognized the importance of using A-V materials to achieve goals related to meaningful and relevant communication, retention and recall of language items, non-verbal aspects of communication, and…

  6. No rapid audiovisual recalibration in adults on the autism spectrum

    PubMed Central

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as those of the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  7. Role of audiovisual synchrony in driving head orienting responses.

    PubMed

    Ho, Cristy; Gray, Rob; Spence, Charles

    2013-06-01

    Many studies now suggest that optimal multisensory integration sometimes occurs under conditions where auditory and visual stimuli are presented asynchronously (i.e. at asynchronies of 100 ms or more). Such observations lead to the suggestion that participants' speeded orienting responses might be enhanced following the presentation of asynchronous (as compared to synchronous) peripheral audiovisual spatial cues. Here, we report a series of three experiments designed to investigate this issue. Upon establishing the effectiveness of bimodal cuing over the best of its unimodal components (Experiment 1), participants had to make speeded head-turning or steering (wheel-turning) responses toward the cued direction (Experiment 2), or an incompatible response away from the cue (Experiment 3), in response to random peripheral audiovisual stimuli presented at stimulus onset asynchronies ranging from -100 to 100 ms. Race model inequality analysis of the results (Experiment 1) revealed different mechanisms underlying the observed multisensory facilitation of participants' head-turning versus steering responses. In Experiments 2 and 3, the synchronous presentation of the component auditory and visual cues gave rise to the largest facilitation of participants' response latencies. Intriguingly, when the participants had to subjectively judge the simultaneity of the audiovisual stimuli, the point of subjective simultaneity occurred when the auditory stimulus lagged behind the visual stimulus by 22 ms. Taken together, these results appear to suggest that the maximally beneficial behavioural (head and manual) orienting responses resulting from peripherally presented audiovisual stimuli occur when the component signals are presented in synchrony. These findings suggest that while the brain uses precise temporal synchrony in order to control its orienting responses, the system that the human brain uses to consciously judge synchrony appears to be less fine tuned.
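The race model inequality analysis mentioned in this abstract can be sketched briefly. The idea (Miller's bound) is that if bimodal responses merely reflect a race between two unimodal processes, the cumulative distribution of bimodal reaction times can never exceed the sum of the two unimodal distributions; a violation indicates genuine multisensory integration. The reaction times below are illustrative values, not the study's data:

```python
# Sketch of a race model inequality check (hypothetical data): the race
# model is violated at time t whenever F_AV(t) > F_A(t) + F_V(t), where
# F_X is the empirical CDF of reaction times in condition X.

def ecdf(samples, t):
    """Empirical CDF: fraction of samples at or below t."""
    return sum(1 for s in samples if s <= t) / len(samples)

def race_model_violations(rt_audio, rt_visual, rt_bimodal, times):
    """Return the times at which the bimodal CDF exceeds the race-model bound."""
    violations = []
    for t in times:
        bound = min(1.0, ecdf(rt_audio, t) + ecdf(rt_visual, t))
        if ecdf(rt_bimodal, t) > bound:
            violations.append(t)
    return violations

# Hypothetical reaction times (ms): fast bimodal responses violate the bound.
rt_a = [300, 320, 350, 380, 400]
rt_v = [310, 330, 360, 390, 410]
rt_av = [240, 250, 260, 300, 340]
print(race_model_violations(rt_a, rt_v, rt_av, times=range(200, 451, 50)))
# → [250, 300]
```

With these made-up samples the bound is exceeded early in the distribution, which is the typical signature of integration rather than a race.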

  8. Teacher Change in a Changing Moral Order: Learning from Durkheim

    ERIC Educational Resources Information Center

    Slonimsky, Lynne

    2016-01-01

    This paper explores a curriculum paradox that may arise for teachers in post-authoritarian regimes if a radically new curriculum, designed to prepare learners for democratic citizenship, requires them to be autonomous professionals. If teachers were originally schooled and trained under the old regime to follow the orders inscribed in syllabi and…

  9. Audio-visual simultaneity judgments.

    PubMed

    Zampini, Massimiliano; Guest, Steve; Shore, David I; Spence, Charles

    2005-04-01

    The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.

  10. An Audio-Visual Approach to Training

    ERIC Educational Resources Information Center

    Hearnshaw, Trevor

    1977-01-01

    Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)

  11. Toddlers infer higher-order relational principles in causal learning.

    PubMed

    Walker, Caren M; Gopnik, Alison

    2014-01-01

    Children make inductive inferences about the causal properties of individual objects from a very young age. When can they infer higher-order relational properties? In three experiments, we examined 18- to 30-month-olds' relational inferences in a causal task. Results suggest that at this age, children are able to infer a higher-order relational causal principle from just a few observations and use this inference to guide their own subsequent actions and bring about a novel causal outcome. Moreover, the children passed a revised version of the relational match-to-sample task that has proven very difficult for nonhuman primates. The findings are considered in light of their implications for understanding the nature of relational and causal reasoning, and their evolutionary origins.

  12. U.S. Government Films, 1971 Supplement; A Catalog of Audiovisual Materials for Rent and Sale by the National Audiovisual Center.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.

The first edition of the National Audiovisual Center sales catalog (LI 003875) is updated by this supplement. Changes in price and order number, as well as deletions from the 1969 edition, are noted in this 1971 version. Purchase and rental information for the sound films and silent filmstrips is provided. The broad subject categories are:…

  13. Beyond Course Availability: An Investigation into Order and Concurrency Effects of Undergraduate Programming Courses on Learning.

    ERIC Educational Resources Information Center

    Urbaczewski, Andrew; Urbaczewski, Lise

    The objective of this study was to find the answers to two primary research questions: "Do students learn programming languages better when they are offered in a particular order, such as 4th generation languages before 3rd generation languages?"; and "Do students learn programming languages better when they are taken in separate semesters as…

  14. Strategic Learning in Youth with Traumatic Brain Injury: Evidence for Stall in Higher-Order Cognition

    ERIC Educational Resources Information Center

    Gamino, Jacquelyn F.; Chapman, Sandra B.; Cook, Lori G.

    2009-01-01

    Little is known about strategic learning ability in preteens and adolescents with traumatic brain injury (TBI). Strategic learning is the ability to combine and synthesize details to form abstracted gist-based meanings, a higher-order cognitive skill associated with frontal lobe functions and higher classroom performance. Summarization tasks were…

  15. Multilabel image classification via high-order label correlation driven active learning.

    PubMed

    Zhang, Bang; Wang, Yang; Chen, Fang

    2014-03-01

Supervised machine learning techniques have been applied to multilabel image classification problems with tremendous success. Despite disparate learning mechanisms, their performances heavily rely on the quality of training images. However, the acquisition of training images requires significant efforts from human annotators. This hinders the applications of supervised learning techniques to large scale problems. In this paper, we propose a high-order label correlation driven active learning (HoAL) approach that allows the iterative learning algorithm itself to select the informative example-label pairs from which it learns so as to learn an accurate classifier with less annotation effort. Four crucial issues are considered by the proposed HoAL: 1) unlike binary cases, the selection granularity for multilabel active learning needs to be refined from example to example-label pair; 2) different labels are seldom independent, and label correlations provide critical information for efficient learning; 3) in addition to pair-wise label correlations, high-order label correlations are also informative for multilabel active learning; and 4) since the number of label combinations increases exponentially with respect to the number of labels, an efficient mining method is required to discover informative label correlations. The proposed approach is tested on public data sets, and the empirical results demonstrate its effectiveness.
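The example-label pair granularity described in point 1) can be illustrated with a generic uncertainty-sampling criterion. This is a sketch of the selection idea only, not the HoAL criterion itself, and the probability matrix is hypothetical:

```python
# Generic sketch of multilabel active learning at example-label granularity
# (illustrative selection rule, not the paper's HoAL criterion): query the
# (example, label) pair whose current predicted probability is closest to
# 0.5, i.e. the single pair the classifier is least certain about.

def most_informative_pair(prob):
    """prob[i][j] = predicted probability that example i carries label j."""
    best, best_score = None, -1.0
    for i, row in enumerate(prob):
        for j, p in enumerate(row):
            score = 1.0 - abs(p - 0.5) * 2  # 1 at p=0.5, 0 at p in {0, 1}
            if score > best_score:
                best, best_score = (i, j), score
    return best

probs = [
    [0.95, 0.10, 0.45],   # example 0: fairly uncertain about label 2
    [0.70, 0.52, 0.05],   # example 1: most uncertain about label 1
]
print(most_informative_pair(probs))  # → (1, 1)
```

Rather than asking an annotator to label a whole image, only the single most ambiguous (image, label) question is posed, which is what makes the finer granularity save annotation effort.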

16. Summary Report on the Lake Okoboji Audiovisual Leadership Conference (10th, Milford, Iowa, August 16-20, 1964).

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

This is a series of working papers aimed at audiovisual specialists. The keynote address, committee reports, and conference summary concern learning space and educational media in instructional programs. Reports deal with a behavioral analysis approach to curriculum and space considerations, sources of information and research on learning space,…

  17. Pure perceptual-based learning of second-, third-, and fourth-order sequential probabilities.

    PubMed

    Remillard, Gilbert

    2011-07-01

    There is evidence that sequence learning in the traditional serial reaction time task (SRTT), where target location is the response dimension, and sequence learning in the perceptual SRTT, where target location is not the response dimension, are handled by different mechanisms. The ability of the latter mechanism to learn sequential contingencies that can be learned by the former mechanism was examined. Prior research has established that people can learn second-, third-, and fourth-order probabilities in the traditional SRTT. The present study reveals that people can learn such probabilities in the perceptual SRTT. This suggests that the two mechanisms may have similar architectures. A possible neural basis of the two mechanisms is discussed.
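The second-order probabilities this abstract refers to can be made concrete with a toy sequence generator. In an SRTT, an nth-order contingency means the next target location is predictable from the previous n locations. The locations, transition pair, and probabilities below are illustrative choices, not the study's materials:

```python
import random

# Illustrative second-order contingency for an SRTT-style stream: the
# probability of the next target location depends on the previous TWO
# locations. Here, after the pair (0, 1) the next location is 2 with
# probability 0.8; otherwise it is drawn uniformly from four locations.

def make_sequence(n, seed=0):
    rng = random.Random(seed)
    locations = [rng.randrange(4), rng.randrange(4)]
    for _ in range(n - 2):
        if tuple(locations[-2:]) == (0, 1) and rng.random() < 0.8:
            nxt = 2  # the probable continuation that participants can learn
        else:
            nxt = rng.randrange(4)
        locations.append(nxt)
    return locations

seq = make_sequence(10000)
# Empirically, how often is the pair (0, 1) followed by location 2?
pairs = [seq[i + 2] for i in range(len(seq) - 2) if (seq[i], seq[i + 1]) == (0, 1)]
frac = pairs.count(2) / len(pairs)
print(round(frac, 2))  # close to 0.8 + 0.2 * 0.25 = 0.85
```

A participant who has learned the contingency responds faster to location 2 after seeing 0 then 1; third- and fourth-order probabilities simply condition on the previous three or four locations instead.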

  18. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    PubMed

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.

  19. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  20. An Investigation of Higher-Order Thinking Skills in Smaller Learning Community Social Studies Classrooms

    ERIC Educational Resources Information Center

    Fischer, Christopher; Bol, Linda; Pribesh, Shana

    2011-01-01

    This study investigated the extent to which higher-order thinking skills are promoted in social studies classes in high schools that are implementing smaller learning communities (SLCs). Data collection in this mixed-methods study included classroom observations and in-depth interviews. Findings indicated that higher-order thinking was rarely…

  1. Govt. Pubs: U.S. Government Produced Audiovisual Materials.

    ERIC Educational Resources Information Center

    Korman, Richard

    1981-01-01

    Describes the availability of United States government-produced audiovisual materials and discusses two audiovisual clearinghouses--the National Audiovisual Center (NAC) and the National Library of Medicine (NLM). Finding aids made available by NAC, NLM, and other government agencies are mentioned. NAC and the U.S. Government Printing Office…

  2. Guidelines for Audio-Visual Services in Academic Libraries.

    ERIC Educational Resources Information Center

    Association of Coll. and Research Libraries, Chicago, IL.

    The purpose of these guidelines, prepared by the Audio-Visual Committee of the Association of College and Research Libraries, is to supply basic assistance to those academic libraries that will assume all or a major portion of an audio-visual program. They attempt to assist librarians to recognize and develop their audio-visual responsibilities…

  3. The Effects of an Audio-Visual Training Program in Dyslexic Children

    ERIC Educational Resources Information Center

    Magnan, Annie; Ecalle, Jean; Veuillet, Evelyne; Collet, Lionel

    2004-01-01

    A research project was conducted in order to investigate the usefulness of intensive audio-visual training administered to children with dyslexia involving daily voicing exercises. In this study, the children received such voicing training (experimental group) for 30 min a day, 4 days a week, over 5 weeks. They were assessed on a reading task…

  4. Learning Partnership: Students and Faculty Learning Together to Facilitate Reflection and Higher Order Thinking in a Blended Course

    ERIC Educational Resources Information Center

    McDonald, Paige L.; Straker, Howard O.; Schlumpf, Karen S.; Plack, Margaret M.

    2014-01-01

    This article discusses a learning partnership among faculty and students to influence reflective practice in a blended course. Faculty redesigned a traditional face-to-face (FTF) introductory physician assistant course into a blended course to promote increased reflection and higher order thinking. Early student reflective writing suggested a need…

  5. Serial-order learning impairment and hypersensitivity-to-interference in dyscalculia.

    PubMed

    De Visscher, Alice; Szmalec, Arnaud; Van Der Linden, Lize; Noël, Marie-Pascale

    2015-11-01

In the context of heterogeneity, the different profiles of dyscalculia are still hypothetical. This study aims to link features of mathematical difficulties to certain potential etiologies. First, we wanted to test the hypothesis of a serial-order learning deficit in adults with dyscalculia. For this purpose we used a Hebb repetition learning task. Second, we wanted to explore a recent hypothesis according to which hypersensitivity-to-interference hampers the storage of arithmetic facts and leads to a particular profile of dyscalculia. We therefore used interfering and non-interfering repeated sequences in the Hebb paradigm. A final test was used to assess the memory trace of the non-interfering sequence and the capacity to manipulate it. In line with our predictions, we observed that people with dyscalculia who show good conceptual knowledge in mathematics but impaired arithmetic fluency suffer from increased sensitivity-to-interference compared to controls. Second, people with dyscalculia who show a deficit in a global mathematical test suffer from a serial-order learning deficit characterized by slow learning and quick degradation of the memory trace of the repeated sequence. A serial-order learning impairment could be one of the explanations for a basic numerical deficit, since it is necessary for the number-word sequence acquisition. Among the different profiles of dyscalculia, this study provides new evidence and refinement for two particular profiles.
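The Hebb repetition paradigm used here (and in the Bogaerts et al. record below) has a simple structure that can be sketched in a few lines. The list length, repetition spacing, and seed are illustrative parameters, not those of the study:

```python
import random

# Minimal sketch of a Hebb repetition design (illustrative parameters):
# one fixed "Hebb" sequence recurs on every third trial, while the
# intervening "filler" trials use fresh random orders of the same items.
# Serial-order learning shows up as recall improving for the recurring
# sequence relative to the fillers.

def hebb_trials(n_trials, items=tuple(range(1, 10)), repeat_every=3, seed=1):
    rng = random.Random(seed)
    hebb = list(items)
    rng.shuffle(hebb)          # the fixed repeating sequence
    trials = []
    for t in range(n_trials):
        if t % repeat_every == repeat_every - 1:
            trials.append(("hebb", list(hebb)))
        else:
            filler = list(items)
            rng.shuffle(filler)
            trials.append(("filler", filler))
    return trials

trials = hebb_trials(9)
hebb_seqs = [seq for kind, seq in trials if kind == "hebb"]
print(len(hebb_seqs), all(s == hebb_seqs[0] for s in hebb_seqs))  # → 3 True
```

The interference manipulation described in the abstract amounts to using a second repeating sequence that shares items (or item pairs) with the first, so that their order information conflicts.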

  6. Dissociating Verbal and Nonverbal Audiovisual Object Processing

    ERIC Educational Resources Information Center

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…

  7. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  8. Audiovisual Instruction in Pediatric Pharmacy Practice.

    ERIC Educational Resources Information Center

    Mutchie, Kelly D.; And Others

    1981-01-01

    A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)

  9. Active Methodology in the Audiovisual Communication Degree

    ERIC Educational Resources Information Center

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa

    2010-01-01

    The paper describes the adaptation methods of the active methodologies of the new European higher education area in the new Audiovisual Communication degree under the perspective of subjects related to the area of the interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…

  10. A Selection of Audiovisual Materials on Disabilities.

    ERIC Educational Resources Information Center

    Mayo, Kathleen; Rider, Sheila

    Disabled persons, family members, organizations, and libraries are often looking for materials to help inform, educate, or challenge them regarding the issues surrounding disabilities. This directory of audiovisual materials available from the State Library of Florida includes materials that present ideas and personal experiences covering a range…

  11. Longevity and Depreciation of Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  12. Audiovisual Facilities in Schools in Japan Today.

    ERIC Educational Resources Information Center

    Ministry of Education, Tokyo (Japan).

    This paper summarizes the findings of a national survey conducted for the Ministry of Education, Science, and Culture in 1986 to determine the kinds of audiovisual equipment available in Japanese schools, together with the rate of diffusion for the various types of equipment, the amount of teacher participation in training for their use, and the…

  13. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution

    PubMed Central

    Phillips, Steven; Wilson, William H.

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other “structurally related” cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture. PMID:27505411

  14. Effect of Visual Scaffolding and Animation on Students' Performance on Measures of Higher Order Learning

    ERIC Educational Resources Information Center

Kidwai, Khusro; Munyofu, Mine; Swain, William J.; Ausman, Bradley D.; Lin, Huifen; Dwyer, Francis

    2001-01-01

    Animation is being used extensively for instructional purposes; however, it has not been found to be effective on measures of higher order learning (concepts, rules, procedures) within the knowledge acquisition and knowledge integration domains. The purpose of this study was to examine the instructional effectiveness of two visual scaffolding…

  15. Developing Student-Centered Learning Model to Improve High Order Mathematical Thinking Ability

    ERIC Educational Resources Information Center

    Saragih, Sahat; Napitupulu, Elvis

    2015-01-01

The purpose of this research was to develop a student-centered learning model aimed at improving the high order mathematical thinking ability of junior high school students, based on Curriculum 2013, in North Sumatera, Indonesia. The special purpose of this research was to analyze and to formulate the purpose of mathematics lesson in high order…

  16. Changes in Teaching in Order to Help Students with Learning Difficulties Improve in Cypriot Primary Classes

    ERIC Educational Resources Information Center

    Loizou, Florentia

    2016-01-01

    This article aims to explore what changes two Cypriot primary school teachers brought in their teaching in order to help students with learning difficulties improve in their classes. The study was qualitative and used non-participant observation in two primary classrooms in different primary schools and semi-structured interviews with the main…

  17. Learning and Generalization on Asynchrony and Order Tasks at Sound Offset: Implications for Underlying Neural Circuitry

    ERIC Educational Resources Information Center

    Mossbridge, Julia A.; Scissors, Beth N.; Wright, Beverly A.

    2008-01-01

    Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel…

  18. Higher-Order Thinking Development through Adaptive Problem-Based Learning

    ERIC Educational Resources Information Center

    Raiyn, Jamal; Tilchin, Oleg

    2015-01-01

    In this paper we propose an approach to organizing Adaptive Problem-Based Learning (PBL) leading to the development of Higher-Order Thinking (HOT) skills and collaborative skills in students. Adaptability of PBL is expressed by changes in fixed instructor assessments caused by the dynamics of developing HOT skills needed for problem solving,…

  19. Developing Higher Order Reading Comprehension Skills in the Learning Disabled Student: A Non-Basal Approach.

    ERIC Educational Resources Information Center

    Solomon, Sheila

    This practicum study evaluated a non-basal, multidisciplinary, multisensory approach to teaching higher order reading comprehension skills to eight fifth-grade learning-disabled students from low socioeconomic minority group backgrounds. The four comprehension skills were: (1) identifying the main idea; (2) determining cause and effect; (3) making…

  20. The Instructional Effectiveness of Random, Logical and Ordering Theory Generated Learning Hierarchies.

    ERIC Educational Resources Information Center

    Partin, Ronald L.

    The instructional effectiveness of learning programs derived from Gagne-type task analysis, ordering theory analysis, and random sequenced presentation of complex intellectual skills were investigated. Fifty-seven high school students completed a self-instructional program derived from one of the three sequences. No significant differences were…

  1. Assessment Choices to Target Higher Order Learning Outcomes: The Power of Academic Empowerment

    ERIC Educational Resources Information Center

    McNeill, Margot; Gosper, Maree; Xu, Jing

    2012-01-01

    Assessment of higher order learning outcomes such as critical thinking, problem solving and creativity has remained a challenge for universities. While newer technologies such as social networking tools have the potential to support these intended outcomes, academics' assessment practice is slow to change. University mission statements and unit…

  2. Linking memory and language: Evidence for a serial-order learning impairment in dyslexia.

    PubMed

    Bogaerts, Louisa; Szmalec, Arnaud; Hachmann, Wibke M; Page, Mike P A; Duyck, Wouter

    2015-01-01

    The present study investigated long-term serial-order learning impairments, operationalized as reduced Hebb repetition learning (HRL), in people with dyslexia. In a first multi-session experiment, we investigated both the persistence of a serial-order learning impairment as well as the long-term retention of serial-order representations, both in a group of Dutch-speaking adults with developmental dyslexia and in a matched control group. In a second experiment, we relied on the assumption that HRL mimics naturalistic word-form acquisition and we investigated the lexicalization of novel word-forms acquired through HRL. First, our results demonstrate that adults with dyslexia are fundamentally impaired in the long-term acquisition of serial-order information. Second, dyslexic and control participants show comparable retention of the long-term serial-order representations in memory over a period of 1 month. Third, the data suggest weaker lexicalization of newly acquired word-forms in the dyslexic group. We discuss the integration of these findings into current theoretical views of dyslexia.

  3. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Li, Junmin

    2016-07-01

In this paper, the adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth order (M ≥ 2) distributed multi-agent systems. Every follower agent has a higher order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are a higher order nonlinear system available only to a portion of the follower agents. With distributed initial state learning, the unified distributed protocols combining time-domain and iteration-domain adaptive laws guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple robotic system are provided to demonstrate the performance of the proposed approach.
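The core mechanism of iterative learning control, repeating the same finite-horizon task and correcting the input trajectory from last iteration's tracking error, can be sketched with a basic P-type law. This toy example is far simpler than the paper's adaptive fuzzy scheme; the scalar plant, gains, and reference are illustrative assumptions:

```python
# Toy sketch of a generic P-type iterative learning control update (not
# the paper's adaptive fuzzy scheme): repeat the same finite-horizon task
# and correct the input using last iteration's tracking error,
#   u_{k+1}(t) = u_k(t) + L * e_k(t).

def run_trial(u, a=0.5, b=1.0):
    """Simulate the scalar plant y(t) = a*y(t-1) + b*u(t) over the horizon."""
    y, ys = 0.0, []
    for ut in u:
        y = a * y + b * ut
        ys.append(y)
    return ys

ref = [1.0, 2.0, 3.0, 4.0, 5.0]        # desired output trajectory
u = [0.0] * len(ref)                    # initial input guess
for _ in range(30):                     # iterations of the same task
    y = run_trial(u)
    e = [r - yi for r, yi in zip(ref, y)]
    u = [ut + 0.5 * et for ut, et in zip(u, e)]  # learning gain L = 0.5
err = max(abs(r - yi) for r, yi in zip(ref, run_trial(u)))
print(err < 1e-3)  # → True: tracking error shrinks across iterations
```

Convergence here follows because the iteration-to-iteration error map is a contraction (|1 - L*b| < 1); the paper's contribution is handling unknown nonlinear dynamics, disturbances, and a distributed communication topology on top of this basic repetition-and-correction idea.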

  4. Information-Driven Active Audio-Visual Source Localization.

    PubMed

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application.
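The reason movement is essential in this system is that a single bearing measurement cannot fix the source's distance; fusing bearings taken from different robot poses lets a particle filter collapse the position estimate. The sketch below illustrates only that bearing-only idea with a toy 2D particle filter; the poses, noise model, and particle count are assumptions, not the paper's implementation:

```python
import math
import random

# Toy bearing-only particle filter (illustrative, not the paper's system):
# each measurement gives only a direction to the source, so the robot
# measures from several poses and reweights/resamples particles until
# the position estimate collapses onto the source.

def bearing(from_xy, to_xy):
    return math.atan2(to_xy[1] - from_xy[1], to_xy[0] - from_xy[0])

def update(particles, robot_xy, measured, noise=0.2):
    """Reweight particles by bearing agreement, then resample."""
    weights = []
    for p in particles:
        d = bearing(robot_xy, p) - measured
        err = math.atan2(math.sin(d), math.cos(d))   # wrap to [-pi, pi]
        weights.append(math.exp(-0.5 * (err / noise) ** 2))
    total = sum(weights)
    rng = random.Random(42)
    return rng.choices(particles, weights=[w / total for w in weights],
                       k=len(particles))

rng = random.Random(0)
source = (3.0, 2.0)
particles = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(2000)]
# The robot moves between measurements, observing the source's bearing
# from three different poses:
for robot_xy in [(0.0, 0.0), (4.0, -2.0), (-2.0, 4.0)]:
    particles = update(particles, robot_xy, bearing(robot_xy, source))
est = (sum(p[0] for p in particles) / len(particles),
       sum(p[1] for p in particles) / len(particles))
print(round(est[0], 1), round(est[1], 1))  # near (3.0, 2.0)
```

The paper's information-gain mechanism goes a step further: instead of moving to fixed poses, it picks the next action expected to reduce the particle cloud's uncertainty the most.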

  5. HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud

    SciTech Connect

    An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.

    1988-01-01

    A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the ''building blocks'' or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.

  6. Ordering and finding the best of K > 2 supervised learning algorithms.

    PubMed

    Yildiz, Olcay Taner; Alpaydin, Ethem

    2006-03-01

    Given a data set and a number of supervised learning algorithms, we would like to find the algorithm with the smallest expected error. Existing pairwise tests allow a comparison of two algorithms only; range tests and ANOVA check whether multiple algorithms have the same expected error and cannot be used for finding the smallest. We propose a methodology, the MultiTest algorithm, whereby we order supervised learning algorithms taking into account 1) the result of pairwise statistical tests on expected error (what the data tells us), and 2) our prior preferences, e.g., due to complexity. We define the problem in graph-theoretic terms and propose an algorithm to find the "best" learning algorithm in terms of these two criteria, or in the more general case, order learning algorithms in terms of their "goodness." Simulation results using five classification algorithms on 30 data sets indicate the utility of the method. Our proposed method can be generalized to regression and other loss functions by using a suitable pairwise test.
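A toy version of the ordering idea (not the authors' MultiTest code): treat each significant pairwise result as a directed preference, fall back on a prior preference such as model complexity when the test finds no difference, and sort with that relation. The algorithms, win table, and complexity scores below are all hypothetical:

```python
algorithms = ["linear", "tree", "svm", "knn"]
complexity = {"linear": 1, "knn": 2, "tree": 3, "svm": 4}  # prior preference

# Hypothetical pairwise-test outcomes: (a, b) present means a pairwise
# statistical test found `a` to have significantly smaller expected error.
wins = {
    ("svm", "linear"): True,
    ("svm", "knn"): True,
    ("svm", "tree"): True,
    ("tree", "knn"): True,
}

def better(a, b):
    """a precedes b if a significantly beats b; on a statistical tie,
    prefer the less complex algorithm (what the prior tells us)."""
    if wins.get((a, b)):
        return True
    if wins.get((b, a)):
        return False
    return complexity[a] < complexity[b]

def order(algos):
    # Selection sort with the `better` relation; fine for a handful
    # of algorithms (real MultiTest works on the test-result graph).
    remaining = list(algos)
    ordered = []
    while remaining:
        best = remaining[0]
        for a in remaining[1:]:
            if better(a, best):
                best = a
        ordered.append(best)
        remaining.remove(best)
    return ordered

ranking = order(algorithms)
```

Note how "linear" outranks "tree" purely through the complexity prior, because no pairwise test separates them — the two criteria the paper combines.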

  7. Attributes of Quality in Audiovisual Materials for Health Professionals.

    ERIC Educational Resources Information Center

    Suter, Emanuel; Waddell, Wendy H.

    1981-01-01

    Defines attributes of quality in content, instructional design, technical production, and packaging of audiovisual materials used in the education of health professionals. Seven references are listed. (FM)

  8. Stuttering and speech naturalness: audio and audiovisual judgments.

    PubMed

    Martin, R R; Haroldson, S K

    1992-06-01

    Unsophisticated raters, using 9-point interval scales, judged speech naturalness and stuttering severity of recorded stutterer and nonstutterer speech samples. Raters judged separately the audio-only and audiovisual presentations of each sample. For speech naturalness judgments of stutterer samples, raters invariably judged the audiovisual presentation more unnatural than the audio presentation of the same sample; but for the nonstutterer samples, there was no difference between audio and audiovisual naturalness ratings. Stuttering severity ratings did not differ significantly between audio and audiovisual presentations of the same samples. Rater reliability, interrater agreement, and intrarater agreement for speech naturalness judgments were assessed.

  9. Cortical Integration of Audio-Visual Information

    PubMed Central

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  10. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention.
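The race-model-violation measure used in such studies compares the cumulative RT distribution for audiovisual targets against Miller's (1982) bound, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t); wherever the left side exceeds the bound, responses are faster than any parallel race of unisensory processes allows. A sketch with made-up reaction times:

```python
def ecdf(sample, t):
    """Empirical cumulative probability P(RT <= t)."""
    return sum(rt <= t for rt in sample) / len(sample)

# Hypothetical reaction times (ms) for unimodal and audiovisual targets.
rt_a  = [310, 325, 340, 360, 390, 410, 430, 450, 480, 520]
rt_v  = [300, 320, 345, 365, 385, 405, 435, 455, 475, 515]
rt_av = [240, 255, 270, 285, 300, 320, 345, 370, 400, 440]

def race_violation(t):
    """Positive values mean the race-model bound is violated at time t,
    i.e. evidence for integration rather than a parallel race."""
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) - bound

# Amount of violation accumulated over the fast tail of the distribution;
# the study compares this quantity between cued and uncued targets.
total_violation = sum(max(0.0, race_violation(t)) for t in range(240, 320, 10))
```

With these illustrative data the fastest audiovisual responses clearly exceed the bound; a decrease in this quantity under exogenous cueing is what the authors report as reduced MSI.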

  11. Assessment of the learning curve from the California Verbal Learning Test-Children's Version with the first-order system transfer function.

    PubMed

    Stepanov, Igor I; Abramson, Charles I; Warschausky, Seth

    2011-01-01

A mathematical model is proposed to measure the learning curve in the California Verbal Learning Test-Children's Version. The model is based on the first-order system transfer function in the form Y = B3*exp[-B2*(X-1)] + B4*{1-exp[-B2*(X-1)]}, where X is the trial number, Y is the number of recalled correct words, B2 is the learning rate, B3 is interpreted as readiness to learn, and B4 as the ability to learn. Children's readiness to learn and ability to learn were lower than adults'. Modeling revealed that girls had greater readiness to learn and ability to learn than boys.
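The transfer-function model above can be coded directly; the parameter values here are arbitrary illustrations, not fitted estimates from the study:

```python
import math

def learning_curve(x, b2, b3, b4):
    """Y = B3*exp[-B2*(X-1)] + B4*{1 - exp[-B2*(X-1)]}.
    b3 ("readiness to learn") is exactly the trial-1 score, since the
    decay term equals 1 at X = 1; b4 ("ability to learn") is the
    asymptote the curve approaches; b2 sets how fast it gets there."""
    decay = math.exp(-b2 * (x - 1))
    return b3 * decay + b4 * (1 - decay)

# At trial 1 the curve equals B3; after many trials it approaches B4.
y1     = learning_curve(1,  b2=0.5, b3=5.0, b4=12.0)
y_late = learning_curve(50, b2=0.5, b3=5.0, b4=12.0)
```

This makes the paper's interpretation concrete: group differences in B3 show up as different starting points, while differences in B4 show up as different plateaus.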

  12. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals.

  13. Simulated and Virtual Science Laboratory Experiments: Improving Critical Thinking and Higher-Order Learning Skills

    NASA Astrophysics Data System (ADS)

    Simon, Nicole A.

Virtual laboratory experiments using interactive computer simulations are not being employed as viable alternatives to laboratory science curriculum at extensive enough rates within higher education. Rote traditional lab experiments are currently the norm and are not addressing inquiry, Critical Thinking, and cognition throughout the laboratory experience, linking with educational technologies (Pyatt & Sims, 2007; 2011; Trundle & Bell, 2010). A causal-comparative quantitative study was conducted with 150 learners enrolled at a two-year community college, to determine the effects of simulation laboratory experiments on Higher-Order Learning, Critical Thinking Skills, and Cognitive Load. The treatment population used simulated experiments, while the non-treatment sections performed traditional expository experiments. A comparison was made using the Revised Two-Factor Study Process survey, Motivated Strategies for Learning Questionnaire, and the Scientific Attitude Inventory survey, using a Repeated Measures ANOVA test for treatment or non-treatment. A main effect of simulated laboratory experiments was found for both Higher-Order Learning, [F(1, 148) = 30.32, p = 0.00, eta-squared = 0.12], and Critical Thinking Skills, [F(1, 148) = 14.64, p = 0.00, eta-squared = 0.17], such that simulations showed greater increases than traditional experiments. Post-lab treatment group self-reports indicated increased marginal means (+4.86) in Higher-Order Learning and Critical Thinking Skills, compared to the non-treatment group (+4.71). Simulations also improved the scientific skills and mastery of basic scientific subject matter. It is recommended that additional research recognize that learners' Critical Thinking Skills change due to different instructional methodologies that occur throughout a semester.

  14. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  15. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed Central

    Lerner, Itamar; Armstrong, Blair C.; Frost, Ram

    2014-01-01

Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding to be a core and universal principle of the reading process. Here we argue that such an approach neither captures cross-linguistic differences in transposed-letter effects nor explains them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order is also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521

  16. Word sense disambiguation via high order of learning in complex networks

    NASA Astrophysics Data System (ADS)

    Silva, Thiago C.; Amancio, Diego R.

    2012-06-01

Complex networks have been employed to model many real systems and as a modeling tool in a myriad of applications. In this paper, we apply the framework of complex networks to the problem of supervised classification in the word disambiguation task, which consists in deriving a function from the supervised (or labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. On the other hand, the human (animal) brain performs both low- and high-level orders of learning and readily identifies patterns according to the semantic meaning of the input data. In this paper, we apply a hybrid technique which encompasses both types of learning in the field of word sense disambiguation and show that the high-level order of learning can indeed improve the accuracy rate of the model. This evidence demonstrates that the internal structures formed by the words present patterns that, generally, cannot be correctly unveiled by traditional techniques alone. Finally, we exhibit the behavior of the model for different weights of the low- and high-level classifiers by plotting decision boundaries. This study helps one to better understand the effectiveness of the model.

  17. Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-10-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.

  18. Cerebellar contribution to higher and lower order rule learning and cognitive flexibility in mice.

    PubMed

    Dickson, P E; Cairns, J; Goldowitz, D; Mittleman, G

    2017-03-14

Cognitive flexibility has traditionally been considered a frontal lobe function. However, converging evidence suggests involvement of a larger brain circuit which includes the cerebellum. Reciprocal pathways connecting the cerebellum to the prefrontal cortex provide a biological substrate through which the cerebellum may modulate higher cognitive functions, and it has been observed that cognitive inflexibility and cerebellar pathology co-occur in psychiatric disorders (e.g., autism, schizophrenia, addiction). However, the degree to which the cerebellum contributes to distinct forms of cognitive flexibility and rule learning is unknown. We tested lurcher↔wildtype aggregation chimeras which lose 0-100% of cerebellar Purkinje cells during development on a touchscreen-mediated attentional set-shifting task to assess the contribution of the cerebellum to higher and lower order rule learning and cognitive flexibility. Purkinje cells, the sole output of the cerebellar cortex, ranged from 0 to 108,390 in tested mice. Reversal learning and extradimensional set-shifting were impaired in mice with ⩾95% Purkinje cell loss. Cognitive deficits were unrelated to motor deficits in ataxic mice. Acquisition of a simple visual discrimination and an attentional-set were unrelated to Purkinje cells. A positive relationship was observed between Purkinje cells and errors when exemplars from a novel, non-relevant dimension were introduced. Collectively, these data suggest that the cerebellum contributes to higher order cognitive flexibility, lower order cognitive flexibility, and attention to novel stimuli, but not the acquisition of higher and lower order rules. These data indicate that the cerebellar pathology observed in psychiatric disorders may underlie deficits involving cognitive flexibility and attention to novel stimuli.

  19. A Step Into Service Learning Is A Step Into Higher Order Thinking

    NASA Astrophysics Data System (ADS)

    O'Connell, S.

    2010-12-01

Students, especially beginning college students, often consider science courses to be about remembering and regurgitating rather than creativity, and of little social relevance. As scientists we know this isn’t true. How do we counteract this sentiment among students? Incorporating service learning, probably better called project learning, into our classes is one way. As one “non-science” student, who was taking two science service-learning courses, said, “If it’s a service-learning course you know it’s going to be interesting.” Service learning means that some learning takes place in the community. The community component increases understanding of the material being studied, promotes higher order thinking, and provides a benefit for someone else. Students have confirmed that the experience shows them that their knowledge is needed by the community and, for some, reinforces their commitment to continued civic engagement. I’ll give three examples with the community activity growing in importance in the course and in the community: a single exercise, a small project, and a focus of the class. All of the activities use reflective writing to increase analysis and synthesis. An example of a single exercise could be participating in an event related to your course, for example a zoning board meeting or a trip to a wastewater treatment plant. Preparation for the trip should include reading. After the event students synthesize and analyze the activity through a series of questions emphasizing reflection. A two- to four-class assignment might include expanding the single-day activity, or students familiarizing themselves with a course topic, interviewing a person, preparing a podcast of the interview, and reflecting upon the experience. The most comprehensive approach is where the class focus is on a community project, e.g. Tim Ku’s geochemistry course (this session). Another class that lends itself easily to a comprehensive service learning approach is Geographic Information

  20. Infant Perception of Audio-Visual Speech Synchrony

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2010-01-01

    Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…

  1. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    ERIC Educational Resources Information Center

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  2. Audiovisual Media and the Disabled. AV in Action 1.

    ERIC Educational Resources Information Center

    Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).

    Designed to provide information on public library services to the handicapped, this pamphlet contains case studies from three different countries on various aspects of the provision of audiovisual services to the disabled. The contents include: (1) "The Value of Audiovisual Materials in a Children's Hospital in Sweden" (Lis Byberg); (2)…

  3. The Audio-Visual Marketing Handbook for Independent Schools.

    ERIC Educational Resources Information Center

    Griffith, Tom

    This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…

  4. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  5. Trigger Videos on the Web: Impact of Audiovisual Design

    ERIC Educational Resources Information Center

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  6. Knowledge Generated by Audiovisual Narrative Action Research Loops

    ERIC Educational Resources Information Center

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  7. Strategy-effects in prefrontal cortex during learning of higher-order S-R rules.

    PubMed

    Wolfensteller, Uta; von Cramon, D Yves

    2011-07-15

    All of us regularly face situations that require the integration of the available information at hand with the established rules that guide behavior in order to generate the most appropriate action. But where individuals differ from one another is most certainly in terms of the different strategies that are adopted during this process. A previous study revealed differential brain activation patterns for the implementation of well established higher-order stimulus-response (S-R) rules depending on inter-individual strategy differences (Wolfensteller and von Cramon, 2010). This raises the question of how these strategies evolve or which neurocognitive mechanisms underlie these inter-individual strategy differences. Using functional magnetic resonance imaging (fMRI), the present study revealed striking strategy-effects across regions of the lateral prefrontal cortex during the implementation of higher-order S-R rules at an early stage of learning. The left rostrolateral prefrontal cortex displayed a quantitative strategy-effect, such that activation during rule integration based on a mismatch was related to the degree to which participants continued to rely on rule integration. A quantitative strategy ceiling effect was observed for the left inferior frontal junction area. Conversely, the right inferior frontal gyrus displayed a qualitative strategy-effect such that participants who at a later point relied on an item-based strategy showed stronger activations in this region compared to those who continued with the rule integration strategy. Together, the present findings suggest that a certain amount of rule integration is mandatory when participants start to learn higher-order rules. The more efficient item-based strategy that evolves later appears to initially require the recruitment of additional cognitive resources in order to shield the currently relevant S-R association from interfering information.

  8. Interpolation-based reduced-order modelling for steady transonic flows via manifold learning

    NASA Astrophysics Data System (ADS)

    Franz, T.; Zimmermann, R.; Görtz, S.; Karcher, N.

    2014-03-01

    This paper presents a parametric reduced-order model (ROM) based on manifold learning (ML) for use in steady transonic aerodynamic applications. The main objective of this work is to derive an efficient ROM that exploits the low-dimensional nonlinear solution manifold to ensure an improved treatment of the nonlinearities involved in varying the inflow conditions to obtain an accurate prediction of shocks. The reduced-order representation of the data is derived using the Isomap ML method, which is applied to a set of sampled computational fluid dynamics (CFD) data. In order to develop a ROM that has the ability to predict approximate CFD solutions at untried parameter combinations, Isomap is coupled with an interpolation method to capture the variations in parameters like the angle of attack or the Mach number. Furthermore, an approximate local inverse mapping from the reduced-order representation to the full CFD solution space is introduced. The proposed ROM, called Isomap+I, is applied to the two-dimensional NACA 64A010 airfoil and to the 3D LANN wing. The results are compared to those obtained by proper orthogonal decomposition plus interpolation (POD+I) and to the full-order CFD model.
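The POD+I baseline mentioned at the end of this abstract is the simplest piece to sketch. Below, synthetic one-parameter "snapshots" stand in for sampled CFD solutions; the ROM projects them onto a truncated SVD basis and interpolates the modal coefficients at an untried parameter (Isomap+I replaces this linear SVD step with the nonlinear Isomap embedding plus an approximate inverse mapping):

```python
import numpy as np

# Synthetic stand-in for CFD snapshots: each column is a "flow solution"
# sampled at one parameter value (e.g. angle of attack). These toy fields
# lie exactly on a low-dimensional manifold, unlike real transonic data.
params = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack(
    [np.sin(2 * np.pi * x) * p + np.cos(2 * np.pi * x) for p in params]
)

# POD basis: left singular vectors of the snapshot matrix, truncated to r modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2
basis = U[:, :r]                 # (200, r) spatial modes
coeffs = basis.T @ snapshots     # (r, n_params) modal coefficients

def predict(p_new):
    """Interpolate each modal coefficient in the parameter, then lift
    back to full space: the POD + interpolation (POD+I) prediction."""
    c = np.array([np.interp(p_new, params, coeffs[i]) for i in range(r)])
    return basis @ c

# Predict at an untried parameter combination and compare to the truth.
approx = predict(2.5)
truth = np.sin(2 * np.pi * x) * 2.5 + np.cos(2 * np.pi * x)
err = np.linalg.norm(approx - truth) / np.linalg.norm(truth)
```

On this contrived linear example POD+I is essentially exact; the paper's point is that for shock-dominated transonic flows the solution manifold is nonlinear, which is where the Isomap-based ROM outperforms this baseline.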

  9. The Natural Statistics of Audiovisual Speech

    PubMed Central

    Chandrasekaran, Chandramouli; Trubanova, Andrea; Stillittano, Sébastien; Caplier, Alice; Ghazanfar, Asif A.

    2009-01-01

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver. PMID:19609344
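The first statistic reported — the correlation between mouth-opening area and the acoustic envelope, with mouth movements leading the voice — is easy to reproduce on synthetic signals. The 4 Hz modulation and 150 ms lag below are illustrative choices within the ranges the authors report (2–7 Hz; 100–300 ms), not their data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation via the standard formula."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

# Synthetic stand-ins: a 4 Hz "syllable" modulation drives both the mouth
# area and, 150 ms later, the acoustic envelope.
fs = 100                       # samples per second
t = [i / fs for i in range(300)]
lag = 0.15                     # mouth leads the voice by ~150 ms
mouth_area = [1 + math.sin(2 * math.pi * 4 * ti) for ti in t]
envelope = [1 + math.sin(2 * math.pi * 4 * (ti - lag)) for ti in t]

# Correlation at zero lag vs. after shifting the envelope back by the lag:
shift = round(lag * fs)
r_zero = pearson(mouth_area, envelope)
r_aligned = pearson(mouth_area[:-shift], envelope[shift:])
```

Scanning `shift` over a range and taking the peak of the lagged correlation is the standard way to recover the mouth-to-voice delay the study describes.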

  11. Infant perception of audio-visual speech synchrony in familiar and unfamiliar fluent speech.

    PubMed

    Pons, Ferran; Lewkowicz, David J

    2014-06-01

    We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we exposed a group of monolingual Spanish- and Catalan-learning 8-month-old infants to a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video where the audio stream now preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. Results indicated that in both experiments, infants detected a 666 and a 500 ms asynchrony. That is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared to previous results from infant studies with isolated audiovisual syllables, these results show that infants are more sensitive to A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infant response to linguistic input at this age.

  12. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  13. The Use of Audio-Visual Aids in Teaching: A Study in the Saudi Girls Colleges.

    ERIC Educational Resources Information Center

    Al-Sharhan, Jamal A.

    1993-01-01

    A survey of faculty in girls colleges in Riyadh, Saudi Arabia, investigated teaching experience, academic rank, importance of audiovisual aids, teacher training, availability of audiovisual centers, and reasons for not using audiovisual aids. Proposes changes to increase use of audiovisual aids: more training courses, more teacher release time,…

  14. The impact of constructivist teaching strategies on the acquisition of higher order cognition and learning

    NASA Astrophysics Data System (ADS)

    Merrill, Alison Saricks

    The purpose of this quasi-experimental quantitative mixed design study was to compare the effectiveness of brain-based teaching strategies versus a traditional lecture format in the acquisition of higher order cognition as determined by test scores. A second purpose was to elicit student feedback about the two teaching approaches. The design was a 2 x 2 x 2 factorial design study with repeated measures on the last factor. The independent variables were type of student, teaching method, and a within group change over time. Dependent variables were a between group comparison of pre-test, post-test gain scores and a within and between group comparison of course examination scores. A convenience sample of students enrolled in medical-surgical nursing was used. One group (n=36) was made up of traditional students and the other group (n=36) consisted of second-degree students. Four learning units were included in this study. Pre- and post-tests were given on the first two units. Course examination scores from all four units were compared. In one cohort two of the units were taught via lecture format and two using constructivist activities. These methods were reversed for the other cohort. The conceptual basis for this study derives from neuroscience and cognitive psychology. Learning is defined as the growth of new dendrites. Cognitive psychologists view learning as a constructive activity in which new knowledge is built on an internal foundation of existing knowledge. Constructivist teaching strategies are designed to stimulate the brain's natural learning ability. There was a statistically significant difference based on type of teaching strategy (t = -2.078, df = 270, p = .039, d = .25) with higher mean scores on the examinations covering brain-based learning units. There was no statistical significance based on type of student. Qualitative data collection was conducted in an on-line forum at the end of the semester. Students had overall positive responses about the

  15. Lexical Learning in Bilingual Adults: The Relative Importance of Short-Term Memory for Serial Order and Phonological Knowledge

    ERIC Educational Resources Information Center

    Majerus, Steve; Poncelet, Martine; Van der Linden, Martial; Weekes, Brendan S.

    2008-01-01

    Studies of monolingual speakers have shown a strong association between lexical learning and short-term memory (STM) capacity, especially STM for serial order information. At the same time, studies of bilingual speakers suggest that phonological knowledge is the main factor that drives lexical learning. This study tested these two hypotheses…

  16. GRAPE - GIS Repetition Using Audio-Visual Repetition Units and its Learning Effectiveness

    NASA Astrophysics Data System (ADS)

    Niederhuber, M.; Brugger, S.

    2011-09-01

    A new audio-visual learning medium has been developed at the Department of Environmental Sciences at ETH Zurich (Switzerland), for use in geographical information sciences (GIS) courses. This new medium, presented in the form of Repetition Units, allows students to review and consolidate the most important learning concepts on an individual basis. The new material consists of: a) a short enhanced podcast (recorded and spoken slide show) with a maximum duration of 5 minutes, which focuses on only one important aspect of a lecture's theme; b) one or two relevant exercises, covering different cognitive levels of learning, with a maximum duration of 10 minutes; and c) solutions for the exercises. During a pilot phase in 2010, six Repetition Units were produced by the lecturers. Twenty more Repetition Units will be produced by our students during the fall semesters of 2011 and 2012. The project is accompanied by a 5-year study (2009 - 2013) that investigates learning success with the new material, focussing on the question of whether the new material helps to consolidate and refresh basic GIS knowledge; this will be analysed through longitudinal studies. Initial results indicate that the new medium helps to refresh knowledge, as the test groups scored higher than the control group. These results are encouraging and suggest that the new material, with its combination of short audio-visual podcasts and relevant exercises, helps to consolidate students' knowledge.

  17. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan

    PubMed Central

    De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T.

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918

  18. Audiovisual Interval Size Estimation Is Associated with Early Musical Training

    PubMed Central

    Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134

  19. Going Beyond a Mean-field Model for the Learning Cortex: Second-Order Statistics

    PubMed Central

    Steyn-Ross, Moira L.; Steyn-Ross, D. A.; Sleigh, J. W.

    2008-01-01

    Mean-field models of the cortex have been used successfully to interpret the origin of features on the electroencephalogram under situations such as sleep, anesthesia, and seizures. In a mean-field scheme, dynamic changes in synaptic weights can be considered through fluctuation-based Hebbian learning rules. However, because such implementations deal with population-averaged properties, they are not well suited to memory and learning applications where individual synaptic weights can be important. We demonstrate that, through an extended system of equations, the mean-field models can be developed further to look at higher-order statistics, in particular, the distribution of synaptic weights within a cortical column. This allows us to draw some general conclusions on memory through a mean-field scheme. Specifically, we expect large changes in the standard deviation of the distribution of synaptic weights when fluctuations in the mean soma potentials are large, such as during the transitions between the “up” and “down” states of slow-wave sleep. Moreover, a cortex that has low structure in its neuronal connections is most likely to decrease its standard deviation in the weights of excitatory to excitatory synapses, relative to the square of the mean, whereas a cortex with strongly patterned connections is most likely to increase this measure. This suggests that fluctuations are used to condense the coding of strong (presumably useful) memories into fewer, but dynamic, neuron connections, while at the same time removing weaker (less useful) memories. PMID:19669541

  20. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
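
    The cost trade-off described above can be sketched as follows. This is a minimal EOQ-style model with an assumed power-law learning discount on setup cost, minimized by the kind of simple one-dimensional search the article mentions; it is not the article's actual formulation, and all parameter values (`D`, `K`, `h`, `b`) are invented.

```python
import math

D = 1200.0   # annual demand (units), assumed
K = 80.0     # setup/order cost per lot, assumed
h = 2.5      # holding cost per unit per year, assumed
b = 0.1      # learning exponent: later lots are cheaper to set up, assumed

def total_cost(q, lots_produced=10):
    # Learning effect: effective setup cost shrinks as cumulative lots grow.
    k_eff = K * lots_produced ** (-b)
    # Classic EOQ trade-off: setup cost per year + holding cost per year.
    return D / q * k_eff + h * q / 2

# Simple grid search over candidate integer lot sizes.
best_q = min(range(1, 2001), key=total_cost)
classic_q = math.sqrt(2 * D * K / h)   # EOQ without learning, for contrast
print(best_q, round(classic_q))
```

    Because learning lowers the effective setup cost, the optimal lot size under learning comes out smaller than the classic EOQ, matching the intuition that cheaper setups favor more frequent, smaller lots.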

  1. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    In real applications, the low resolution (LR) images obtained are often noisy, but many existing image magnification algorithms cannot obtain good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm combines the advantages of regularization-based and learning-based methods. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while suppressing the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm also provides better visual quality on natural LR images.
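
    The first step can be illustrated with a minimal total-variation smoothing sketch: gradient descent on a smoothed TV energy for a noisy piecewise-constant image. The paper's actual model additionally couples the TV term to a magnification constraint and a learned dictionary, both omitted here; all parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                       # simple square test "image"
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

def tv_denoise(img, lam=0.2, steps=300, lr=0.2):
    """Gradient descent on 0.5*||u-img||^2 + lam*TV(u), smoothed TV."""
    u = img.copy()
    for _ in range(steps):
        # Forward differences (periodic boundary via roll).
        dx = np.roll(u, -1, axis=1) - u
        dy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(dx**2 + dy**2 + 1e-8)
        px, py = dx / mag, dy / mag
        # Divergence of the normalized gradient field (backward differences).
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u -= lr * ((u - img) - lam * div)
    return u

smoothed = tv_denoise(noisy)
err_before = np.abs(noisy - clean).mean()
err_after = np.abs(smoothed - clean).mean()
print(round(err_before, 3), round(err_after, 3))
```

    TV regularization suppresses the noise in the flat regions while largely preserving the square's edges, which is why it is a common first stage before a detail-restoring dictionary step.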

  2. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like support vector machine (supervised) and transductive support vector machine (semi-supervised), are invaluable tools for the applications of content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. The traditional schema of concatenating features of different views into a long vector is inappropriate, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with the existing strategies, our approach adopts the high-order distance obtained from the hypergraph to replace pairwise distance in estimating the probability matrix of data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective functions of HD-MSL and obtain the view-combination coefficients and classification scores simultaneously. Experiments on two real world datasets demonstrate the effectiveness of HD-MSL in image classification.
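
    The idea of learning a combination coefficient per view can be sketched with a toy nearest-neighbour example: two synthetic "views" of the same samples, one informative and one noisy, combined with a weight chosen by grid search. This is not the HD-MSL algorithm (no hypergraph-based high-order distance, no probabilistic framework); all data and names here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
labels = np.repeat([0, 1], n // 2)
# Two "views": view A separates the classes well, view B is pure noise.
view_a = labels[:, None] * 2.0 + 0.5 * rng.standard_normal((n, 3))
view_b = rng.standard_normal((n, 3))

def pairwise(x):
    # Euclidean distance matrix between all rows of x.
    d = x[:, None, :] - x[None, :, :]
    return np.sqrt((d**2).sum(-1))

da, db = pairwise(view_a), pairwise(view_b)

def loo_accuracy(dist):
    # Leave-one-out 1-nearest-neighbour accuracy under a distance matrix.
    d = dist + np.eye(n) * 1e9          # exclude self-matches
    nearest = d.argmin(axis=1)
    return (labels[nearest] == labels).mean()

# "Learn" a per-view combination weight by simple grid search.
weights = np.linspace(0, 1, 11)
best_w = max(weights, key=lambda w: loo_accuracy(w * da + (1 - w) * db))
best_acc = loo_accuracy(best_w * da + (1 - best_w) * db)
print(best_w, best_acc)
```

    The search assigns most of the weight to the informative view, which is the role the learned combination coefficients play in exploiting complementary multiview information.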

  3. Modeling Stock Order Flows and Learning Market-Making from Data

    DTIC Science & Technology

    2002-06-01

    and demand. In this paper, we demonstrate a novel method for modeling the market as a dynamic system and a reinforcement learning algorithm that learns...difficult dynamic system. Our reinforcement learning algorithm, based on likelihood ratios, is run on this partially-observable environment. We demonstrate learning results for two separate real stocks.

  4. Effects of higher-order cognitive strategy training on gist-reasoning and fact-learning in adolescents.

    PubMed

    Gamino, Jacquelyn F; Chapman, Sandra B; Hull, Elizabeth L; Lyon, G Reid

    2010-01-01

    Improving the reasoning skills of adolescents across the United States has become a major concern for educators and scientists who are dedicated to identifying evidence-based protocols to improve student outcome. This small-sample randomized, controlled pilot study sought to determine the efficacy of higher-order cognitive training on gist-reasoning and fact-learning in an inner-city public middle school. The study compared gist-reasoning and fact-learning performances after training in a smaller sample when tested in Spanish, the native language of many of the students, versus English. The 54 eighth grade students who participated in this pilot study were enrolled in an urban middle school, predominantly from lower socio-economic status families, and were primarily of minority descent. The students were randomized into one of three groups, one that learned cognitive strategies promoting abstraction of meaning, a group that learned rote memory strategies, or a control group to ascertain the impact of each program on gist-reasoning and fact-learning from text-based information. We found that the students who had cognitive strategy instruction that entailed abstraction of meaning significantly improved their gist-reasoning and fact-learning ability. The students who learned rote memory strategies significantly improved their fact-learning scores from a text but not gist-reasoning ability. The control group showed no significant change in either gist-reasoning or fact-learning ability. A trend toward significant improvement in overall reading scores for the group that learned to abstract meaning, as well as a significant correlation between gist-reasoning ability and the critical-thinking measure on a state-mandated standardized reading test, was also found. There were no significant differences between English and Spanish performance of gist-reasoning and fact-learning. Our findings suggest that teaching higher-order cognitive strategies facilitates gist-reasoning ability and student

  5. The Effects of Audio-Visual Recorded and Audio Recorded Listening Tasks on the Accuracy of Iranian EFL Learners' Oral Production

    ERIC Educational Resources Information Center

    Drood, Pooya; Asl, Hanieh Davatgari

    2016-01-01

    The ways in which tasks in classrooms have developed and proceeded have received great attention in the field of language teaching and learning, in the sense that they draw learners' attention to competing features such as accuracy, fluency, and complexity. English audiovisual and audio recorded materials have been widely used by teachers and…

  6. Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants

    PubMed Central

    Kopp, Franziska; Dietrich, Claudia

    2013-01-01

    Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071

  7. Teleconferences and Audiovisual Materials in Earth Science Education

    NASA Astrophysics Data System (ADS)

    Cortina, L. M.

    2007-05-01

    Unidad de Educación Continua y a Distancia, Universidad Nacional Autónoma de México, Coyoacán 04510, Mexico. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. However, in some cases these resources go largely unused; factors cited include logistic problems, restricted internet and telecommunication access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. Courses by teleconference require student and teacher effort without physical contact, but both have access to multimedia to support their presentations. Well-selected multimedia material helps students identify and interpret digital information, aiding their understanding of natural phenomena integral to the Earth Sciences. Cooperation with international partnerships, providing access to new materials, experiences, and field practices, will greatly add to our efforts. We will present specific examples of our experiences at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geoscience education.

  8. An audiovisual database of English speech sounds

    NASA Astrophysics Data System (ADS)

    Frisch, Stefan A.; Nikjeh, Dee Adams

    2003-10-01

    A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word initial, word medial, and word final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45 deg angle off center, and ultrasound video of the tongue in the mid-sagittal plane. The files contained in the database are suitable for examination by the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.

  9. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  10. Audio-visual assistance in co-creating transition knowledge

    NASA Astrophysics Data System (ADS)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.

    2013-04-01

    Earth system and climate impact research results point to the tremendous ecological, economic and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes of our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition is rather reliant on pioneers who define new role models, on change agents who mainstream the concept of sufficiency, and on narratives that make different futures appealing. In order for the research community to be able to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge is to be co-created by social and natural science and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodology, terminology, and knowledge levels of those involved are not the same, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way with different levels of detail that provide entry points to users with different requirements. Two examples shall illustrate the advantages and restrictions of the approach.

  11. The Effect of Number and Presentation Order of High-Constraint Sentences on Second Language Word Learning.

    PubMed

    Ma, Tengfei; Chen, Ran; Dunlap, Susan; Chen, Baoguo

    2016-01-01

    This paper presents the results of an experiment that investigated the effects of number and presentation order of high-constraint sentences on semantic processing of unknown second language (L2) words (pseudowords) through reading. All participants were Chinese native speakers who learned English as a foreign language. In the experiment, sentence constraint and order of different constraint sentences were manipulated in English sentences, as well as L2 proficiency level of participants. We found that the number of high-constraint sentences was supportive for L2 word learning except in the condition in which high-constraint exposure was presented first. Moreover, when the number of high-constraint sentences was the same, learning was significantly better when the first exposure was a high-constraint exposure. And no proficiency level effects were found. Our results provided direct evidence that L2 word learning benefited from high quality language input and first presentations of high quality language input.

  13. The Role of Visual Learning in Improving Students' High-Order Thinking Skills

    ERIC Educational Resources Information Center

    Raiyn, Jamal

    2016-01-01

    Various concepts have been introduced to improve students' analytical thinking skills based on problem-based learning (PBL). This paper introduces a new concept to increase students' analytical thinking skills based on a visual learning strategy. Such a strategy has three fundamental components: a teacher, a student, and a learning process. The…

  14. Effects of the audiovisual conflict on auditory early processes.

    PubMed

    Scannella, Sébastien; Causse, Mickaël; Chauveau, Nicolas; Pastor, Josette; Dehais, Frédéric

    2013-07-01

    Auditory alarm misperception is one of the critical events that lead aircraft pilots to erroneous flying decisions. The rarity of these alarms, together with their possible unreliability, may play a role in this misperception. To investigate this hypothesis, we manipulated both audiovisual conflict and sound rarity in a simplified landing task. Behavioral data and event-related potentials (ERPs) of thirteen healthy participants were analyzed. We found that the presentation of a rare auditory signal (i.e., an alarm) incongruent with visual information led to a smaller amplitude of the auditory N100 (i.e., less negative) compared with the condition in which both signals were congruent. Moreover, the incongruity between the visual information and the rare sound did not significantly affect reaction times, suggesting that the rare sound was neglected. We propose that the lower N100 amplitude reflects an early visual-to-auditory gating that depends on the rarity of the sound. In complex aircraft environments, this early effect might be partly responsible for auditory alarm insensitivity. Our results provide a new basis for future aeronautic studies and the development of countermeasures.

  15. Engineering the path to higher-order thinking in elementary education: A problem-based learning approach for STEM integration

    NASA Astrophysics Data System (ADS)

    Rehmat, Abeera Parvaiz

    As we progress into the 21st century, higher-order thinking skills and achievement in science and math are essential to meet the educational requirements of STEM careers. Educators need to think of innovative ways to engage and prepare students for current and future challenges while cultivating an interest among students in STEM disciplines. An instructional pedagogy that can capture students' attention, support interdisciplinary STEM practices, and foster higher-order thinking skills is problem-based learning. Problem-based learning, embedded in the social constructivist view of teaching and learning (Savery & Duffy, 1995), promotes self-regulated learning that is enhanced through exploration, cooperative social activity, and discourse (Fosnot, 1996). This quasi-experimental mixed methods study was conducted with 98 fourth grade students. The study utilized STEM content assessments, a standardized critical thinking test, a STEM attitude survey, a PBL questionnaire, and field notes from classroom observations to investigate the impact of problem-based learning on students' content knowledge, critical thinking, and their attitude towards STEM. Subsequently, it explored students' experiences of STEM integration in a PBL environment. The quantitative results revealed a significant difference between groups in content knowledge, critical thinking skills, and STEM attitude. From the qualitative results, three themes emerged: learning approaches, increased interaction, and design and engineering implementation. Across the overall data set, students described the PBL environment as highly interactive, prompting them to employ multiple approaches, including design and engineering, to solve the problem.

  16. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.

  17. Statistical learning of an auditory sequence and reorganization of acquired knowledge: A time course of word segmentation and ordering.

    PubMed

    Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato

    2017-01-27

    Previous neural studies have supported the hypothesis that statistical learning mechanisms are used broadly across different domains such as language and music. However, these studies have investigated only a single aspect of statistical learning at a time, such as recognizing word boundaries or learning word-order patterns. In this study, we investigated how the two levels of statistical learning, recognizing word boundaries and word ordering, are reflected in neuromagnetic responses, and how acquired statistical knowledge is reorganised when the syntactic rules are revised. Neuromagnetic responses to the Japanese-vowel sequence (a, e, i, o, and u), presented every 0.45 s, were recorded from 14 right-handed Japanese participants. The vowel order was constrained by a Markov stochastic model such that five nonsense words (aue, eao, iea, oiu, and uoi) were chained with an either-or rule: the probability of the forthcoming word was statistically defined (80% for one word; 20% for the other) by the most recent two words. All of the word transition probabilities (80% and 20%) were switched in the middle of the sequence. In the first and second quarters of the sequence, the neuromagnetic responses to words that appeared with higher transitional probability were significantly reduced compared with those that appeared with lower transitional probability. After the switch, this response reduction was replicated in the last quarter of the sequence. Responses to the final vowels in the words were significantly reduced compared with those to the initial vowels in the last quarter of the sequence. The results suggest that both within-word and between-word statistical learning are reflected in neural responses. The present study supports the hypothesis that listeners learn larger structures, such as phrases, first, and subsequently extract smaller structures, such as words, from the learned phrases. The present
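    The Markov design described in this abstract can be sketched in a few lines. The sketch below is a hypothetical reconstruction: the assignment of high- and low-probability successors to each word pair is illustrative, not the study's actual mapping; only the five nonsense words, the 80%/20% either-or rule conditioned on the most recent two words, and the mid-sequence probability switch come from the abstract.

    ```python
    import random

    # The five nonsense words used in the study.
    WORDS = ["aue", "eao", "iea", "oiu", "uoi"]

    def make_transitions(p_high=0.8, seed=0):
        """For every pair of recent words, pick a high- and a low-probability
        successor (illustrative assignment, not the study's actual table)."""
        rng = random.Random(seed)
        table = {}
        for w1 in WORDS:
            for w2 in WORDS:
                high, low = rng.sample(WORDS, 2)
                table[(w1, w2)] = (high, low, p_high)
        return table

    def generate(n_words, table, seed=1):
        """Chain words under the either-or rule: the next word is the
        high-probability successor with probability p, else the other."""
        rng = random.Random(seed)
        seq = rng.sample(WORDS, 2)          # arbitrary starting pair
        while len(seq) < n_words:
            high, low, p = table[tuple(seq[-2:])]
            seq.append(high if rng.random() < p else low)
        return seq

    def switch(table):
        """Swap the 80% and 20% successors, as done mid-sequence in the study."""
        return {k: (low, high, p) for k, (high, low, p) in table.items()}
    ```

    Generating the full stimulus stream would then amount to concatenating a sequence from the original table with one from the switched table.
    
    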

  18. Website Analysis as a Tool for Task-Based Language Learning and Higher Order Thinking in an EFL Context

    ERIC Educational Resources Information Center

    Roy, Debopriyo

    2014-01-01

    Besides focusing on grammar, writing skills, and web-based language learning, researchers in "CALL" and second language acquisition have also argued for the importance of promoting higher-order thinking skills in ESL (English as Second Language) and EFL (English as Foreign Language) classrooms. There is solid evidence supporting the…

  19. Lessons learned from implementation of computerized provider order entry in 5 community hospitals: a qualitative study

    PubMed Central

    2013-01-01

    Background Computerized Provider Order Entry (CPOE) can improve patient safety, quality and efficiency, but hospitals face a host of barriers to adopting CPOE, ranging from resistance among physicians to the cost of the systems. In response to the incentives for meaningful use of health information technology and other market forces, hospitals in the United States are increasingly moving toward the adoption of CPOE. The purpose of this study was to characterize the experiences of hospitals that have successfully implemented CPOE. Methods We used a qualitative approach to observe clinical activities and capture the experiences of physicians, nurses, pharmacists and administrators at five community hospitals in Massachusetts (USA) that adopted CPOE in the past few years. We conducted formal, structured observations of care processes in diverse inpatient settings within each of the hospitals and completed in-depth, semi-structured interviews with clinicians and staff by telephone. After transcribing the audio-recorded interviews, we analyzed the content of the transcripts iteratively, guided by principles of the Immersion and Crystallization analytic approach. Our objective was to identify attitudes, behaviors and experiences that would constitute useful lessons for other hospitals embarking on CPOE implementation. Results Analysis of observations and interviews resulted in findings about the CPOE implementation process in five domains: governance, preparation, support, perceptions and consequences. Successful institutions implemented clear organizational decision-making mechanisms that involved clinicians (governance). They anticipated the need for education and training of a wide range of users (preparation). These hospitals deployed ample human resources for live, in-person training and support during implementation. Successful implementation hinged on the ability of clinical leaders to address and manage perceptions and the fear of change. Implementation proceeded

  20. A New World Order: Connecting Adult Developmental Theory to Learning Disabilities.

    ERIC Educational Resources Information Center

    Price, Lynda; Patton, James R.

    2003-01-01

    This article explores new connections between the current literature base on adult developmental theory and the field of learning disabilities. Emphasis is on theory and practice in self-determination and adult development. Implications for special education, vocational education, general education, and adult learning are discussed. (Contains…

  1. Granularity and the Acquisition of Grammatical Gender: How Order-of-Acquisition Affects What Gets Learned

    ERIC Educational Resources Information Center

    Arnon, Inbal; Ramscar, Michael

    2012-01-01

    Why do adult language learners typically fail to acquire second languages with native proficiency? Does prior linguistic experience influence the size of the "units" adults attend to in learning, and if so, how does this influence what gets learned? Here, we examine these questions in relation to grammatical gender, which adult learners almost…

  2. PBL-GIS in Secondary Geography Education: Does It Result in Higher-Order Learning Outcomes?

    ERIC Educational Resources Information Center

    Liu, Yan; Bui, Elisabeth N.; Chang, Chew-Hung; Lossman, Hans G.

    2010-01-01

    This article presents research on evaluating problem-based learning using GIS technology in a Singapore secondary school. A quasi-experimental research design was carried to test the PBL pedagogy (PBL-GIS) with an experimental group of students and compare their learning outcomes with a control group who were exposed to PBL but not GIS. The…

  3. Auditory and audiovisual inhibition of return.

    PubMed

    Spence, C; Driver, J

    1998-01-01

    Two experiments examined inhibition-of-return (IOR) effects from auditory cues and from preceding auditory targets upon reaction times (RTs) for detecting subsequent auditory targets. Auditory RT was delayed if the preceding auditory cue was on the same side as the target, but was unaffected by the location of the auditory target from the preceding trial, suggesting that response inhibition for the cue may have produced its effects. By contrast, visual detection RT was inhibited by the ipsilateral presentation of a visual target on the preceding trial. In a third experiment, targets could be unpredictably auditory or visual, and no peripheral cues intervened. Both auditory and visual detection RTs were now delayed following an ipsilateral versus contralateral target in either modality on the preceding trial, even when eye position was monitored to ensure central fixation throughout. These data suggest that auditory target-target IOR arises only when target modality is unpredictable. They also provide the first unequivocal evidence for cross-modal IOR, since, unlike other recent studies (e.g., Reuter-Lorenz, Jha, & Rosenquist, 1996; Tassinari & Berlucchi, 1995; Tassinari & Campara, 1996), the present cross-modal effects cannot be explained in terms of response inhibition for the cue. The results are discussed in relation to neurophysiological studies and audiovisual links in saccade programming.

  4. Granularity and the acquisition of grammatical gender: how order-of-acquisition affects what gets learned.

    PubMed

    Arnon, Inbal; Ramscar, Michael

    2012-03-01

    Why do adult language learners typically fail to acquire second languages with native proficiency? Does prior linguistic experience influence the size of the "units" adults attend to in learning, and if so, how does this influence what gets learned? Here, we examine these questions in relation to grammatical gender, which adult learners almost invariably struggle to master. We present a model of learning that predicts that exposure to smaller units (such as nouns) before exposure to larger linguistic units (such as sentences) can critically impair learning about predictive relations between units: such as that between a noun and its article. This prediction is then confirmed by a study of adult participants learning grammatical gender in an artificial language. Adults learned both nouns and their articles better when they first heard the nouns used in context with their articles before hearing the nouns individually, compared with learners who first heard the nouns in isolation before hearing them used in context. In the light of these results, we discuss the role gender appears to play in language, the importance of meaning in artificial grammar learning, and the implications of this work for the structure of L2 training.

  5. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  6. Learning higher-order generalizations through free play: Evidence from 2- and 3-year-old children.

    PubMed

    Sim, Zi L; Xu, Fei

    2017-04-01

    Constructivist views of cognitive development often converge on 2 key points: (a) the child's goal is to build large conceptual structures for understanding the world, and (b) the child plays an active role in developing these structures. While previous research has demonstrated that young children show a precocious capacity for concept and theory building when they are provided with helpful data within training settings, and that they explore their environment in ways that may promote learning, it remains an open question whether young children are able to build larger conceptual structures using self-generated evidence, a form of active learning. In the current study, we examined whether children can learn higher-order generalizations (which form the basis for larger conceptual structures) through free play, and whether they can do so as effectively as when provided with relevant data. Results with 2- and 3-year-old children over 4 experiments indicate robust learning through free play, and generalization performance was comparable between free play and didactic conditions. Therefore, young children's self-directed learning supports the development of higher-order generalizations, laying the foundation for building larger conceptual structures and intuitive theories.

  7. A Bayesian model of biases in artificial language learning: the case of a word-order universal.

    PubMed

    Culbertson, Jennifer; Smolensky, Paul

    2012-01-01

    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners' inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization, Greenberg's Universal 18, which bans a particular word-order pattern relating nouns, adjectives, and numerals.

  8. Promoting Higher Order Thinking Skills via IPTEACES e-Learning Framework in the Learning of Information Systems Units

    ERIC Educational Resources Information Center

    Isaias, Pedro; Issa, Tomayess; Pena, Nuno

    2014-01-01

    When developing and working with various types of devices from a supercomputer to an iPod Mini, it is essential to consider the issues of Human Computer Interaction (HCI) and Usability. Developers and designers must incorporate HCI, Usability and user satisfaction in their design plans to ensure that systems are easy to learn, effective,…

  9. The Effects of Variation on Learning Word Order Rules by Adults with and without Language-Based Learning Disabilities

    ERIC Educational Resources Information Center

    Grunow, Hope; Spaulding, Tammie J.; Gomez, Rebecca L.; Plante, Elena

    2006-01-01

    Non-adjacent dependencies characterize numerous features of English syntax, including certain verb tense structures and subject-verb agreement. This study utilized an artificial language paradigm to examine the contribution of item variability to the learning of these types of dependencies. Adult subjects with and without language-based learning…

  10. Neural correlates of audiovisual speech processing in a second language.

    PubMed

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  11. The Black Record: A Selective Discography of Afro-Americana on Audio Discs Held by the Audio/Visual Department, John M. Olin Library.

    ERIC Educational Resources Information Center

    Dain, Bernice, Comp.; Nevin, David, Comp.

    The present revised and expanded edition of this document is an inclusive cumulation. A few items have been included which are on order as new to the collection or as replacements. This discography is intended to serve primarily as a local user's guide. The call number preceding each entry is based on the Audio-Visual Department's own, unique…

  12. Order of selection in vocational rehabilitation: implications for the transition from school to adult outcomes for youths with learning disabilities.

    PubMed

    Bellini, James; Royce-Davis, Joanna

    1999-01-01

    Interagency cooperation between special education and vocational rehabilitation (VR) is central to ensuring the continuity of services to young adults with disabilities who are in transition from school to adult living. However, the interface between special education and VR may be complicated by order of selection, an equally binding mandate in federal VR policy to provide priority services to individuals with the most severe disabilities. Because students with learning disabilities are typically perceived as having mild rather than severe disabilities, these youths are most at risk for falling through the cracks in the service landscape once they leave the school setting in states where the VR agency is implementing an order of selection procedure. This article identifies and discusses common impediments to collaborative transition planning for students with learning disabilities that may be intensified when the state VR agency is operating under an order of selection plan. Recommendations are provided to facilitate greater interagency cooperation among schools and VR agencies so that transition planning and implementation for students with learning disabilities is not subverted as a result of the order of selection mandate.

  13. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
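    The prediction-error metric named in this abstract, the root mean square error between real and predicted respiratory samples, can be computed with a minimal sketch like the following (the function name and signature are illustrative, not the study's code):

    ```python
    import math

    def rmse(real, predicted):
        """Root mean square error between real and predicted
        respiratory samples of equal length."""
        if len(real) != len(predicted):
            raise ValueError("signals must be the same length")
        return math.sqrt(
            sum((r - p) ** 2 for r, p in zip(real, predicted)) / len(real)
        )
    ```

    A lower RMSE against the guided (AV biofeedback) signal than against the unguided one is what the study reports as improved prediction accuracy.
    
    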

  14. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials, and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentations. Identification accuracy of those words produced by two talkers was also assessed. During pretest, the accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine the effective L2 training method. [Work supported by TAO, Japan.]

  15. Learning to represent spatial transformations with factored higher-order Boltzmann machines.

    PubMed

    Memisevic, Roland; Hinton, Geoffrey E

    2010-06-01

    To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
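    The low-rank factorization described in this abstract, replacing the cubic three-way interaction tensor with a sum of three-way outer products of filter pairs, can be sketched as follows. The shapes, weight scales, and names below are illustrative assumptions, not the authors' implementation; the point is that each factor multiplies the responses of one filter on each image, and the hidden units pool these products.

    ```python
    import numpy as np

    # Illustrative sizes: pixels in images 1 and 2, hidden units, factors.
    rng = np.random.default_rng(0)
    n_x, n_y, n_h, n_f = 16, 16, 8, 32

    Wx = rng.standard_normal((n_x, n_f)) * 0.01   # filters on the first image
    Wy = rng.standard_normal((n_y, n_f)) * 0.01   # filters on the second image
    Wh = rng.standard_normal((n_h, n_f)) * 0.01   # hidden-unit pooling weights

    def hidden_activation(x, y):
        """Hidden-unit probabilities under the factored gated model: each
        factor's response is the product of its two filter responses,
        pooled by the hidden weights and squashed by a sigmoid."""
        fx = x @ Wx                    # (n_f,) image-1 filter responses
        fy = y @ Wy                    # (n_f,) image-2 filter responses
        return 1.0 / (1.0 + np.exp(-(Wh @ (fx * fy))))

    x = rng.standard_normal(n_x)
    y = rng.standard_normal(n_y)
    h = hidden_activation(x, y)
    ```

    This computes the same hidden input as contracting the full tensor W[i, j, k] = sum_f Wx[i, f] * Wy[j, f] * Wh[k, f] against x and y, but with cost linear rather than cubic in the layer sizes, which is what makes learning on larger image patches feasible.
    
    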

  16. Evaluating the influence of the 'unity assumption' on the temporal perception of realistic audiovisual stimuli.

    PubMed

    Vatakis, Argiro; Spence, Charles

    2008-01-01

    Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.

  17. Complementary lower-level and higher-order systems underpin imitation learning.

    PubMed

    Andrew, Matthew; Bennett, Simon J; Elliott, Digby; Hayes, Spencer J

    2016-04-01

    We examined whether the temporal representation developed during motor training with reduced-frequency knowledge of results (KR; feedback available on every other trial) was transferred to an imitation learning task. To this end, four groups first practised a three-segment motor sequence task with different KR protocols. Two experimental groups received reduced-frequency KR, one group received high-frequency KR (feedback available on every trial), and one received no-KR. Compared to the no-KR group, the groups that received KR learned the temporal goal of the movement sequence, as evidenced by increased accuracy and consistency across training. Next, all groups learned a single-segment movement that had the same temporal goal as the motor sequence task but required the imitation of biological and nonbiological motion kinematics. Kinematic data showed that whilst all groups imitated biological motion kinematics, the two experimental reduced-frequency KR groups were on average ∼ 800 ms more accurate at imitating movement time than the high-frequency KR and no-KR groups. The interplay between learning biological motion kinematics and the transfer of temporal representation indicates imitation involves distinct, but complementary lower-level sensorimotor and higher-level cognitive processing systems.

  18. Rhesus Monkeys (Macaca Mulatta) Maintain Learning Set Despite Second-Order Stimulus-Response Spatial Discontiguity

    ERIC Educational Resources Information Center

    Beran, Michael J.; Washburn, David A.; Rumbaugh, Duane M.

    2007-01-01

    In many discrimination-learning tests, spatial separation between stimuli and response loci disrupts performance in rhesus macaques. However, monkeys are unaffected by such stimulus-response spatial discontiguity when responses occur through joystick-based computerized movement of a cursor. To examine this discrepancy, five monkeys were tested on…

  19. Authentic Role-Playing as Situated Learning: Reframing Teacher Education Methodology for Higher-Order Thinking

    ERIC Educational Resources Information Center

    Leaman, Lori Hostetler; Flanagan, Toni Michele

    2013-01-01

    This article draws from situated learning theory, teacher education research, and the authors' collaborative self-study to propose a teacher education pedagogy that may help to bridge the theory-into-practice gap for preservice teachers. First, we review the Interstate Teacher Assessment and Support Consortium standards to confirm the call for…

  20. Audio-Visual Space Reorganization Study. RDU-75-05.

    ERIC Educational Resources Information Center

    Baker, Martha

    Space layout and work flow patterns in the Audiovisual Center at Purdue University were studied with respect to effective space utilization and the need for planning space requirements in relationship to the activities being performed. Space and work areas were reorganized to facilitate the flow of work and materials between areas, and equipment…

  1. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    ERIC Educational Resources Information Center

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  2. Neural Development of Networks for Audiovisual Speech Comprehension

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  3. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  4. Auditory Event-Related Potentials (ERPs) in Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Pilling, Michael

    2009-01-01

    Purpose: It has recently been reported (e.g., V. van Wassenhove, K. W. Grant, & D. Poeppel, 2005) that audiovisual (AV) presented speech is associated with an N1/P2 auditory event-related potential (ERP) response that is lower in peak amplitude compared with the responses associated with auditory only (AO) speech. This effect was replicated.…

  5. Guide to Audiovisual Terminology. Product Information Supplement, Number 6.

    ERIC Educational Resources Information Center

    Trzebiatowski, Gregory, Ed.

    1968-01-01

    The terms appearing in this glossary have been specifically selected for use by educators from a larger text, which was prepared by the Commission on Definition and Terminology of the Department of Audiovisual Instruction of the National Education Association. Specialized areas covered in the glossary include audio reproduction, audiovisual…

  6. Facilitating Personality Change with Audiovisual Self-confrontation and Interviews.

    ERIC Educational Resources Information Center

    Alker, Henry A.; And Others

    Two studies are reported, each of which achieves personality change with both audiovisual self-confrontation (AVSC) and supportive, nondirective interviews. The first study used Ericksonian identity achievement as a dependent variable. Sixty-one male subjects were measured using Anne Constantinople's inventory. The results of this study…

  7. Audiovisual Aids and Techniques in Managerial and Supervisory Training.

    ERIC Educational Resources Information Center

    Rigg, Robinson P.

    An attempt is made to show the importance of modern audiovisual (AV) aids and techniques to management training. The first two chapters give the background to the present situation facing the training specialist. Chapter III considers the AV aids themselves in four main groups: graphic materials, display equipment which involves projection, and…

  8. PRECIS for Subject Access in a National Audiovisual Information System.

    ERIC Educational Resources Information Center

    Bidd, Donald; And Others

    1986-01-01

This overview of PRECIS indexing system use by the National Film Board of Canada covers reasons for its choice, the challenges involved in the subject analysis and indexing of audiovisual documents, the methodology and software used to process PRECIS records, the resulting catalog subject indexes, and user reaction. Twenty-one references are cited. (EJS)

  9. Adaptation to audiovisual asynchrony modulates the speeded detection of sound

    PubMed Central

    Navarra, Jordi; Hartcher-O'Brien, Jessica; Piazza, Elise; Spence, Charles

    2009-01-01

    The brain adapts to asynchronous audiovisual signals by reducing the subjective temporal lag between them. However, it is currently unclear which sensory signal (visual or auditory) shifts toward the other. According to the idea that the auditory system codes temporal information more precisely than the visual system, one should expect to find some temporal shift of vision toward audition (as in the temporal ventriloquism effect) as a result of adaptation to asynchronous audiovisual signals. Given that visual information gives a more exact estimate of the time of occurrence of distal events than auditory information (due to the fact that the time of arrival of visual information regarding an external event is always closer to the time at which this event occurred), the opposite result could also be expected. Here, we demonstrate that participants' speeded reaction times (RTs) to auditory (but, critically, not visual) stimuli are altered following adaptation to asynchronous audiovisual stimuli. After receiving “baseline” exposure to synchrony, participants were exposed either to auditory-lagging asynchrony (VA group) or to auditory-leading asynchrony (AV group). The results revealed that RTs to sounds became progressively faster (in the VA group) or slower (in the AV group) as participants' exposure to asynchrony increased, thus providing empirical evidence that speeded responses to sounds are influenced by exposure to audiovisual asynchrony. PMID:19458252

  10. Audiovisual Integration in Noise by Children and Adults

    ERIC Educational Resources Information Center

    Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G.; Innes-Brown, Hamish; Shivdasani, Mohit N.; Paolini, Antonio G.

    2010-01-01

    The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-,…

  11. Audio-Visual Training in Children with Reading Disabilities

    ERIC Educational Resources Information Center

    Magnan, Annie; Ecalle, Jean

    2006-01-01

This study tested the effectiveness of audio-visual training in the discrimination of the phonetic feature of voicing on the recognition of written words by young children deemed to be at risk of dyslexia (experiment 1) as well as on dyslexic children's phonological skills (experiment 2). In addition, the third experiment studied the effectiveness of…

  12. Summary of Findings and Recommendations on Federal Audiovisual Activities.

    ERIC Educational Resources Information Center

    Lissit, Robert; And Others

At the direction of President Carter, a year-long study of government audiovisual programs was conducted out of the Office of Telecommunications Policy in the Executive Office of the President. The programs of 16 departments and independent agencies, and of the departments of the Army, Navy, and Air Force, were reviewed to identify the scope of…

  13. Design for Safety: The Audiovisual Cart Hazard Revisited.

    ERIC Educational Resources Information Center

    Sherry, Annette C.; Strojny, Allan

    1993-01-01

    Discussion of the design of carts for moving audiovisual equipment in schools emphasizes safety factors. Topics addressed include poor design of top-heavy carts that has led to deaths and injuries; cart navigation; new manufacturing standards; and an alternative, safer cart design. (Contains 13 references.) (LRW)

  14. Producing Slide and Tape Presentations: Readings from "Audiovisual Instruction"--4.

    ERIC Educational Resources Information Center

    Hitchens, Howard, Ed.

    Designed to serve as a reference and source of ideas on the use of slides in combination with audiocassettes for presentation design, this book of readings from Audiovisual Instruction magazine includes three papers providing basic tips on putting together a presentation, five articles describing techniques for improving the visual images, five…

  15. An Audio-Visual Lecture Course in Russian Culture

    ERIC Educational Resources Information Center

    Leighton, Lauren G.

    1977-01-01

An audio-visual course in Russian culture is given at Northern Illinois University. A collection of 4,000-5,000 color slides is the basis for the course, with lectures focused on literature, philosophy, religion, politics, art and crafts. Acquisition, classification, storage and presentation of slides, and organization of lectures are discussed. (CHK)

  16. Audio-Visual Equipment Depreciation. RDU-75-07.

    ERIC Educational Resources Information Center

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  17. Effect of Audiovisual Cancer Programs on Patients and Families.

    ERIC Educational Resources Information Center

    Cassileth, Barrie R.; And Others

    1982-01-01

    Four audiovisual programs about cancer and cancer treatment were evaluated. Cancer patients, their families, and friends were asked to complete questionnaires before and after watching a program to determine the effects of the program on their knowledge of cancer, anxiety levels, and perceived ability to communicate with the staff. (Author/MLW)

  18. Selected Bibliography and Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This guide to resource materials on environmental education is in two sections: 1) Selected Bibliography of Printed Materials, compiled in April, 1970; and, 2) Audio-Visual materials, Films and Filmstrips, compiled in February, 1971. 99 book annotations are given with an indicator of elementary, junior or senior high school levels. Other book…

  19. Authentic Instruction for 21st Century Learning: Higher Order Thinking in an Inclusive School

    ERIC Educational Resources Information Center

    Preus, Betty

    2012-01-01

    The author studied a public junior high school identified as successfully implementing authentic instruction. Such instruction emphasizes higher order thinking, deep knowledge, substantive conversation, and value beyond school. To determine in what ways higher order thinking was fostered both for students with and without disabilities, the author…

  20. Assessment of Higher Order Thinking Skills. Current Perspectives on Cognition, Learning and Instruction

    ERIC Educational Resources Information Center

    Schraw, Gregory, Ed.; Robinson, Daniel H., Ed.

    2011-01-01

    This volume examines the assessment of higher order thinking skills from the perspectives of applied cognitive psychology and measurement theory. The volume considers a variety of higher order thinking skills, including problem solving, critical thinking, argumentation, decision making, creativity, metacognition, and self-regulation. Fourteen…

  1. Higher Order Thinking Skills among Secondary School Students in Science Learning

    ERIC Educational Resources Information Center

    Saido, Gulistan Mohammed; Siraj, Saedah; Bin Nordin, Abu Bakar; Al Amedy, Omed Saadallah

    2015-01-01

    A central goal of science education is to help students to develop their higher order thinking skills to enable them to face the challenges of daily life. Enhancing students' higher order thinking skills is the main goal of the Kurdish Science Curriculum in the Iraqi-Kurdistan region. This study aimed at assessing 7th grade students' higher order…

  2. Neural correlates of audiovisual integration in music reading.

    PubMed

    Nichols, Emily S; Grahn, Jessica A

    2016-10-01

Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity (MMN)) as well as later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration.

  3. Context-specific effects of musical expertise on audiovisual integration.

    PubMed

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well.
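The window measure used in this study, the range of asynchronies most often endorsed as synchronized, can be sketched roughly as follows. The asynchrony levels, response proportions, and the 50% threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Video-lead asynchronies in ms (negative = audio leads) and the proportion
# of trials at each level judged "synchronized" (illustrative values only).
asynchrony = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.60, 0.25, 0.10])

def synchrony_window(asyn, p, threshold=0.5):
    """Range of asynchronies endorsed as synchronized on most trials."""
    inside = asyn[p >= threshold]
    return int(inside.min()), int(inside.max())

lo, hi = synchrony_window(asynchrony, p_sync)
```

With these toy numbers the window is asymmetric (wider on the video-lead side), mirroring the general finding that visual-lead asynchronies are tolerated more readily than audio-lead ones.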

  4. [Ventriloquism and audio-visual integration of voice and face].

    PubMed

    Yokosawa, Kazuhiko; Kanaya, Shoko

    2012-07-01

    Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.

  5. Context-specific effects of musical expertise on audiovisual integration

    PubMed Central

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  6. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

A new algorithm for the reconstruction of electrocardiogram (ECG) signals, and a dictionary learning algorithm for enhancing its reconstruction performance for a class of signals, are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, called the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
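A minimal sketch of the core idea, reconstructing a signal from compressive measurements by penalizing the lp pseudo-norm of its second-order difference, might look like this. The sensing matrix, test signal, and parameter values are illustrative assumptions, and the paper's sequential conjugate-gradient solver is replaced here by SciPy's generic CG minimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 128, 48                                   # signal length, measurements
t = np.arange(n)
x_true = np.sin(2 * np.pi * 3 * t / n)           # smooth stand-in for an ECG segment
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x_true                                 # compressive measurements

# Second-order difference operator (the "2d" in the lp(2d) pseudo-norm).
D2 = np.diff(np.eye(n), n=2, axis=0)

p, eps, lam = 0.5, 1e-3, 1e-2                    # pseudo-norm order, smoothing, weight

def objective(x):
    # Data fidelity plus the eps-smoothed lp pseudo-norm of the
    # second-order difference, which promotes piecewise-smooth solutions.
    r = Phi @ x - y
    d = D2 @ x
    return r @ r + lam * np.sum((d * d + eps) ** (p / 2))

res = minimize(objective, np.zeros(n), method="CG", options={"maxiter": 300})
x_hat = res.x
```

The eps term keeps the non-convex lp penalty differentiable so a gradient-based solver can be applied; the paper's algorithm handles this sequentially with a decreasing smoothing parameter.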

  7. The Max-Min High-Order Dynamic Bayesian Network for Learning Gene Regulatory Networks with Time-Delayed Regulations.

    PubMed

    Li, Yifeng; Chen, Haifen; Zheng, Jie; Ngom, Alioune

    2016-01-01

Accurately reconstructing gene regulatory networks (GRNs) from gene expression data is a challenging task in systems biology. Although some progress has been made, the performance of GRN reconstruction still has much room for improvement. Because many regulatory events are asynchronous, learning gene interactions with multiple time delays is an effective way to improve the accuracy of GRN reconstruction. Here, we propose a new approach, called Max-Min high-order dynamic Bayesian network (MMHO-DBN), by extending the Max-Min hill-climbing Bayesian network technique originally devised for learning a Bayesian network's structure from static data. Our MMHO-DBN can explicitly model the time lags between regulators and targets in an efficient manner. It first uses constraint-based ideas to limit the space of potential structures, and then applies search-and-score ideas to search for an optimal HO-DBN structure. The performance of MMHO-DBN for GRN reconstruction was evaluated using both synthetic and real gene expression time-series data. Results show that MMHO-DBN is more accurate than current time-delayed GRN learning methods, and has intermediate computing performance. Furthermore, it is able to learn long time-delayed relationships between genes. We applied sensitivity analysis to our model to study the performance variation along different parameter settings. The result provides hints on the setting of parameters of MMHO-DBN.
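The central idea of modeling time-delayed regulation, scoring each candidate regulator-target pair at several lags and keeping the best, can be illustrated with a much-simplified correlation-based sketch. The synthetic data and scoring function are assumptions; MMHO-DBN itself scores full network structures with constraint-based pruning, not single pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
T, max_lag = 200, 3
# Synthetic expression series: gene B follows gene A with a 2-step delay.
a = rng.standard_normal(T)
b = np.roll(a, 2) + 0.1 * rng.standard_normal(T)
b[:2] = 0.0                         # discard the wrapped-around samples

def best_lag(reg, tgt, max_lag):
    """Pick the time delay maximizing |corr(reg[t - lag], tgt[t])|."""
    scores = {}
    for lag in range(1, max_lag + 1):
        r = np.corrcoef(reg[:-lag], tgt[lag:])[0, 1]
        scores[lag] = abs(r)
    return max(scores, key=scores.get)

lag = best_lag(a, b, max_lag)
```

In the full method, the lag of each parent is a structural choice evaluated by the network score rather than by pairwise correlation, but the search over delayed parents follows the same pattern.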

  8. Cogging effect minimization in PMSM position servo system using dual high-order periodic adaptive learning compensation.

    PubMed

    Luo, Ying; Chen, Yangquan; Pi, Youguo

    2010-10-01

Cogging effect, which can be treated as a type of position-dependent periodic disturbance, is a serious disadvantage of the permanent magnet synchronous motor (PMSM). In this paper, based on a simulation model of PMSM position servo control, the cogging force, viscous friction, and applied load present in a real PMSM control system are considered. A dual high-order periodic adaptive learning compensation (DHO-PALC) method is proposed to minimize the cogging effect on the PMSM position and velocity servo system. In this DHO-PALC scheme, stored information from more than one previous period of both the composite tracking error and the estimate of the cogging force is used to update the control law. An asymptotic stability proof for the proposed DHO-PALC scheme is presented. Simulation is implemented on the PMSM servo system model to illustrate the proposed method. When a constant speed reference is applied, the DHO-PALC achieves a faster learning convergence speed than the first-order periodic adaptive learning compensation (FO-PALC). Moreover, when the designed reference signal changes periodically, the proposed DHO-PALC obtains not only faster convergence speed but also a much smaller final error bound than the FO-PALC.
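A toy illustration of periodic adaptive learning compensation, updating a stored compensation signal once per period from previous periods' tracking errors, might look like the following. The disturbance shape, learning gain, and period weights are assumptions, and the real scheme additionally handles plant dynamics, friction, and load:

```python
import numpy as np

N, periods = 100, 30
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
cogging = 0.5 * np.sin(4 * t)        # position-dependent periodic disturbance

def run(order):
    # Periodic adaptive learning: the compensation signal is revised once
    # per period from the stored tracking errors of earlier periods.
    u = [np.zeros(N), np.zeros(N)]   # compensation from the two last periods
    gamma, w = 0.5, (0.7, 0.3)       # learning gain and period weights (assumed)
    errs = []
    for _ in range(periods):
        e = cogging - u[-1]          # tracking error left after compensation
        if order == 1:
            new = u[-1] + gamma * e               # first-order update (FO-PALC-like)
        else:
            new = w[0] * u[-1] + w[1] * u[-2] + gamma * e  # blend two past periods
        u = [u[-1], new]
        errs.append(np.abs(e).max())
    return errs

e1, e2 = run(1), run(2)
```

Both update laws drive the peak tracking error toward zero across periods; the relative convergence speeds depend on the chosen gain and weights, which in the paper are designed from the stability analysis.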

  9. The mediodorsal thalamus as a higher order thalamic relay nucleus important for learning and decision-making.

    PubMed

    Mitchell, Anna S

    2015-07-01

Recent evidence from monkey models of cognition shows that the magnocellular subdivision of the mediodorsal thalamus (MDmc) is more critical for learning new information than for retention of previously acquired information. Further, consistent evidence in animal models shows that the mediodorsal thalamus (MD) contributes to adaptive decision-making. It is assumed that the prefrontal cortex (PFC) and medial temporal lobes govern these cognitive processes, so this evidence suggests that the MD plays a role in these cognitive processes too. Anatomically, the MD has extensive excitatory cortico-thalamo-cortical connections, especially with the PFC. The MD also receives modulatory inputs from forebrain, midbrain and brainstem regions. It is suggested that the MD is a higher order thalamic relay of the PFC due to the dual cortico-thalamic inputs from layer V ('driver' inputs capable of transmitting a message) and layer VI ('modulator' inputs) of the PFC. Thus, the MD thalamic relay may support the transfer of information across the PFC via this indirect thalamic route. This review summarizes the current knowledge about the anatomy of the MD as a higher order thalamic relay. It also reviews behavioral and electrophysiological studies in animals to consider how the MD might support the transfer of information across the cortex during learning and decision-making. Current evidence suggests the MD is particularly important during rapid trial-by-trial associative learning and decision-making paradigms that involve multiple cognitive processes. Further studies need to consider the influence of the MD higher order relay to advance our knowledge about how the cortex processes higher order cognition.

  10. Audiovisual physics reports: students' video production as a strategy for the didactic laboratory

    NASA Astrophysics Data System (ADS)

    Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.

    2012-01-01

Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can act as a motivator, making them active and reflective in their learning and intellectually engaged in a recursive process. This project was implemented in high school physics laboratory classes, resulting in 22 videos that are treated as audiovisual reports and analysed along two components: theoretical and experimental. This kind of project allows students to spontaneously use features such as music, pictures, dramatization, animations, etc, even though the didactic laboratory is not generally a place where aesthetic and cultural dimensions are developed. This could be because digital media are more legitimately used as cultural tools than as teaching strategies.

  11. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

Advancement in brain-computer interface (BCI) technology allows people to interact actively with the world through surrogates. Controlling real humanoid robots via BCI as intuitively as we control our own bodies is a challenge for current research in robotics and neuroscience. In order to interact successfully with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may yield a gain relative to a single modality and ultimately improve overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet knowledge about whether audio-visual integration can improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduced the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the BCI user's motor decisions and strengthen the feeling of control over the robot. Our results shed light on the possibility of improving control of a robot by combining multisensory feedback to a BCI user. PMID:24987350

  12. 76 FR 15311 - Legacy Learning Systems, Inc.; Analysis of Proposed Consent Order To Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-21

    ... agreement's proposed order. The practices challenged in this case relate to the advertising of respondents... in articles, blog posts, or other online editorial copy that contained hyperlinks to respondents' Web... respondents, in connection with the advertising of any product or service, from misrepresenting the status...

  13. An Internet Dialogue: Mandatory Student Community Service, Court-Ordered Volunteering, and Service-Learning.

    ERIC Educational Resources Information Center

    Ellis, Susan; And Others

    1998-01-01

    Excerpts from an Internet debate identify issues and opinions on mandatory community service as a graduation requirement and court-ordered volunteering. The debate ranges over such topics as quality of the service experience, freedom of choice, intended outcomes, and values conflicts. (SK)

  14. Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech.

    PubMed

    Alm, Magnus; Behne, Dawn

    2013-10-01

    Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.

  15. Your Most Essential Audiovisual Aid--Yourself!

    ERIC Educational Resources Information Center

    Hamp-Lyons, Elizabeth

    2012-01-01

    Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…

  16. The influence of trial order on learning from reward vs. punishment in a probabilistic categorization task: experimental and computational analyses.

    PubMed

    Moustafa, Ahmed A; Gluck, Mark A; Herzallah, Mohammad M; Myers, Catherine E

    2015-01-01

    Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials; (2) negative feedback for incorrect responses in punishment trials; and (3) no feedback for incorrect answers in reward trials and correct answers in punishment trials. Hence, trials without feedback are ambiguous, and may represent either successful avoidance of punishment or failure to obtain reward. In Experiment 1, the first group of subjects received an intermixed task in which reward and punishment trials were presented in the same block, as a standard baseline task. In Experiment 2, a second group completed the separated task, in which reward and punishment trials were presented in separate blocks. Additionally, in order to understand the mechanisms underlying performance in the experimental conditions, we fit individual data using a Q-learning model. Results from Experiment 1 show that subjects who completed the intermixed task paradoxically valued the no-feedback outcome as a reinforcer when it occurred on punishment-based trials, and as a punisher when it occurred on reward-based trials. This is supported by patterns of empirical responding, where subjects showed more win-stay behavior following an explicit reward than following an omission of punishment, and more lose-shift behavior following an explicit punisher than following an omission of reward. In Experiment 2, results showed similar performance whether subjects received reward-based or punishment-based trials first. However, when the Q-learning model was applied to these data, there were differences between subjects in the reward
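    The delta-rule update at the heart of a Q-learning model fit like the one described above can be sketched as follows. This is a minimal illustration, not the authors' fitting code: the two-stimulus task layout, learning rate, and softmax temperature are assumptions chosen for the example, and the three outcomes (+1, -1, 0) mirror the reward, punishment, and no-feedback cases in the abstract.

    ```python
    import math
    import random

    def softmax(qs, beta, rng):
        """Sample an action index with probability proportional to exp(beta * Q)."""
        exps = [math.exp(beta * q) for q in qs]
        r = rng.random() * sum(exps)
        acc = 0.0
        for i, e in enumerate(exps):
            acc += e
            if r <= acc:
                return i
        return len(exps) - 1

    def simulate(n_trials=200, alpha=0.2, beta=3.0, seed=1):
        """Stimulus 0 is a reward trial (correct -> +1, wrong -> 0 feedback);
        stimulus 1 is a punishment trial (correct -> 0, wrong -> -1).
        One Q-value is kept per (stimulus, action) pair."""
        rng = random.Random(seed)
        q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
        correct = {0: 0, 1: 1}  # arbitrary correct action per stimulus
        for _ in range(n_trials):
            s = rng.choice((0, 1))
            a = softmax([q[(s, 0)], q[(s, 1)]], beta, rng)
            if s == 0:                            # reward trial
                r = 1.0 if a == correct[s] else 0.0
            else:                                 # punishment trial
                r = 0.0 if a == correct[s] else -1.0
            q[(s, a)] += alpha * (r - q[(s, a)])  # delta-rule update
        return q
    ```

    After a few hundred trials the Q-value of the rewarded action rises toward +1 and that of the punished action falls below zero, so the model comes to prefer the correct response on both trial types; model fitting in the study works in the reverse direction, choosing alpha and beta so that the simulated choices match each subject's actual responses.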

  17. Interlibrary loan of audiovisual materials in the health sciences: how a system operates in New Jersey.

    PubMed

    Crowley, C M

    1976-10-01

    An audiovisual loan program developed by the library of the College of Medicine and Dentistry of New Jersey is described. This program, supported by an NLM grant, has circulated audiovisual software from CMDNJ to libraries since 1974. Project experiences and statistics reflect the great demand for audiovisuals by health science libraries and demonstrate that a borrowing system following the pattern of traditional interlibrary loan can operate effectively and efficiently to serve these needs.

  18. Simulation of Parkinsonian gait by fusing trunk learned patterns and a lower limb first order model

    NASA Astrophysics Data System (ADS)

    Cárdenas, Luisa; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Parkinson's disease is a neurodegenerative disorder that progressively affects movement. Gait analysis is therefore crucial for determining the degree of the disease as well as for orienting the diagnosis. However, gait examination is completely subjective and therefore prone to errors or misinterpretations, even with great expertise. In addition, the conventional evaluation follows up general gait variables, which amounts to ignoring subtle changes that can definitely alter the course of treatment. This work presents a functional gait model that simulates the center of gravity (CoG) trajectory for different Parkinson disease stages. This model mimics the gait trajectory by coupling two models: a double pendulum (single stance phase) and a spring-mass model (double stance). Realistic simulations for different Parkinson disease stages are then obtained by integrating into the model a set of trunk bending patterns, learned from real patients. The proposed model was compared with the CoG of real Parkinson gaits in stages 2, 3, and 4, achieving correlation coefficients of 0.88, 0.92, and 0.86, respectively.
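    The agreement figures reported above are correlation coefficients between simulated and measured CoG trajectories. As a reminder of what that statistic computes, here is a generic Pearson r sketch; the paper's own evaluation pipeline is not published here, so this is only an assumed illustration of the metric.

    ```python
    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5
    ```

    A value near 1, as in the stages reported above, means the simulated trajectory rises and falls in near-perfect linear step with the measured one.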

  19. Neural development of networks for audiovisual speech comprehension.

    PubMed

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L

    2010-08-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the neurobiological substrate in the child compares to the adult is unknown. In particular, developmental differences in the network for audiovisual speech comprehension could manifest through the incorporation of additional brain regions, or through different patterns of effective connectivity. In the present study we used functional magnetic resonance imaging and structural equation modeling (SEM) to characterize the developmental changes in network interactions for audiovisual speech comprehension. The brain response was recorded while children 8- to 11-years-old and adults passively listened to stories under audiovisual (AV) and auditory-only (A) conditions. Results showed that in children and adults, AV comprehension activated the same fronto-temporo-parietal network of regions known for their contribution to speech production and perception. However, the SEM network analysis revealed age-related differences in the functional interactions among these regions. In particular, the influence of the posterior inferior frontal gyrus/ventral premotor cortex on supramarginal gyrus differed across age groups during AV, but not A speech. This functional pathway might be important for relating motor and sensory information used by the listener to identify speech sounds. Further, its development might reflect changes in the mechanisms that relate visual speech information to articulatory speech representations through experience producing and perceiving speech.

  20. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or...-wide, clear captioning standards, procedures, and responsibilities. (e) Maintain current and...

  1. Psychometric testing of the Pecka Grading Rubric for evaluating higher-order thinking in distance learning.

    PubMed

    Pecka, Shannon; Schmid, Kendra; Pozehl, Bunny

    2014-12-01

    This article describes development of the Pecka Grading Rubric (PGR) as a strategy to facilitate and evaluate students' higher-order thinking in discussion boards. The purpose of this study was to describe psychometric properties of the PGR. Rubric reliability was pilot tested on a discussion board assignment used by 15 senior student registered nurse anesthetists enrolled in an Advanced Principles of Anesthesia course. Interrater and intrarater reliabilities were tested using an intraclass correlation coefficient (ICC) to evaluate absolute agreement of scoring. Raters gave each category a score, scores of the categories were summed, and a total score was calculated for the entire rubric. Interrater (ICC = 0.939, P < .001) and intrarater (ICC = 0.902 to 0.994, P < .001) reliabilities were excellent for total point scores. A content validity index was used to evaluate content validity. Raters evaluated content validity of each cell of the PGR. The content validity index (0.8-1.0) was acceptable. Known-group validity was evaluated by comparing graduate student registered nurse anesthetists (N = 7) with undergraduate senior nursing students (N = 13). Beginning evidence indicates a valid and reliable instrument that measures higher-order thinking in the student registered nurse anesthetist.

  2. Primary and multisensory cortical activity is correlated with audiovisual percepts.

    PubMed

    Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven

    2010-04-01

    Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion.

  3. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  4. Audiovisual Delay as a Novel Cue to Visual Distance

    PubMed Central

    Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R.; Tadin, Duje

    2015-01-01

    For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance. PMID:26509795
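    The physical regularity underpinning this candidate distance cue is simple: light arrives effectively instantly at everyday scales, while sound travels at roughly 343 m/s, so the audio lag grows linearly with event distance. A minimal sketch of that relationship (the function name and the 20 °C speed-of-sound constant are illustrative choices, not taken from the paper):

    ```python
    SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in dry air at ~20 degrees C

    def audio_delay_ms(distance_m):
        """Approximate lag of a sound behind its visual event, in milliseconds.

        Light's travel time (about 3 ns per metre) is negligible here, so the
        audiovisual delay is effectively the acoustic travel time alone.
        """
        return distance_m / SPEED_OF_SOUND_M_S * 1000.0
    ```

    For example, an event 34.3 m away lags by about 100 ms; an event 343 m away lags by a full second. The hypothesis tested in the study is that observers can exploit this ordinal relationship even when the delay itself is not consciously detectable.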

  5. Event Related Potentials Index Rapid Recalibration to Audiovisual Temporal Asynchrony

    PubMed Central

    Simon, David M.; Noel, Jean-Paul; Wallace, Mark T.

    2017-01-01

    Asynchronous arrival of multisensory information at the periphery is a ubiquitous property of signals in the natural environment due to differences in the propagation time of light and sound. Rapid adaptation to these asynchronies is crucial for the appropriate integration of these multisensory signals, which in turn is a fundamental neurobiological process in creating a coherent perceptual representation of our dynamic world. Indeed, multisensory temporal recalibration has been shown to occur at the single trial level, yet the mechanistic basis of this rapid adaptation is unknown. Here, we investigated the neural basis of rapid recalibration to audiovisual temporal asynchrony in human participants using a combination of psychophysics and electroencephalography (EEG). Consistent with previous reports, participants’ perception of audiovisual temporal synchrony on a given trial (t) was influenced by the temporal structure of stimuli on the previous trial (t−1). When examined physiologically, event related potentials (ERPs) were found to be modulated by the temporal structure of the previous trial, manifesting as late differences (>125 ms post second-stimulus onset) in central and parietal positivity on trials with large stimulus onset asynchronies (SOAs). These findings indicate that single trial adaptation to audiovisual temporal asynchrony is reflected in modulations of late evoked components that have previously been linked to stimulus evaluation and decision-making. PMID:28381993

  6. Audiovisual integration for speech during mid-childhood: electrophysiological evidence.

    PubMed

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-12-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception.

  7. Audiovisual integration of emotional signals from others' social interactions

    PubMed Central

    Piwek, Lukasz; Pollick, Frank; Petrini, Karin

    2015-01-01

    Audiovisual perception of emotions has been typically examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask if the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task, as in Experiment 1, while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity. PMID:26005430

  8. Audiovisual communication and therapeutic jurisprudence: Cognitive and social psychological dimensions.

    PubMed

    Feigenson, Neal

    2010-01-01

    The effects of audiovisual communications on the emotional and psychological well-being of participants in the legal system have not been previously examined. Using as a framework for analysis what Slobogin (1996) calls internal balancing (of therapeutic versus antitherapeutic effects) and external balancing (of therapeutic jurisprudence [TJ] effects versus effects on other legal values), this brief paper discusses three examples that suggest the complexity of evaluating courtroom audiovisuals in TJ terms. In each instance, audiovisual displays that are admissible based on their arguable probative or explanatory value - day-in-the-life movies, victim impact videos, and computer simulations of litigated events - might well reduce stress and thus improve the psychological well-being of personal injury plaintiffs, survivors, and jurors, respectively. In each situation, however, other emotional and cognitive effects may prove antitherapeutic for the target or other participants, and/or may undermine other important values including outcome accuracy, fairness, and even the conception of the legal decision maker as a moral actor.

  9. Informatics in radiology: evaluation of an e-learning platform for teaching medical students competency in ordering radiologic examinations.

    PubMed

    Marshall, Nina L; Spooner, Muirne; Galvin, P Leo; Ti, Joanna P; McElvaney, N Gerald; Lee, Michael J

    2011-01-01

    A preliminary audit of orders for computed tomography was performed to evaluate the typical performance of interns ordering radiologic examinations. According to the audit, the interns showed only minimal improvement after 8 months of work experience. The online radiology ordering module (ROM) program included baseline assessment of student performance (part I), online learning with the ROM (part II), and follow-up assessment of performance with simulated ordering with the ROM (part III). A curriculum blueprint determined the content of the ROM program, with an emphasis on practical issues, including provision of logistic information, clinical details, and safety-related information. Appropriate standards were developed by a committee of experts, and detailed scoring systems were devised for assessment. The ROM program was successful in addressing practical issues in a simulated setting. In the part I assessment, the mean score for noting contraindications for contrast media was 24%; this score increased to 59% in the part III assessment (P = .004). Similarly, notification of methicillin-resistant Staphylococcus aureus status and pregnancy status and provision of referring physician contact information improved significantly. The quality of the clinical notes was stable, with good initial scores. Part III testing showed overall improvement, with the mean score increasing from 61% to 76% (P < .0001). In general, medical students lack the core knowledge that is needed for good-quality ordering of radiology services, and the experience typically afforded to interns does not address this lack of knowledge. The ROM program was a successful intervention that resulted in statistically significant improvements in the quality of radiologic examination orders, particularly with regard to logistic and radiation safety issues.

  10. Creation and validation of web-based food allergy audiovisual educational materials for caregivers.

    PubMed

    Rosen, Jamie; Albin, Stephanie; Sicherer, Scott H

    2014-01-01

    Studies reveal deficits in caregivers' ability to prevent and treat food-allergic reactions with epinephrine and a consumer preference for validated educational materials in audiovisual formats. This study was designed to create brief, validated educational videos on food allergen avoidance and emergency management of anaphylaxis for caregivers of children with food allergy. The study used a stepwise iterative process including creation of a needs assessment survey consisting of 25 queries administered to caregivers and food allergy experts to identify curriculum content. Preliminary videos were drafted, reviewed, and revised based on knowledge and satisfaction surveys given to another cohort of caregivers and health care professionals. The final materials were tested for validation of their educational impact and user satisfaction using pre- and postknowledge tests and satisfaction surveys administered to a convenience sample of 50 caretakers who had not participated in the development stages. The needs assessment identified topics of importance including treatment of allergic reactions and food allergen avoidance. Caregivers in the final validation included mothers (76%), fathers (22%), and other caregivers (2%). Race/ethnicity was white (66%), black (12%), Asian (12%), Hispanic (8%), and other (2%). Knowledge tests (maximum score = 18) increased from a mean score of 12.4 preprogram to 16.7 postprogram (p < 0.0001). On a 7-point Likert scale, all satisfaction categories remained above a favorable mean score of 6, indicating participants were overall very satisfied, learned a lot, and found the materials to be informative, straightforward, helpful, and interesting. This web-based audiovisual curriculum on food allergy improved knowledge scores and was well received.

  11. Active Drumming Experience Increases Infants’ Sensitivity to Audiovisual Synchrony during Observed Drumming Actions

    PubMed Central

    Gerson, Sarah A.; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition. PMID:26111226

  12. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    PubMed

    Gerson, Sarah A; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  13. Employing Transformative Learning Theory in the Design and Implementation of a Curriculum for Court-Ordered Participants in a Parent Education Class

    ERIC Educational Resources Information Center

    Taylor, Mariann B.; Hill, Lilian H.

    2016-01-01

    This study sought to analyze the experiences of participants in court-ordered parent education with the ultimate goal to identify a framework, which promotes learning that is transformative. Participants included 11 parents court ordered to attend parent education classes through the Department of Human Services. A basic qualitative design, which…

  14. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  15. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    ERIC Educational Resources Information Center

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  16. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  17. A-V Buyer's Guide. A User's Look At the Audio-Visual World.

    ERIC Educational Resources Information Center

    Laird, Dugan

    To enable a potential user to select the proper medium for his message, buy the appropriate equipment, and take proper care of it, this guide provides an accumulation of questions, answers, and tips on getting the most out of audiovisual buying decisions. Common questions about audiovisual equipment are discussed, along with more detailed…

  18. Audiovisual Between-Channel Redundancy and Its Effects upon Immediate Recall and Short-Term Memory.

    ERIC Educational Resources Information Center

    Hsia, H. J.

    In an attempt to ascertain the facilitating functions of audiovisual between-channel redundancy in information processing, a series of audiovisual experiments alternating auditory and visual as the dominant and redundant channels were conducted. As predicted, results generally supported the between-channel redundancy when input (stimulus) was…

  19. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    ERIC Educational Resources Information Center

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  20. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    ERIC Educational Resources Information Center

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  1. A Cross-Over Experimental Design for Testing Audiovisual Training Materials.

    ERIC Educational Resources Information Center

    Stolovitch, Harold D.; Bordeleau, Pierre

    This paper contains a description of the cross-over type of experimental design as well as a case study of its use in field testing audiovisual materials related to teaching handicapped children. Increased efficiency is an advantage of the cross-over design, while difficulty in selecting similar format audiovisual materials for field testing is a…

  2. How Children and Adults Produce and Perceive Uncertainty in Audiovisual Speech

    ERIC Educational Resources Information Center

    Krahmer, Emiel; Swerts, Marc

    2005-01-01

    We describe two experiments on signaling and detecting uncertainty in audiovisual speech by adults and children. In the first study, utterances from adult speakers and child speakers (aged 7-8) were elicited and annotated with a set of six audiovisual features. It was found that when adult speakers were uncertain they were more likely to produce…

  3. THE IROQUOIS, A BIBLIOGRAPHY OF AUDIO-VISUAL MATERIALS--WITH SUPPLEMENT. (TITLE SUPPLIED).

    ERIC Educational Resources Information Center

    KELLERHOUSE, KENNETH; AND OTHERS

    APPROXIMATELY 25 SOURCES OF AUDIOVISUAL MATERIALS PERTAINING TO THE IROQUOIS AND OTHER NORTHEASTERN AMERICAN INDIAN TRIBES ARE LISTED ACCORDING TO TYPE OF AUDIOVISUAL MEDIUM. AMONG THE LESS-COMMON MEDIA ARE RECORDINGS OF IROQUOIS MUSIC AND DO-IT-YOURSELF REPRODUCTIONS OF IROQUOIS ARTIFACTS. PRICES ARE GIVEN WHERE APPLICABLE. (BR)

  4. Importance of Audiovisual Instruction in the Associateship Diploma in Education in Nigerian Universities.

    ERIC Educational Resources Information Center

    Agun, Ibitayo

    1976-01-01

    The Associateship Diploma in Education is seen as an important means of promoting and encouraging the use of audiovisual aids in Nigerian primary schools. Objectives of audiovisual instruction, a course outline, and procedures for teaching the course are suggested, and use of aids in primary schools is surveyed. (Author/MLW)

  5. Planning Schools for Use of Audio-Visual Materials. No. 1--Classrooms, 3rd Edition.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

    Intended to inform school board administrators and teachers of the current (1958) thinking on audio-visual instruction for use in planning new buildings, purchasing equipment, and planning instruction. Attention is given the problem of overcoming obstacles to the incorporation of audio-visual materials into the curriculum. Discussion includes--(1)…

  6. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or video media, from accidental or deliberate alteration or erasure. (c) If different versions of audiovisual productions (e.g., short and long versions or foreign-language versions) are prepared, keep...

  7. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or video media, from accidental or deliberate alteration or erasure. (c) If different versions of audiovisual productions (e.g., short and long versions or foreign-language versions) are prepared, keep...

  8. Audio-Visual Techniques for Industry. Development and Transfer of Technology Series No. 6.

    ERIC Educational Resources Information Center

    Halas, John; Martin-Harris, Roy

    Intended for use by persons in developing countries responsible for initiating or expanding the use of audiovisual facilities and techniques in industry, this manual is designed for those who have limited background in audiovisuals but need detailed information about how certain techniques may be employed in an economical, efficient way. Part one,…

  9. The Current Status of Audiovisual Definitions and Terminology: An International Perspective.

    ERIC Educational Resources Information Center

    Ely, Donald P.

    Because no published glossary of audiovisual terms has yet gained international currency, there is a need to: (1) explore international acceptance of a list of audiovisual terms and definitions; (2) review current efforts to do so; (3) propose criteria for acceptable terms and definitions; and (4) recommend procedures for acceptance of…

  10. A Team Approach to Developing an Audiovisual Single-Concept Instructional Unit.

    ERIC Educational Resources Information Center

    Brooke, Martha L.; And Others

    1974-01-01

    In 1973, the National Medical Audiovisual Center undertook the production of several audiovisual teaching units, each addressing a single concept, using a team approach. The production team for the unit "Left Ventricle Catheterization" included a physiologist acting as content specialist, an artist and film producer as production specialist,…

  11. An Analysis of Audiovisual Machines for Individual Program Presentation. Research Memorandum Number Two.

    ERIC Educational Resources Information Center

    Finn, James D.; Weintraub, Royd

    The Medical Information Project (MIP), whose purpose was to select the right type of audiovisual equipment for communicating new medical information to general practitioners of medicine, was hampered by numerous difficulties. There is a lack of uniformity and standardization in audiovisual equipment that amounts to chaos. There is no evaluative literature on…

  12. The level of audiovisual print-speech integration deficits in dyslexia.

    PubMed

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  13. Emergent Patterns of Teaching/Learning in Electronic Classrooms.

    ERIC Educational Resources Information Center

    Shneiderman, Ben; Borkowski, Ellen Yu; Alavi, Maryam; Norman, Kent

    1998-01-01

    Describes the development and use of electronic classrooms at the University of Maryland College Park. Highlights include active individual learning; small group collaborative learning; class collaborative learning; classroom infrastructure; audio-visual support; courseware; empirical assessments; results of faculty surveys; and student feedback.…

  14. Heart House: Where Doctors Learn

    ERIC Educational Resources Information Center

    American School and University, 1978

    1978-01-01

    The new learning center and administrative headquarters of the American College of Cardiology in Bethesda, Maryland, contain a unique classroom equipped with the highly sophisticated audiovisual aids developed to teach the latest techniques in the diagnosis and treatment of heart disease. (Author/MLF)

  15. Musical expertise is related to altered functional connectivity during audiovisual integration

    PubMed Central

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo

    2015-01-01

    The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness, supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual cues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305

  16. Mobile Guide System Using Problem-Solving Strategy for Museum Learning: A Sequential Learning Behavioural Pattern Analysis

    ERIC Educational Resources Information Center

    Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.

    2010-01-01

    Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…

  17. Primary and Multisensory Cortical Activity is Correlated with Audiovisual Percepts

    PubMed Central

    Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P.; Stufflebeam, Steven

    2012-01-01

    Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. PMID:19780040

  18. Visual Mislocalization of Moving Objects in an Audiovisual Event

    PubMed Central

    Kawachi, Yousuke

    2016-01-01

    The present study investigated the influence of an auditory tone on the localization of visual objects in the stream/bounce display (SBD). In this display, two identical visual objects move toward each other, overlap, and then return to their original positions. These objects can be perceived as either streaming through or bouncing off each other. In this study, the closest distance between object centers on opposing trajectories and tone presentation timing (none, 0 ms, ± 90 ms, and ± 390 ms relative to the instant for the closest distance) were manipulated. Observers were asked to judge whether the two objects overlapped with each other and whether the objects appeared to stream through, bounce off each other, or reverse their direction of motion. A tone presented at or around the instant of the objects’ closest distance biased judgments toward “non-overlapping,” and observers overestimated the physical distance between objects. A similar bias toward direction change judgments (bounce and reverse, not stream judgments) was also observed, which was always stronger than the non-overlapping bias. Thus, these two types of judgments were not always identical. Moreover, another experiment showed that it was unlikely that this observed mislocalization could be explained by other previously known mislocalization phenomena (i.e., representational momentum, the Fröhlich effect, and a turn-point shift). These findings indicate a new example of crossmodal mislocalization, which can be obtained without temporal offsets between audiovisual stimuli. The mislocalization effect is also specific to a more complex stimulus configuration of objects on opposing trajectories, with a tone that is presented simultaneously. The present study promotes an understanding of relatively complex audiovisual interactions beyond simple one-to-one audiovisual stimuli used in previous studies. PMID:27111759

  19. "Singing in the Tube"--audiovisual assay of plant oil repellent activity against mosquitoes (Culex pipiens).

    PubMed

    Adams, Temitope F; Wongchai, Chatchawal; Chaidee, Anchalee; Pfeiffer, Wolfgang

    2016-01-01

    Plant essential oils have been suggested as a promising alternative to the established mosquito repellent DEET (N,N-diethyl-meta-toluamide). Searching for an assay with generally available equipment, we designed a new audiovisual assay of repellent activity against mosquitoes "Singing in the Tube," testing single mosquitoes in Drosophila cultivation tubes. Statistics with regression analysis should compensate for limitations of simple hardware. The assay was established with female Culex pipiens mosquitoes in 60 experiments, 120-h audio recording, and 2580 estimations of the distance between mosquito sitting position and the chemical. Correlations between parameters of sitting position, flight activity pattern, and flight tone spectrum were analyzed. Regression analysis of psycho-acoustic data of audio files (dB[A]) used a squared and modified sinus function determining wing beat frequency WBF ± SD (357 ± 47 Hz). Application of logistic regression defined the repelling velocity constant. The repelling velocity constant showed a decreasing order of efficiency of plant essential oils: rosemary (Rosmarinus officinalis), eucalyptus (Eucalyptus globulus), lavender (Lavandula angustifolia), citronella (Cymbopogon nardus), tea tree (Melaleuca alternifolia), clove (Syzygium aromaticum), lemon (Citrus limon), patchouli (Pogostemon cablin), DEET, cedar wood (Cedrus atlantica). In conclusion, we suggest (1) disease vector control (e.g., impregnation of bed nets) by eight plant essential oils with repelling velocity superior to DEET, (2) simple mosquito repellency testing in Drosophila cultivation tubes, (3) automated approaches and room surveillance by generally available audio equipment (dB[A]: ISO standard 226), and (4) quantification of repellent activity by parameters of the audiovisual assay defined by correlation and regression analyses.
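    The assay's wing-beat frequency estimate (reported as 357 ± 47 Hz) can be approximated from an audio trace by locating the dominant spectral peak. A minimal numpy sketch with a synthetic flight tone; the parameters here are illustrative only and this is not the authors' sinus-regression pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the largest spectral peak, DC excluded."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin

# Synthetic flight tone: a 357 Hz wing beat plus background noise
rate = 8000
t = np.arange(0, 1.0, 1.0 / rate)
tone = np.sin(2 * np.pi * 357.0 * t) + 0.1 * rng.standard_normal(len(t))
wbf = dominant_frequency(tone, rate)  # ~357 Hz
```

With a 1-second window the FFT bins are 1 Hz apart, so the peak lands on the 357 Hz bin despite the added noise.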

  1. Transfer of short-term motor learning across the lower limbs as a function of task conception and practice order.

    PubMed

    Stöckel, Tino; Wang, Jinsung

    2011-11-01

    Interlimb transfer of motor learning, indicating an improvement in performance with one limb following training with the other, often occurs asymmetrically (i.e., from non-dominant to dominant limb or vice versa, but not both). In the present study, we examined whether interlimb transfer of the same motor task could occur asymmetrically and in opposite directions (i.e., from right to left leg vs. left to right leg) depending on individuals' conception of the task. Two experimental conditions were tested: In a dynamic control condition, the process of learning was facilitated by providing the subjects with a type of information that forced them to focus on dynamic features of a given task (force impulse); and in a spatial control condition, it was done with another type of information that forced them to focus on visuomotor features of the same task (distance). Both conditions employed the same leg extension task. In addition, a fully-crossed transfer paradigm was used in which one group of subjects initially practiced with the right leg and were tested with the left leg for a transfer test, while the other group used the two legs in the opposite order. The results showed that the direction of interlimb transfer varied depending on the condition, such that the right and the left leg benefited from initial training with the opposite leg only in the spatial and the dynamic condition, respectively. Our finding suggests that manipulating the conception of a leg extension task has a substantial influence on the pattern of interlimb transfer in such a way that the direction of transfer can even be opposite depending on whether the task is conceived as a dynamic or spatial control task.

  2. Neural substrate for higher-order learning in an insect: Mushroom bodies are necessary for configural discriminations.

    PubMed

    Devaud, Jean-Marc; Papouin, Thomas; Carcaud, Julie; Sandoz, Jean-Christophe; Grünewald, Bernd; Giurfa, Martin

    2015-10-27

    Learning theories distinguish elemental from configural learning based on their different complexity. Although the former relies on simple and unambiguous links between the learned events, the latter deals with ambiguous discriminations in which conjunctive representations of events are learned as being different from their elements. In mammals, configural learning is mediated by brain areas that are either dispensable or partially involved in elemental learning. We studied whether the insect brain follows the same principles and addressed this question in the honey bee, the only insect in which configural learning has been demonstrated. We used a combination of conditioning protocols, disruption of neural activity, and optophysiological recording of olfactory circuits in the bee brain to determine whether mushroom bodies (MBs), brain structures that are essential for memory storage and retrieval, are equally necessary for configural and elemental olfactory learning. We show that bees with anesthetized MBs distinguish odors and learn elemental olfactory discriminations but not configural ones, such as positive and negative patterning. Inhibition of GABAergic signaling in the MB calyces, but not in the lobes, impairs patterning discrimination, thus suggesting a requirement of GABAergic feedback neurons from the lobes to the calyces for nonelemental learning. These results uncover a previously unidentified role for MBs besides memory storage and retrieval: namely, their implication in the acquisition of ambiguous discrimination problems. Thus, in insects as in mammals, specific brain regions are recruited when the ambiguity of learning tasks increases, a fact that reveals similarities in the neural processes underlying the elucidation of ambiguous tasks across species.

  3. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    PubMed Central

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756
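    The PSS and TOJ sensitivity are conventionally read off a cumulative-Gaussian fit to the proportion of "visual first" responses across stimulus onset asynchronies. A hedged sketch with synthetic data and a simple grid-search fit (not the study's fitting procedure):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Cumulative Gaussian: probability of a 'visual first' response at SOA x."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def fit_toj(soas, p_visual_first):
    """Grid-search a cumulative Gaussian; returns (PSS, sigma)."""
    best = (None, None, np.inf)
    for mu in np.linspace(min(soas), max(soas), 201):
        for sigma in np.linspace(5.0, 200.0, 100):
            pred = [norm_cdf(s, mu, sigma) for s in soas]
            err = sum((p - q) ** 2 for p, q in zip(p_visual_first, pred))
            if err < best[2]:
                best = (mu, sigma, err)
    return best[0], best[1]

# Synthetic observer: true PSS = 40 ms visual lead, sigma = 60 ms
soas = np.arange(-200, 201, 50)            # negative = auditory first
probs = [norm_cdf(s, 40.0, 60.0) for s in soas]
pss, sigma = fit_toj(soas, probs)
```

The fitted PSS is the SOA at which "visual first" responses reach 50%; a positive value here corresponds to the visual lead the abstract describes.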

  4. Depth Cues and Perceived Audiovisual Synchrony of Biological Motion

    PubMed Central

    Silva, Carlos César; Mendonça, Catarina; Mouta, Sandra; Silva, Rosa; Campos, José Creissac; Santos, Jorge

    2013-01-01

    Background: Due to their different propagation times, visual and auditory signals from external events arrive at the human sensory receptors with a disparate delay. This delay consistently varies with distance, but, despite such variability, most events are perceived as synchronic. There is, however, contradictory data and claims regarding the existence of compensatory mechanisms for distance in simultaneity judgments. Principal Findings: In this paper we have used familiar audiovisual events – a visual walker and footstep sounds – and manipulated the number of depth cues. In a simultaneity judgment task we presented a large range of stimulus onset asynchronies corresponding to distances of up to 35 meters. We found an effect of distance over the simultaneity estimates, with greater distances requiring larger stimulus onset asynchronies, and vision always leading. This effect was stronger when both visual and auditory cues were present but was interestingly not found when depth cues were impoverished. Significance: These findings reveal that there should be an internal mechanism to compensate for audiovisual delays, which critically depends on the depth information available. PMID:24244617
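    The physical asynchrony grows with distance because sound travels at roughly 343 m/s while light arrives effectively instantaneously; at the study's maximum of 35 meters the auditory lag is on the order of 100 ms. A quick check, with the speed of sound as an assumed constant:

```python
SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 degrees C (assumed)

def auditory_lag_ms(distance_m):
    """Delay of the sound relative to the light, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

lag = auditory_lag_ms(35.0)  # ~102 ms at the largest distance tested
```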

  5. Head Tracking of Auditory, Visual, and Audio-Visual Targets

    PubMed Central

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2016-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual “bisensory” stimuli. Three metrics were measured—onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets. PMID:26778952
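    Two of the tracking metrics can be computed directly from sampled target and head trajectories. A minimal numpy sketch with hypothetical definitions (the paper's exact formulations may differ):

```python
import numpy as np

def rms_error(target, response):
    """Root-mean-square deviation between trajectories (degrees)."""
    target, response = np.asarray(target), np.asarray(response)
    return float(np.sqrt(np.mean((target - response) ** 2)))

def gain(target, response):
    """Ratio of mean response velocity to mean target velocity (1.0 = perfect)."""
    return float(np.mean(np.diff(response)) / np.mean(np.diff(target)))

# Toy trajectories: target sweeps 0-100 deg; the head lags and undershoots
t = np.linspace(0.0, 100.0, 101)
head = 0.9 * t - 2.0
rms = rms_error(t, head)
g = gain(t, head)  # 0.9: the head covers 90% of the target's sweep
```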

  6. Lexical and context effects in children's audiovisual speech recognition

    NASA Astrophysics Data System (ADS)

    Holt, Rachael; Kirk, Karen; Pisoni, David; Burckhartzmeyer, Lisa; Lin, Anna

    2005-09-01

    The Audiovisual Lexical Neighborhood Sentence Test (AVLNST), a new, recorded speech recognition test for children with sensory aids, was administered in multiple presentation modalities to children with normal hearing and vision. Each sentence consists of three key words whose lexical difficulty is controlled according to the Neighborhood Activation Model (NAM) of spoken word recognition. According to NAM, the recognition of spoken words is influenced by two lexical factors: the frequency of occurrence of individual words in a language, and how phonemically similar the target word is to other words in the listener's lexicon. These predictions are based on auditory similarity only, and thus do not take into account how visual information can influence the perception of speech. Data from the AVLNST, together with those from recorded audiovisual versions of isolated word recognition measures, the Lexical Neighborhood, and the Multisyllabic Lexical Neighborhood Tests, were used to examine the influence of visual information on speech perception in children. Further, the influence of top-down processing on speech recognition was examined by evaluating performance on the recognition of words in isolation versus words in sentences. [Work supported by the American Speech-Language-Hearing Foundation, the American Hearing Research Foundation, and the NIDCD, T32 DC00012 to Indiana University.]

  7. Video genre categorization and representation using audio-visual information

    NASA Astrophysics Data System (ADS)

    Ionescu, Bogdan; Seyerlehner, Klaus; Rasche, Christoph; Vertan, Constantin; Lambert, Patrick

    2012-04-01

    We propose an audio-visual approach to video genre classification using content descriptors that exploit audio, color, temporal, and contour information. Audio information is extracted at block-level, which has the advantage of capturing local temporal information. At the temporal structure level, we consider action content in relation to human perception. Color perception is quantified using statistics of color distribution, elementary hues, color properties, and relationships between colors. Further, we compute statistics of contour geometry and relationships. The main contribution of our work lies in harnessing the descriptive power of the combination of these descriptors in genre classification. Validation was carried out on over 91 h of video footage encompassing 7 common video genres, yielding average precision and recall ratios of 87% to 100% and 77% to 100%, respectively, and an overall average correct classification of up to 97%. Also, experimental comparison as part of the MediaEval 2011 benchmarking campaign demonstrated the efficiency of the proposed audio-visual descriptors over other existing approaches. Finally, we discuss a 3-D video browsing platform that displays movies using feature-based coordinates and thus regroups them according to genre.
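    Classification performance above is reported as per-genre precision and recall; as a reminder of the definitions, a small sketch with hypothetical confusion counts (not the paper's data):

```python
def precision_recall(tp, fp, fn):
    """Per-genre precision and recall from confusion counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for a single genre
p, r = precision_recall(tp=87, fp=13, fn=23)  # p = 0.87, r ~= 0.79
```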

  8. Predictability affects the perception of audiovisual synchrony in complex sequences.

    PubMed

    Cook, Laura A; Van Valkenburg, David L; Badcock, David R

    2011-10-01

    The ability to make accurate audiovisual synchrony judgments is affected by the "complexity" of the stimuli: We are much better at making judgments when matching single beeps or flashes as opposed to video recordings of speech or music. In the present study, we investigated whether the predictability of sequences affects whether participants report that auditory and visual sequences appear to be temporally coincident. When we reduced their ability to predict both the next pitch in the sequence and the temporal pattern, we found that participants were increasingly likely to report that the audiovisual sequences were synchronous. However, when we manipulated pitch and temporal predictability independently, the same effect did not occur. By altering the temporal density (items per second) of the sequences, we further determined that the predictability effect occurred only in temporally dense sequences: If the sequences were slow, participants' responses did not change as a function of predictability. We propose that reduced predictability affects synchrony judgments by reducing the effective pitch and temporal acuity in perception of the sequences.

  9. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    PubMed

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder.

  10. Temporal Processing of Audiovisual Stimuli Is Enhanced in Musicians: Evidence from Magnetoencephalography (MEG)

    PubMed Central

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

Numerous studies have demonstrated that structural and functional differences between professional musicians and non-musicians are found not only within a single modality but also with regard to multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical audiovisual events that were synchronous or asynchronous at various levels. We hypothesized that long-term multisensory experience alters temporal audiovisual processing even for non-musical stimuli. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis of the audiovisual asynchronous response revealed three clusters of activation, including the ACC and the SFG and two bilaterally located activations in IFG and STG, in both groups. Musicians, in comparison to non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region covering the STG, the insula, and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results strongly indicate that long-term musical training alters basic audiovisual temporal processing at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates of the timing of audiovisual events. PMID:24595014

  11. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis

    PubMed Central

    Altieri, Nicholas; Wenger, Michael J.

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of −12 dB, and S/N ratio of −18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity. PMID:24058358
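For context, the capacity measure cited in this abstract (Townsend and Nozawa, 1995) compares the cumulative hazard of the redundant-target (audiovisual) reaction-time distribution against the sum of the two unisensory hazards. A minimal sketch of that computation follows; function names and the synthetic reaction times are illustrative, not taken from the study.

```python
import math

# Sketch of the Townsend & Nozawa (1995) capacity coefficient,
# C(t) = H_AV(t) / (H_A(t) + H_V(t)), where H(t) = -log S(t) is the
# cumulative hazard of a condition's reaction-time (RT) distribution.
# C(t) > 1 suggests efficient (super-capacity) integration;
# C(t) < 1 suggests inefficient (limited-capacity) integration.

def cumulative_hazard(rts, t):
    """Estimate H(t) = -log S(t) from a sample of RTs (in ms)."""
    survivors = sum(1 for rt in rts if rt > t)
    s = survivors / len(rts)  # empirical survival probability S(t)
    return -math.log(s) if s > 0 else float("inf")

def capacity(rt_av, rt_a, rt_v, t):
    """Capacity coefficient at time t: audiovisual vs. unisensory RTs."""
    denom = cumulative_hazard(rt_a, t) + cumulative_hazard(rt_v, t)
    return cumulative_hazard(rt_av, t) / denom

# Synthetic example: audiovisual responses are faster than either
# unisensory condition, so C(t) exceeds 1 at t = 450 ms.
rt_av = [300, 400, 500, 600]
rt_a = [400, 500, 600, 700]
rt_v = [420, 520, 620, 720]
c = capacity(rt_av, rt_a, rt_v, t=450)
```

In practice the coefficient is computed across a range of t values, yielding a capacity function rather than a single number.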

  12. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    PubMed

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration was similar to that of younger adults as the SOA expanded; however, older adults showed significantly delayed onsets for the time window of integration and delayed peak latencies in all conditions, which further demonstrated that audiovisual integration was delayed more severely as the SOA expanded, especially in the peak latency for the V-preceded-A conditions. Our study suggests that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses were slowed in older adults and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  13. Cognitive conflict in audiovisual integration: an event-related potential study.

    PubMed

    Yin, Qinqing; Qiu, Jiang; Zhang, Qinglin; Wen, Xiaohui

    2008-03-26

    This study used event-related potentials (ERPs) to investigate the electrophysiological correlates of cognitive conflict in audiovisual integration during an audiovisual task. ERP analyses revealed: (i) the anterior N1 and P1 were elicited in both matched and mismatched conditions and (ii) audiovisual mismatched answers elicited a more negative ERP deflection at 490 ms (N490) than matched answers. Dipole analysis of the difference wave (mismatched minus matched) localized the generator of the N490 to the posterior cingulate cortex, which may be involved in the control and modulation of conflict processing of Chinese characters when visual and auditory information is mismatched.

  14. Atypical audiovisual speech integration in infants at risk for autism.

    PubMed

    Guiraud, Jeanne A; Tomalski, Przemyslaw; Kushnerenko, Elena; Ribeiro, Helena; Davies, Kim; Charman, Tony; Elsabbagh, Mayada; Johnson, Mark H

    2012-01-01

The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  15. Implicit Sequence Learning in Dyslexia: A Within-Sequence Comparison of First- and Higher-Order Information

    ERIC Educational Resources Information Center

    Du, Wenchong; Kelly, Steve W.

    2013-01-01

    The present study examines implicit sequence learning in adult dyslexics with a focus on comparing sequence transitions with different statistical complexities. Learning of a 12-item deterministic sequence was assessed in 12 dyslexic and 12 non-dyslexic university students. Both groups showed equivalent standard reaction time increments when the…

  16. Improving Individualized Educational Program (IEP) Mathematics Learning Goals for Conceptual Understanding of Order and Equivalence of Fractions

    ERIC Educational Resources Information Center

    Scanlon, Regina M.

    2013-01-01

    The purpose of this Executive Position Paper project was to develop resources for improving Individual Educational Program (IEP) mathematics learning goals for conceptual understanding of fractions for middle school special education students. The investigation surveyed how IEP mathematics learning goals are currently determined and proposed a new…

  17. A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word-Order Universal

    ERIC Educational Resources Information Center

    Culbertson, Jennifer; Smolensky, Paul

    2012-01-01

    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized…

  18. Psychometric Properties of the Epistemological Development in Teaching Learning Questionnaire (EDTLQ): An Inventory to Measure Higher Order Epistemological Development

    ERIC Educational Resources Information Center

    Kjellström, Sofia; Golino, Hudson; Hamer, Rebecca; Van Rossum, Erik Jan; Almers, Ellen

    2016-01-01

    Qualitative research supports a developmental dimension in views on teaching and learning, but there are currently no quantitative tools to measure the full range of this development. To address this, we developed the Epistemological Development in Teaching and Learning Questionnaire (EDTLQ). In the current study the psychometric properties of the…

  19. Infusing Higher-Order Thinking and Learning To Learn into Content Instruction: A Case Study of Secondary Computing Studies in Scotland.

    ERIC Educational Resources Information Center

    Kirkwood, Margaret

    2000-01-01

    Examines ideas on a thinking curriculum and "learning to learn" in the secondary education context. Explores computing studies at the Scottish secondary 3/4 level with 14- to 16-year-olds. Explains that this case study provides an example of the infusion approach being incorporated into the teaching of computer programming. (CMK)

  20. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation.

    PubMed

    Lusk, Laina G; Mitchel, Aaron D

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.

  2. Effects of virtual speaker density and room reverberation on spatiotemporal thresholds of audio-visual motion coherence.

    PubMed

    Sankaran, Narayan; Leung, Johahn; Carlile, Simon

    2014-01-01

The present study examined the effects of spatial sound-source density and reverberation on the spatiotemporal window for audio-visual motion coherence. Three different acoustic stimuli were generated in Virtual Auditory Space: two acoustically "dry" stimuli via the measurement of anechoic head-related impulse responses recorded at either 1° or 5° spatial intervals (Experiment 1), and a reverberant stimulus rendered from binaural room impulse responses recorded at 5° intervals in situ in order to capture reverberant acoustics in addition to head-related cues (Experiment 2). A moving visual stimulus with invariant localization cues was generated by sequentially activating LEDs along the same radial path as the virtual auditory motion. Stimuli were presented at 25°/s, 50°/s and 100°/s with a random spatial offset between audition and vision. In a 2AFC task, subjects made a judgment of the leading modality (auditory or visual). No significant differences were observed in the spatial threshold based on the point of subjective equivalence (PSE) or the slope of psychometric functions (β) across all three acoustic conditions. Additionally, both the PSE and β did not significantly differ across velocity, suggesting a fixed spatial window of audio-visual separation. Findings suggest that there was no loss in spatial information accompanying the reduction in spatial cues and reverberation levels tested, and establish a perceptual measure for assessing the veracity of motion generated from discrete locations and in echoic environments.

  3. Brain oscillations in switching vs. focusing audio-visual attention.

    PubMed

    Rapela, Joaquin; Gramann, Klaus; Westerfield, Marissa; Townsend, Jeanne; Makeig, Scott

    2012-01-01

Selective attention contributes to perceptual efficiency by modulating cortical activity according to task demands. The majority of attentional research has focused on the effects of attention to a single modality, and little is known about the role of attention in multimodal sensory processing. Here we employ a novel experimental design to examine the electrophysiological basis of audio-visual attention shifting. We use electroencephalography (EEG) to study differences in brain dynamics between quickly shifting attention between modalities and focusing attention on a single modality for extended periods of time. We also address interactions between attentional effects generated by the attention-shifting cue and those generated by subsequent stimuli. The conclusions from these examinations address key issues in attentional research, including the supramodal theory of attention and the role of attention in foveal vision. The experimental design and analysis methods used here may suggest new directions in the study of the physiological basis of attention.

  4. Audio-visual speech in noise perception in dyslexia.

    PubMed

    van Laarhoven, Thijs; Keetels, Mirjam; Schakel, Lemmy; Vroomen, Jean

    2016-12-18

    Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip-read information that disambiguates noise-masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.

  5. The audiovisual temporal binding window narrows in early childhood.

    PubMed

    Lewkowicz, David J; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked whether the voice and face went together (Experiment 1) or whether the desynchronized videos differed from the synchronized one (Experiment 2). Four-year-olds detected the 666-ms asynchrony, 5-year-olds detected the 666- and 500-ms asynchrony, and 6-year-olds detected all asynchronies. These results show that the A-V temporal binding window narrows slowly during early childhood and that it is still wider at 6 years of age than in older children and adults.

  6. Audiovisual integration in noise by children and adults.

    PubMed

    Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G; Innes-Brown, Hamish; Shivdasani, Mohit N; Paolini, Antonio G

    2010-01-01

    The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-, 22-, 12-, and 9-dB across the different age groups were compared. Multisensory facilitation was greater in adults than in children, although performance for all age groups was affected by the presence of background noise. It is posited that changes in multisensory facilitation with increased auditory noise may be due to changes in attention bias.

  7. Effects of audio-visual stimulation on the incidence of restraint ulcers on the Wistar rat

    NASA Technical Reports Server (NTRS)

    Martin, M. S.; Martin, F.; Lambert, R.

    1979-01-01

The role of sensory stimulation in restrained rats was investigated. Both mixed audio-visual and pure sound stimuli, ineffective in themselves, were found to cause a significant increase in the incidence of restraint ulcers in the Wistar rat.

  8. I can see, hear, and smell your fear: comparing olfactory and audiovisual media in fear communication.

    PubMed

    de Groot, Jasper H B; Semin, Gün R; Smeets, Monique A M

    2014-04-01

    Recent evidence suggests that humans can become fearful after exposure to olfactory fear signals, yet these studies have reported the effects of fear chemosignals without examining emotion-relevant input from traditional communication modalities (i.e., vision, audition). The question that we pursued here was therefore: How significant is an olfactory fear signal in the broader context of audiovisual input that either confirms or contradicts olfactory information? To test this, we manipulated olfactory (fear, no fear) and audiovisual (fear, no fear) information and demonstrated that olfactory fear signals were as potent as audiovisual fear signals in eliciting a fearful facial expression. Irrespective of confirmatory or contradictory audiovisual information, olfactory fear signals produced by senders induced fear in receivers outside of conscious access. These findings run counter to traditional views that emotions are communicated exclusively via visual and linguistic channels.

  9. The Use of Audio-Visual Media for the Education of Adults.

    ERIC Educational Resources Information Center

    Mathur, J. C.

    1978-01-01

    An Indian adult educator discusses the value of "pleasure-oriented" audiovisual adult education, the use of both commercial and subsidized films, television, and radio for their educational potential. He notes several production needs and techniques. (MF)

  10. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Photographic film and prints. The requirements in this paragraph apply to permanent, long-term temporary, and unscheduled audiovisual records. (1) General guidance. Keep all film in cold storage following guidance by...

  11. Contextual control of audiovisual integration in low-level sensory cortices.

    PubMed

    van Atteveldt, Nienke M; Peterson, Bradley S; Schroeder, Charles E

    2014-05-01

    Potential sources of multisensory influences on low-level sensory cortices include direct projections from sensory cortices of different modalities, as well as more indirect feedback inputs from higher order multisensory cortical regions. These multiple architectures may be functionally complementary, but the exact roles and inter-relationships of the circuits are unknown. Using a fully balanced context manipulation, we tested the hypotheses that: (1) feedforward and lateral pathways subserve speed functions, such as detecting peripheral stimuli. Multisensory integration effects in this context are predicted in peripheral fields of low-level sensory cortices. (2) Slower feedback pathways underpin accuracy functions, such as object discrimination. Integration effects in this context are predicted in higher-order association cortices and central/foveal fields of low-level sensory cortex. We used functional magnetic resonance imaging to compare the effects of central versus peripheral stimulation on audiovisual integration, while varying speed and accuracy requirements for behavioral responses. We found that interactions of task demands and stimulus eccentricity in low-level sensory cortices are more complex than would be predicted by a simple dichotomy such as our hypothesized peripheral/speed and foveal/accuracy functions. Additionally, our findings point to individual differences in integration that may be related to skills and strategy. Overall, our findings suggest that instead of using fixed, specialized pathways, the exact circuits and mechanisms that are used for low-level multisensory integration are much more flexible and contingent upon both individual and contextual factors than previously assumed.

  12. Seeing to hear better: evidence for early audio-visual interactions in speech identification.

    PubMed

    Schwartz, Jean-Luc; Berthommier, Frédéric; Savariaux, Christophe

    2004-09-01

    Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition due to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture by a non-speech visual input with exactly the same time course, providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relationships with recent neurophysiological data on audio-visual perception.

  13. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    PubMed

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.

  14. Theory and Practice: How Filming "Learning in the Real World" Helps Students Make the Connection

    ERIC Educational Resources Information Center

    Commander, Nannette Evans; Ward, Teresa E.; Zabrucky, Karen M.

    2012-01-01

    This article describes an assignment, titled "Learning in the Real World," designed for graduate students in a learning theory course. Students work in small groups to create high quality audio-visual films that present "real learning" through interviews and/or observations of learners. Students select topics relevant to theories we are discussing…

  15. Using Experiential Learning to Teach Evaluation Skills.

    ERIC Educational Resources Information Center

    Wulff-Risner, Linda; Stewart, Bob

    1997-01-01

    Of 98 8-18 year olds, 47 were taught livestock evaluation skills (conformance and performance) using live horses and 51 using video simulation. There were no significant differences related to teaching technique. Older students (12-18) learned conformance judging skills more quickly than younger ones. Audiovisual aids were considered effective for…

  16. Social Studies: K-9 Supplementary Learning Resources.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg. Curriculum Development Branch.

    This annotated bibliography contains approximately 350 citations of learning resources for the series of K-9 guides designed for the social studies curriculum in Manitoba, Canada (SO 014 225-231). Intended for teachers and students, the bibliography includes listings of guides, manuals, books, booklets, filmstrips, audiovisual kits, cassettes,…

  17. Computers as a Language Learning Tool.

    ERIC Educational Resources Information Center

    Ruschoff, Bernd

    1984-01-01

Describes a computer-assisted language learning project at the University of Wuppertal (West Germany). It is hoped that teachers can overcome two handicaps of the past--a lack of awareness of current audio-visual technical aids and unsophisticated computer hardware--by getting the opportunity to familiarize…

  18. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    PubMed

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200 ms over a wide central area, the second at 280-320 ms over the fronto-central area, and a third at 380-440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing.

  19. The duration of uncertain times: audiovisual information about intervals is integrated in a statistically optimal fashion.

    PubMed

    Hartcher-O'Brien, Jess; Di Luca, Massimiliano; Ernst, Marc O

    2014-01-01

Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies - i.e., considering the time of onset/offset of signals - might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates.
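The precision-weighted scheme this abstract describes is commonly formalized as maximum-likelihood cue combination under independent Gaussian noise: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. A minimal sketch follows; the function name and the numbers are illustrative, not taken from the study.

```python
# Sketch of statistically optimal (maximum-likelihood) cue combination
# under independent Gaussian noise. Weights are proportional to each
# cue's reliability (inverse variance).

def integrate_cues(est_a, var_a, est_v, var_v):
    """Combine auditory and visual estimates into one fused estimate."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    combined = w_a * est_a + w_v * est_v
    # The fused variance is smaller than either input variance.
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return combined, combined_var

# Example: a precise auditory duration estimate (500 ms, variance 100)
# and a noisier visual one (560 ms, variance 400). The fused estimate
# lands closer to the more reliable auditory cue.
dur_est, dur_var = integrate_cues(500.0, 100.0, 560.0, 400.0)
```

With these numbers the auditory cue receives a weight of 0.8, so the fused estimate is pulled strongly toward it while its variance drops below that of either cue.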

  20. The Use of System Thinking Concepts in Order to Assure Continuous Improvement of Project Based Learning Courses

    ERIC Educational Resources Information Center

    Arantes do Amaral, Joao Alberto; Gonçalves, Paulo

    2015-01-01

    This case study describes a continuous improvement experience, conducted from 2002 to 2014 in Sao Paulo, Brazil, within 47 Project-Based Learning MBA courses, involving approximately 1,400 students. The experience report will focus on four themes: (1) understanding the main dynamics present in MBA courses; (2) planning a systemic intervention in…

  1. An Evaluation of the Network Efficiency Required in Order to Support Multicast and Synchronous Distributed Learning Network Traffic

    DTIC Science & Technology

    2003-09-01

    Keywords: Learning, Network Protocol, PIM, DVMRP, IGMP, SAP/SDP, IGMP Snooping, Dense Mode, Sparse Mode. Contents include the Session Announcement Protocol (SAP), the Session Description Protocol (SDP), the Internet Group Management Protocol (IGMP), IGMP Snooping, and the Distance Vector Multicast Routing Protocol (DVMRP).

  2. Applying Distributed Cognition Theory to the Redesign of the "Copy and Paste" Function in Order to Promote Appropriate Learning Outcomes

    ERIC Educational Resources Information Center

    Morgan, Michael; Brickell, Gwyn; Harper, Barry

    2008-01-01

    This paper explores the application of distributed cognition theory to educational contexts by examining a common learning interaction, the "Copy and Paste" function. After a discussion of distributed cognition and the role of mediating artefacts in real world cognitions, the "Copy and Paste" function is redesigned to embed an effective…

  3. Transfer of Short-Term Motor Learning across the Lower Limbs as a Function of Task Conception and Practice Order

    ERIC Educational Resources Information Center

    Stockel, Tino; Wang, Jinsung

    2011-01-01

    Interlimb transfer of motor learning, indicating an improvement in performance with one limb following training with the other, often occurs asymmetrically (i.e., from non-dominant to dominant limb or vice versa, but not both). In the present study, we examined whether interlimb transfer of the same motor task could occur asymmetrically and in…

  4. Putting Order into Our Universe: The Concept of "Blended Learning"--A Methodology within the Concept-Based Terminology Framework

    ERIC Educational Resources Information Center

    Fernandes, Joana; Costa, Rute; Peres, Paula

    2016-01-01

    This paper aims at discussing the advantages of a methodology design grounded on a concept-based approach to Terminology applied to the most prominent scenario of current Higher Education: "blended learning." Terminology is a discipline that aims at representing, describing and defining specialized knowledge through language, putting…

  5. Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension.

    PubMed

    Lee, HweeLing; Noppeney, Uta

    2011-08-03

    Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the role of bottom-up physical (spectrotemporal structure) input and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals are currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior-anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): Bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this "ventral" processing stream, a "dorsal" circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and spectrotemporal structure in a dorsal fronto-temporal circuitry.

  6. Can personality traits predict pathological responses to audiovisual stimulation?

    PubMed

    Yambe, Tomoyuki; Yoshizawa, Makoto; Fukudo, Shin; Fukuda, Hiroshi; Kawashima, Ryuta; Shizuka, Kazuhiko; Nanka, Shunsuke; Tanaka, Akira; Abe, Ken-ichi; Shouji, Tomonori; Hongo, Michio; Tabayashi, Kouichi; Nitta, Shin-ichi

    2003-10-01

    pathophysiological reaction to the audiovisual stimulations. Photosensitive epilepsy was reported to account for only 5-10% of all patients; therefore, in 90% or more of the patients who showed a morbid response, the cause could not be determined. The results of this study suggest that autonomic function was connected to the mental tendencies of the subjects. By examining such tendencies, it is expected that subjects who show a morbid reaction to audiovisual stimulation can be screened beforehand.

  7. Learning.

    ERIC Educational Resources Information Center

    Glaser, Robert

    A report on learning psychology and its relationship to the study of school learning emphasizes the increasing interaction between theorists and educational practitioners, particularly in attempting to learn which variables influence the instructional process and to find an appropriate methodology to measure and evaluate learning. "Learning…

  8. The Audio-Visual Services in Fifteen African Countries. Comparative Study on the Administration of Audio-Visual Services in Advanced and Developing Countries. Part Four. First Edition.

    ERIC Educational Resources Information Center

    Jongbloed, Harry J. L.

    As the fourth part of a comparative study on the administration of audiovisual services in advanced and developing countries, this UNESCO-funded study reports on the African countries of Cameroun, Republic of Central Africa, Dahomey, Gabon, Ghana, Kenya, Libya, Mali, Nigeria, Rwanda, Senegal, Swaziland, Tunisia, Upper Volta and Zambia. Information…

  9. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    SciTech Connect

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J. . E-mail: pjkeall@vcu.edu

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
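The abstract's residual-motion metric (standard deviation of the respiratory signal inside the gating window) and its dependence on duty cycle can be sketched on a synthetic breathing trace. The sinusoidal trace and the exhale-window selection rule are assumptions for illustration; the study used measured patient traces:

```python
# Sketch of displacement-based gating: the beam is "on" only while the
# respiratory displacement falls inside a window covering a given duty
# cycle (here the lowest-displacement samples, i.e. gating on exhale).
# Residual motion is the standard deviation of the signal inside the
# window, as in the abstract. The trace below is synthetic.

import math
import statistics

def breathing_trace(n=1000, period=200):
    """Synthetic respiratory displacement, range 0..2 (arbitrary units)."""
    return [1.0 - math.cos(2 * math.pi * i / period) for i in range(n)]

def residual_motion(trace, duty_cycle):
    """Std of displacement within a displacement-based exhale gating window."""
    n_gated = max(2, int(round(duty_cycle * len(trace))))
    gated = sorted(trace)[:n_gated]  # the lowest-displacement samples
    return statistics.pstdev(gated)

free = statistics.pstdev(breathing_trace())      # no gating at all
gated_30 = residual_motion(breathing_trace(), 0.30)
gated_70 = residual_motion(breathing_trace(), 0.70)
# Residual motion inside a 30% window is far below free breathing,
# and it grows as the duty cycle widens.
```

Even on this toy trace the trade-off the paper reports is visible: a narrow exhale window suppresses residual motion, while widening the duty cycle lets more of the breathing excursion through.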

  10. Audiovisual speech perception development at varying levels of perceptual processing

    PubMed Central

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  11. Audiovisual speech perception development at varying levels of perceptual processing.

    PubMed

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  12. Audio-visual enhancement of speech in noise.

    PubMed

    Girin, L; Schwartz, J L; Feng, G

    2001-06-01

    A key problem for telecommunication or human-machine communication systems concerns speech enhancement in noise. In this domain, a certain number of techniques exist, all of them based on an acoustic-only approach--that is, the processing of the audio corrupted signal using audio information (from the corrupted signal only or additive audio information). In this paper, an audio-visual approach to the problem is considered, since it has been demonstrated in several studies that viewing the speaker's face improves message intelligibility, especially in noisy environments. A speech enhancement prototype system that takes advantage of visual inputs is developed. A filtering process approach is proposed that uses enhancement filters estimated with the help of lip shape information. The estimation process is based on linear regression or simple neural networks using a training corpus. A set of experiments assessed by Gaussian classification and perceptual tests demonstrates that it is indeed possible to enhance simple stimuli (vowel-plosive-vowel sequences) embedded in white Gaussian noise.
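The estimation step described above (a regression trained on a corpus to map lip-shape information to enhancement-filter parameters) can be reduced to a toy one-feature example. The "lip aperture" feature, the gain values, and the single-feature closed-form model are all hypothetical stand-ins; the original system used richer lip-shape features and also neural networks:

```python
# Illustrative sketch of the training step: fit a linear regression on a
# (hypothetical) training corpus so a visual feature predicts an
# enhancement-filter gain. One feature, closed-form ordinary least squares.

def fit_linear(xs, ys):
    """OLS for y = a*x + b with a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical corpus: wider lip aperture -> higher filter gain.
apertures = [0.1, 0.3, 0.5, 0.7, 0.9]
gains = [0.22, 0.61, 1.05, 1.38, 1.83]
a, b = fit_linear(apertures, gains)
predict = lambda x: a * x + b  # gain predicted for an unseen lip shape
```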

  13. Audio-visual perception system for a humanoid robotic head.

    PubMed

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-05-28

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
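The core of the fusion idea above is Bayes' rule over candidate speaker positions: multiply independent auditory and visual likelihoods and normalize. A minimal sketch, assuming Gaussian likelihoods and a degree-spaced azimuth grid (both illustrative, not the robot's actual sensor models):

```python
# Toy Bayes fusion for speaker localization: combine independent auditory
# and visual likelihoods over candidate azimuths, flat prior assumed.
# Observations, sigmas, and the grid are hypothetical.

import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fuse(angles, audio_obs, audio_sigma, visual_obs, visual_sigma):
    """Normalized posterior over azimuth from two independent cues."""
    posterior = [gaussian(a, audio_obs, audio_sigma) *
                 gaussian(a, visual_obs, visual_sigma) for a in angles]
    z = sum(posterior)
    return [p / z for p in posterior]

angles = list(range(-90, 91, 5))  # azimuth grid in degrees
post = fuse(angles, audio_obs=20, audio_sigma=15, visual_obs=10, visual_sigma=5)
best = angles[post.index(max(post))]
# The MAP azimuth sits between the two cues, pulled toward the more
# precise visual observation.
```

The practical appeal for robotics, as the paper argues, is that when one modality degrades (e.g., the speaker leaves the camera's field of view), its likelihood flattens and the posterior gracefully falls back on the other cue.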

  14. Hearing flashes and seeing beeps: Timing audiovisual events

    PubMed Central

    2017-01-01

    Many events from daily life are audiovisual (AV). Handclaps produce both visual and acoustic signals that are transmitted in air and processed by our sensory systems at different speeds, reaching the brain multisensory integration areas at different moments. Signals must somehow be associated in time to correctly perceive synchrony. This project aims at quantifying the mutual temporal attraction between senses and characterizing the different interaction modes depending on the offset. In every trial participants saw four beep-flash pairs regularly spaced in time, followed after a variable delay by a fifth event in the test modality (auditory or visual). A large range of AV offsets was tested. The task was to judge whether the last event came before/after what was expected given the perceived rhythm, while attending only to the test modality. Flashes were perceptually shifted in time toward beeps, the attraction being stronger for lagging than leading beeps. Conversely, beeps were not shifted toward flashes, indicating a nearly total auditory capture. The subjective timing of the visual component resulting from the AV interaction could easily be forward but not backward in time, an intuitive constraint stemming from minimum visual processing delays. Finally, matching auditory and visual time-sensitivity with beeps embedded in pink noise produced very similar mutual attractions of beeps and flashes. Breaking the natural auditory preference for timing allowed vision to take over as well, showing that this preference is not hardwired. PMID:28207786

  15. Sensorimotor synchronization with audio-visual stimuli: limited multisensory integration.

    PubMed

    Armstrong, Alan; Issartel, Johann

    2014-11-01

    Understanding how we synchronize our actions with stimuli from different sensory modalities plays a central role in helping to establish how we interact with our multisensory environment. Recent research has shown better performance with multisensory over unisensory stimuli; however, the type of stimuli used has mainly been auditory and tactile. The aim of this article was to expand our understanding of sensorimotor synchronization with multisensory audio-visual stimuli and compare these findings to their individual unisensory counterparts. This research also aims to assess the role of spatio-temporal structure for each sensory modality. The visual and/or auditory stimuli had either temporal or spatio-temporal information available and were presented to the participants in unimodal and bimodal conditions. Globally, the performance was significantly better for the bimodal compared to the unimodal conditions; however, this benefit was limited to only one of the bimodal conditions. In terms of the unimodal conditions, the level of synchronization with visual stimuli was better than auditory, and while there was an observed benefit with the spatio-temporal compared to temporal visual stimulus, this was not replicated with the auditory stimulus.

  16. Impact of language on functional connectivity for audiovisual speech integration

    PubMed Central

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and the Heschl’s gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  17. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  18. Neurofunctional underpinnings of audiovisual emotion processing in teens with autism spectrum disorders.

    PubMed

    Doyle-Thomas, Krissy A R; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B C

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system.

  19. Spatial and temporal factors during processing of audiovisual speech: a PET study.

    PubMed

    Macaluso, E; George, N; Dolan, R; Spence, C; Driver, J

    2004-02-01

    Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on just temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), plus in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

  20. Neurofunctional Underpinnings of Audiovisual Emotion Processing in Teens with Autism Spectrum Disorders

    PubMed Central

    Doyle-Thomas, Krissy A.R.; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B.C.

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system. PMID:23750139

  1. Bibliographic control of audiovisuals: analysis of a cataloging project using OCLC.

    PubMed

    Curtis, J A; Davison, F M

    1985-04-01

    The staff of the Quillen-Dishner College of Medicine Library cataloged 702 audiovisual titles between July 1, 1982, and June 30, 1983, using the OCLC database. This paper discusses the library's audiovisual collection and describes the method and scope of a study conducted during this project, the cataloging standards and conventions adopted, the assignment and use of NLM classification, the provision of summaries for programs, and the amount of staff time expended in cataloging typical items. An analysis of the use of OCLC for this project resulted in the following findings: the rate of successful searches for audiovisual copy was 82.4%; the error rate for records used was 41.9%; modifications were required in every record used; the Library of Congress and seven member institutions provided 62.8% of the records used. It was concluded that the effort to establish bibliographic control of audiovisuals is not widespread and that expanded and improved audiovisual cataloging by the Library of Congress and the National Library of Medicine would substantially contribute to that goal.

  2. Do we need to overcome barriers to learning in the workplace for foundation trainees rotating in neurosurgery in order to improve training satisfaction?

    PubMed

    Phan, Pho Nh; Patel, Keyur; Bhavsar, Amar; Acharya, Vikas

    2016-01-01

    Junior doctors go through a challenging transition upon qualification; this repeats every time they start a rotation in a new department. Foundation level doctors (first 2 years postqualification) in neurosurgery are often new to the specialty and face various challenges that may result in significant workplace dissatisfaction. The neurosurgical environment is a clinically demanding area with a high volume of unwell patients and frequent emergencies - this poses various barriers to learning in the workplace for junior doctors. We identify a number of key barriers and review ideas that can be trialed in the department to overcome them. Through an evaluation of current suggestions in the literature, we propose that learning opportunities need to be made explicit to junior doctors in order to encourage them to participate as a member of the team. We consider ideas for adjustments to the induction program and the postgraduate medical curriculum to shift the focus from medical knowledge to improving confidence and clinical skills in newly qualified doctors. Despite being a powerful window for opportunistic learning, the daily ward round is unfortunately not maximized and needs to be more learner focused while maintaining efficiency and time consumption. Finally, we put forward the idea of an open forum where trainees can talk about their learning experiences, identify subjective barriers, and suggest solutions to senior doctors. This would be achieved through departmental faculty development. These interventions are presented within the context of the neurosurgical ward; however, they are transferable and can be adapted in other specialties and departments.

  3. Audiovisuelle Materialien fur den modernen Fremdsprachenunterricht. Stand: Juni 1971 (Audiovisual Materials for Modern Foreign Language Instruction. June 1971 Edition).

    ERIC Educational Resources Information Center

    Informationszentrum fuer Fremdsprachenforschung, Marburg (West Germany).

    This listing, updating the 1969 publication, cites commerically available language instruction programs having audiovisual components. Supplementary audiovisual aids are available only as components of total programs noted in this work. Organization is by language and commerical source. Indications for classroom applications and prices (in German…

  4. Audiovisuelle Materialien fur den modernen Fremdsprachenunterricht. Stand: August 1969 (Audiovisual Materials for Modern Foreign Language Instruction. August 1969 Edition).

    ERIC Educational Resources Information Center

    Mohr, Peter, Comp.

    This listing cites commercially available programs for foreign language instruction which have audiovisual components. Supplementary audiovisual aids without accompanying basic text materials are not included. Organization is by language and commercial source. Indications for classroom application and prices (in German currency) are provided. The…

  5. A Comparison of the Development of Audiovisual Integration in Children with Autism Spectrum Disorders and Typically Developing Children

    ERIC Educational Resources Information Center

    Taylor, Natalie; Isaac, Claire; Milne, Elizabeth

    2010-01-01

    This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…

  6. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... copied as follows: (a) USIA audiovisual records prepared for dissemination abroad that NARA determines... audiovisual records prepared for dissemination abroad that NARA determines may have copyright protection or may contain copyrighted material are provided to you if you seek the release of such materials in...

  7. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... copied as follows: (a) USIA audiovisual records prepared for dissemination abroad that NARA determines... audiovisual records prepared for dissemination abroad that NARA determines may have copyright protection or may contain copyrighted material are provided to you if you seek the release of such materials in...

  8. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... copied as follows: (a) USIA audiovisual records prepared for dissemination abroad that NARA determines... audiovisual records prepared for dissemination abroad that NARA determines may have copyright protection or may contain copyrighted material are provided to you if you seek the release of such materials in...

  9. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    PubMed

    Alm, Magnus; Behne, Dawn

    2015-01-01

    Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood for cognitive and sensory decline, which may confound positive effects of age-related AV-experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. In contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy toward more visually dominated responses.

  10. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2003-01-01

    Digitalization of audio-visual resources combined with the performance of networks offers many possibilities which are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast, efficient retrieval from digital archives or filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding and gives access to audio-visual resources in streaming mode.

  11. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2005-03-01

    Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding and gives access to audiovisual resources in streaming mode.
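    The indexing approach described in this abstract (Dublin Core elements carrying MeSH subject terms for conceptual navigation) can be pictured with a minimal sketch. All element values below are hypothetical illustrations, not taken from the authors' system; only the Dublin Core element names and namespace URI are standard.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dublin_core_record(fields):
    """Serialize a minimal Dublin Core description as XML.

    `fields` maps Dublin Core element names (title, type, subject, ...)
    to lists of values; repeated elements (e.g. several subject
    headings) become repeated child tags.
    """
    root = ET.Element("record")
    for element, values in fields.items():
        for value in values:
            ET.SubElement(root, f"{{{DC_NS}}}{element}").text = value
    return ET.tostring(root, encoding="unicode")

# Hypothetical record for an audiovisual teaching resource; the MeSH
# headings ride in dc:subject to support conceptual navigation.
xml = dublin_core_record({
    "title": ["Phlebotomy: step-by-step demonstration"],
    "type": ["MovingImage"],
    "format": ["video/mp4"],
    "subject": ["Phlebotomy", "Education, Medical"],  # MeSH terms
})
print(xml)
```

    Keeping medical concepts in repeatable `dc:subject` elements is what lets a retrieval system map them onto MeSH/UMLS concept hierarchies rather than treating them as free text.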

  12. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity

    PubMed Central

    Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a

  13. Audiovisual correspondence between musical timbre and visual shapes.

    PubMed

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, such features as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies, simple stimuli e.g., simple tones have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound i.e., its shape, color (or grayscale) and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes as well as some of the previous findings for more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 claimed professional musicians, 47 claimed amateur musicians, and 36 claimed non-musicians. Thirty-one subjects have also claimed to have synesthesia-like experiences. A strong association between timbre of envelope normalized sounds and visual shapes was observed. Subjects have strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes and timbres having elements of softness and harshness together with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows designing substitution systems which might help the blind to perceive shapes through timbre.

  14. The Development of Audio-Visual Integration for Temporal Judgements.

    PubMed

    Adams, Wendy J

    2016-04-01

    Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age. Younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4-11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps and vice versa. Furthermore, for all ages, the relative strength of these interactions was predicted by the relative reliabilities of the two modalities, in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality-they fully combined vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions. In contrast, switching between modalities reduces reliability. 
Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate, but followed a sub-optimal switching strategy, responding according to either visual or auditory
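    The optimal-integration rule this abstract tests (weights proportional to relative reliability, with the combined estimate more reliable than either cue alone) reduces to a short computation. The numbers below are illustrative only, not data from the study.

```python
def combine_cues(est_v, var_v, est_a, var_a):
    """Reliability-weighted (maximum-likelihood) combination of a
    visual and an auditory estimate of the same quantity."""
    r_v, r_a = 1.0 / var_v, 1.0 / var_a              # reliabilities
    w_v, w_a = r_v / (r_v + r_a), r_a / (r_v + r_a)  # relative weights
    combined = w_v * est_v + w_a * est_a
    combined_var = 1.0 / (r_v + r_a)                 # lower than either input
    return combined, combined_var

# Illustrative only: vision reports 4 events with low noise, audition
# reports 6 events with higher noise.
est, var = combine_cues(est_v=4.0, var_v=1.0, est_a=6.0, var_a=4.0)
print(est, var)  # estimate pulled toward the more reliable visual cue
```

    With these numbers the combined estimate (4.4) sits closer to the more reliable visual cue, and the combined variance (0.8) is below both unimodal variances, matching properties (i) and (ii) named in the abstract.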

  15. AUDIOVISUAL RESOURCES ON THE TEACHING PROCESS IN SURGICAL TECHNIQUE

    PubMed Central

    PUPULIM, Guilherme Luiz Lenzi; IORIS, Rafael Augusto; GAMA, Ricardo Ribeiro; RIBAS, Carmen Australia Paredes Marcondes; MALAFAIA, Osvaldo; GAMA, Mirnaluci

    2015-01-01

    Background: The development of didactic means to create opportunities to permit complete and repetitive viewing of surgical procedures is of great importance nowadays due to the increasing difficulty of doing in vivo training. Thus, audiovisual resources favor the maximization of living resources used in education, and minimize problems arising only with verbalism. Aim: To evaluate the use of digital video as a pedagogical strategy in surgical technique teaching in medical education. Methods: Cross-sectional study with 48 students of the third year of medicine, when studying in the surgical technique discipline. They were divided into two groups with 12 in pairs, both subject to the conventional method of teaching, and one of them also exposed to the alternative method (video) showing the technical details. All students did phlebotomy in the experimental laboratory, with evaluation and assistance from the teacher/monitor during the procedure. Finally, they answered a self-administered questionnaire related to the teaching method after performing the operation. Results: Most of those who did not watch the video took longer to execute the procedure, asked more questions and needed more faculty assistance. All of those exposed to the video followed the chronology of implementation and approved the new method; 95.83% felt able to repeat the procedure by themselves, and 62.5% of the students that only had the conventional method reported having regular capacity of technique assimilation. Students in both groups mentioned having regular difficulty, but those who had not seen the video had more difficulty in performing the technique. Conclusion: The traditional method of teaching associated with the video favored the ability to understand and transmitted safety, particularly because it is an activity that requires technical skill. 
    The technique with video visualization motivated and aroused interest, facilitated the understanding and memorization of the steps for procedure implementation, benefiting the

  16. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity.

    PubMed

    Gibney, Kyla D; Aligbe, Enimielen; Eggleston, Brady A; Nunes, Sarah R; Kerkhoff, Willa G; Dean, Cassandra L; Kwakye, Leslie D

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. 
The results of this study indicate that attention plays a crucial
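    The race-model test referred to in this abstract (Miller's inequality) bounds how fast redundant-target responses can be if audition and vision merely race each other: P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t). A minimal sketch with made-up reaction times:

```python
def ecdf(rts, t):
    """Empirical probability that a response occurred by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t):
    """Miller's race-model inequality at time t:
        P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t)
    Returns the amount by which the audiovisual CDF exceeds the
    race-model bound; a positive value means multisensory responses
    are faster than a mere race between the unisensory channels.
    """
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) - bound

# Made-up reaction times in ms, not data from the study.
rt_a = [300, 320, 340, 360, 380]   # auditory-only trials
rt_v = [310, 330, 350, 370, 390]   # visual-only trials
rt_av = [250, 260, 280, 300, 320]  # audiovisual trials
print(race_model_violation(rt_av, rt_a, rt_v, t=300))  # positive: violated
```

    In practice the inequality is evaluated across a range of t (or RT quantiles), and the "geometric measure" mentioned in the abstract summarizes the area of violation.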

  17. Audiovisual correspondence between musical timbre and visual shapes

    PubMed Central

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, such features as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies, simple stimuli e.g., simple tones have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound i.e., its shape, color (or grayscale) and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes as well as some of the previous findings for more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 claimed professional musicians, 47 claimed amateur musicians, and 36 claimed non-musicians. Thirty-one subjects have also claimed to have synesthesia-like experiences. A strong association between timbre of envelope normalized sounds and visual shapes was observed. Subjects have strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes and timbres having elements of softness and harshness together with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows designing substitution systems which might help the blind to perceive shapes through timbre. PMID:24910604

  18. Learning about the Unfairgrounds: A 4th-Grade Teacher Introduces Her Students to Executive Order 9066

    ERIC Educational Resources Information Center

    Baydo-Reed, Katie

    2010-01-01

    Following the bombing of Pearl Harbor on Dec. 7, 1941, U.S. officials issued a series of proclamations that violated the civil and human rights of the vast majority of Japanese Americans in the United States--ostensibly to protect the nation from further Japanese aggression. The proclamations culminated in Executive Order 9066, which gave the…

  19. A Methodological Approach to Support Collaborative Media Creation in an E-Learning Higher Education Context

    ERIC Educational Resources Information Center

    Ornellas, Adriana; Muñoz Carril, Pablo César

    2014-01-01

    This article outlines a methodological approach to the creation, production and dissemination of online collaborative audio-visual projects, using new social learning technologies and open-source video tools, which can be applied to any e-learning environment in higher education. The methodology was developed and used to design a course in the…

  20. Homebound Learning Opportunities: Reaching Out to Older Shut-ins and Their Caregivers.

    ERIC Educational Resources Information Center

    Penning, Margaret; Wasyliw, Douglas

    1992-01-01

    Describes Homebound Learning Opportunities, innovative health promotion and educational outreach service for homebound older adults and their caregivers. Notes that program provides over 125 topics for individualized learning programs delivered to participants in homes, audiovisual lending library, educational television programing, and peer…

  1. Looking Back--A Lesson Learned: From Videotape to Digital Media

    ERIC Educational Resources Information Center

    Lys, Franziska

    2010-01-01

    This paper chronicles the development of Drehort Neubrandenburg Online, an interactive, content-rich audiovisual language learning environment based on documentary film material shot on location in Neubrandenburg, Germany, in 1991 and 2002 and aimed at making language learning more interactive and more real. The paper starts with the description…

  2. Individually-Paced Learning in Civil Engineering Technology: An Approach to Mastery.

    ERIC Educational Resources Information Center

    Sharples, D. Kent; And Others

    An individually-paced, open-entry/open-ended mastery learning approach for a state-wide civil engineering technology curriculum was developed, field-tested, and evaluated. Learning modules relying heavily on audiovisuals and hands-on experience, and based on 163 identified competencies, were developed for 11 courses in the curriculum. Written…

  3. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging

    NASA Astrophysics Data System (ADS)

    Hughes, Timothy J.; Kandathil, Shaun M.; Popelier, Paul L. A.

    2015-02-01

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol-1, decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol-1.

  4. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-05

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1).
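    Kriging, as used in both versions of this abstract, is equivalent to Gaussian-process regression: a covariance (kernel) over input coordinates is fit to training data and then used to interpolate a target quantity at new inputs. The sketch below is a generic kriging mean predictor on a toy 1-D function, assuming a Gaussian (RBF) kernel; it is not the authors' model, which maps atomic multipole moments from nuclear coordinates.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    """Gaussian (RBF) covariance between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def kriging_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-6):
    """Kriging / GP mean prediction: K(X*, X) [K(X, X) + noise*I]^-1 y.

    `noise` is a small jitter that keeps the covariance matrix
    well conditioned.
    """
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train, length_scale)
    return K_star @ np.linalg.solve(K, y_train)

# Toy stand-in for "energy as a function of coordinates": learn y = x^2
# from nine 1-D training points, then predict at a new point.
X = np.linspace(-2.0, 2.0, 9)[:, None]
y = X[:, 0] ** 2
pred = kriging_predict(X, y, np.array([[0.3]]), length_scale=0.7)
print(pred)  # should land near 0.3**2 = 0.09
```

    The abstract's "external test examples for which the true energies are known" corresponds to evaluating such a predictor at held-out points and comparing against reference values.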

  5. Audiovisual speech integration in autism spectrum disorders: ERP evidence for atypicalities in lexical-semantic processing.

    PubMed

    Megnin, Odette; Flitton, Atlanta; Jones, Catherine R G; de Haan, Michelle; Baldeweg, Torsten; Charman, Tony

    2012-02-01

    In typically developing (TD) individuals, behavioral and event-related potential (ERP) studies suggest that audiovisual (AV) integration enables faster and more efficient processing of speech. However, little is known about AV speech processing in individuals with autism spectrum disorders (ASD). This study examined ERP responses to spoken words to elucidate the effects of visual speech (the lip movements accompanying a spoken word) on the range of auditory speech processing stages from sound onset detection to semantic integration. The study also included an AV condition, which paired spoken words with a dynamic scrambled face in order to highlight AV effects specific to visual speech. Fourteen adolescent boys with ASD (15-17 years old) and 14 age- and verbal IQ-matched TD boys participated. The ERP of the TD group showed a pattern and topography of AV interaction effects consistent with activity within the superior temporal plane, with two dissociable effects over frontocentral and centroparietal regions. The posterior effect (200-300 ms interval) was specifically sensitive to lip movements in TD boys, and no AV modulation was observed in this region for the ASD group. Moreover, the magnitude of the posterior AV effect to visual speech correlated inversely with ASD symptomatology. In addition, the ASD boys showed an unexpected effect (P2 time window) over the frontocentral region (pooled electrodes F3, Fz, F4, FC1, FC2, FC3, FC4), which was sensitive to scrambled face stimuli. These results suggest that the neural networks facilitating processing of spoken words by visual speech are altered in individuals with ASD.

  6. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    PubMed

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  7. Teaching and Learning with Hypervideo in Vocational Education and Training

    ERIC Educational Resources Information Center

    Cattaneo, Alberto A. P.; Nguyen, Anh Thu; Aprea, Carmela

    2016-01-01

    Audiovisuals offer increasing opportunities as teaching-and-learning materials while also confronting educators with significant challenges. Hypervideo provides one means of overcoming these challenges, offering new possibilities for interaction and support for reflective processes. However, few studies have investigated the instructional…

  8. Resource Based Learning: An Experience in Planning and Production.

    ERIC Educational Resources Information Center

    McAleese, Ray; Scobbie, John

    A 2-year project at the University of Aberdeen focused on the production of learning materials and the planning of audiovisual based instruction. Background information on the project examines its origins, the nature of course teams, and the evaluation of the five text-tape programs produced. The report specifies three project aims: (1) to produce…

  9. Learning Disabilities and the Auditory and Visual Matching Computer Program

    ERIC Educational Resources Information Center

    Tormanen, Minna R. K.; Takala, Marjatta; Sajaniemi, Nina

    2008-01-01

    This study examined whether audiovisual computer training without linguistic material had a remedial effect on different learning disabilities, like dyslexia and ADD (Attention Deficit Disorder). This study applied a pre-test-intervention-post-test design with students (N = 62) between the ages of 7 and 19. The computer training lasted eight weeks…

  10. Project Report ECLIPSE: European Citizenship Learning Program for Secondary Education

    ERIC Educational Resources Information Center

    Bombardelli, Olga

    2014-01-01

    This paper reports on a European project, the Comenius ECLIPSE project (European Citizenship Learning in a Programme for Secondary Education) developed by six European partners coordinated by the University of Trento in the years 2011-2014. ECLIPSE (co-financed by the EACEA--Education, Audiovisual and Culture Executive Agency) aims at developing,…

  11. STUDIES RELATED TO THE DESIGN OF AUDIOVISUAL TEACHING MATERIALS.

    ERIC Educational Resources Information Center

    Travers, Robert M. W.

    An information transmission model that advocates learning via only one sense modality (e.g., visual) is the basis for several series of experiments, each subjected to rigorous statistical analysis. Conclusions are: learning is not facilitated by redundant information presented simultaneously through the auditory and visual sense modalities; it is…

  12. Hotel and Restaurant Management; A Bibliography of Books and Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Malkames, James P.; And Others

    This bibliography represents a collection of 1,300 book volumes and audiovisual materials collected by the Luzerne County Community College Library in support of the college's Hotel and Restaurant Management curriculum. It covers such diverse topics as advertising, business practices, decoration, nutrition, hotel law, insurance landscaping, health…

  13. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    ERIC Educational Resources Information Center

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  14. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or video media, from accidental or deliberate alteration or erasure. (c) If different versions of... records (e.g., for digital files, use file naming conventions), that clarify connections between...

  15. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music

    PubMed Central

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech. PMID:25147539

  16. Audiovisuals for Nutrition Education; Selected Evaluative Reviews from the Journal of Nutrition Education.

    ERIC Educational Resources Information Center

    Rowe, Sue Ellen, Comp.

    Audiovisual materials suitable for the teaching of nutrition are listed. Materials include coloring books, flannelboard stories, games, kits, audiotapes, records, charts, posters, study prints, films, videotapes, filmstrips, slides, and transparencies. Each entry contains bibliographic data, educational level, price and evaluation. Material is…

  17. Audiovisuals for Nutrition Education. Nutrition Education Resource Series No. 9. Revised Edition.

    ERIC Educational Resources Information Center

    National Nutrition Education Clearing House, Berkeley, CA.

    This bibliography contains reviews of more than 250 audiovisual materials in eight subject areas related to nutrition: (1) general nutrition; (2) life cycle; (3) diet/health and disease; (4) health and athletics; (5) food - general; (6) food preparation and service; (7) food habits and preferences; and (8) food economics and concerns. Materials…

  18. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2017-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529

  19. Audiovisual Records in the National Archives Relating to Black History. Preliminary Draft.

    ERIC Educational Resources Information Center

    Waffen, Leslie; And Others

    A representative selection of the National Archives and Records Services' audiovisual collection relating to black history is presented. The intention is not to provide an exhaustive survey, but rather to indicate the breadth and scope of materials available for study and to suggest areas for concentrated research. The materials include sound…

  20. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  1. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions

    ERIC Educational Resources Information Center

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo

    2013-01-01

    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…

  2. A Study to Formulate Quantitative Guidelines for the Audio-Visual Communications Field. Final Report.

    ERIC Educational Resources Information Center

    Faris, Gene; Sherman, Mendel

    Quantitative guidelines for use in determining the audiovisual (AV) needs of educational institutions were developed by the October 14-16, 1965 Seminar of the NDEA (National Defense Education Act), Faris-Sherman study. The guidelines that emerged were based in part on a review of past efforts and existing standards but primarily reflected the…

  3. Audiovisual Media in the Dutch Public Library. AV in Action 2.

    ERIC Educational Resources Information Center

    Spruit, Ed.

    Based mainly on the experiences of seven Dutch public libraries during a 5-year study, this report discusses issues related to the integration of audiovisual materials and equipment into existing library collections. Following a brief introduction, its contents are divided into chapters: (1) Collection Building; (2) Processing; (3) Cataloguing;…

  4. Storage, Handling and Preservation of Audiovisual Materials. AV in Action 3.

    ERIC Educational Resources Information Center

    Thompson, Anthony Hugh

    Designed to provide the librarian with suggestions and guidelines for storing and preserving audiovisual materials, this pamphlet is divided into four major chapters: (1) Normal Use Storage Conditions; (2) Natural Lifetime, Working Lifetime and Long-Term Storage; (3) Handling; and (4) Shelving of Normal Use Materials. Topics addressed include:…

  5. QUANTITATIVE STANDARDS FOR AUDIOVISUAL PERSONNEL, EQUIPMENT AND MATERIALS (IN ELEMENTARY, SECONDARY, AND HIGHER EDUCATION).

    ERIC Educational Resources Information Center

    Cobun, Ted; And Others

    This document is a stage in a study to formulate quantitative guidelines for the audio-visual communications field, being conducted by Doctors Gene Faris and Mendel Sherman under a National Defense Education Act contract. The standards listed here have been officially approved and adopted by several agencies, including the Department of…

  6. An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.

    ERIC Educational Resources Information Center

    Albert, Richard N.

    Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…

  7. Audiovisual Scripts for CPSS. Research & Development Series No. 119-K. Career Planning Support System.

    ERIC Educational Resources Information Center

    Lowry, Cheryl Meredith; And Others

    Transcripts for each of four audiovisual presentations, components of the Career Planning Support System (CPSS), are contained in this package. (CPSS is a comprehensive guidance program management system designed to provide information for local high schools to design, implement, and evaluate an upgraded career guidance program. CPSS describes how…

  8. Use of Audio-Visual Cassette Tapes and Instructional Modules in Teaching Graphics.

    ERIC Educational Resources Information Center

    Mercier, Cletus R.

    This paper describes the adaptation of instructional modules and audio-visual tapes for an engineering graphics course to a course entitled "Technical Drawing for Applied Art." This is a service course taught by the engineering department for the applied art department in the College of Home Economics. A separate problem package was utilized with…

  9. Nutrition Education Printed Materials and Audiovisuals: Grades Preschool-6, January 1979-May 1990. Quick Bibliography Series.

    ERIC Educational Resources Information Center

    Evans, Shirley King

    This annotated bibliography contains 327 citations from AGRICOLA, the U.S. Department of Agriculture database, dating from January 1979 through May 1990. The bibliography cites books, print materials, and audiovisual materials on the subject of nutrition education for grades preschool through six. Each citation contains complete bibliographic…

  10. Audiovisual Material as Educational Innovation Strategy to Reduce Anxiety Response in Students of Human Anatomy

    ERIC Educational Resources Information Center

    Casado, Maria Isabel; Castano, Gloria; Arraez-Aybar, Luis Alfonso

    2012-01-01

    This study presents the design, effect and utility of using audiovisual material containing real images of dissected human cadavers as an innovative educational strategy (IES) in the teaching of Human Anatomy. The goal is to familiarize students with the practice of dissection and to transmit the importance and necessity of this discipline, while…

  11. Effects of phonetic context on audio-visual intelligibility of French.

    PubMed

    Benoît, C; Mohamadi, T; Kandel, S

    1994-10-01

    Bimodal perception leads to better speech understanding than auditory perception alone. We evaluated the overall benefit of lip-reading on natural utterances of French produced by a single speaker. Eighteen French subjects with good hearing and vision were administered a closed set identification test of VCVCV nonsense words consisting of three vowels [i, a, y] and six consonants [b, v, z, ʒ, R, l]. Stimuli were presented under both auditory and audio-visual conditions with white noise added at various signal-to-noise ratios. Identification scores were higher in the bimodal condition than in the auditory-alone condition, especially in situations where acoustic information was reduced. The auditory and audio-visual intelligibility of the three vowels [i, a, y] averaged over the six consonantal contexts was evaluated as well. Two different hierarchies of intelligibility were found. Auditorily, [a] was most intelligible, followed by [i] and then by [y]; whereas visually [y] was most intelligible, followed by [a] and [i]. We also quantified the contextual effects of the three vowels on the auditory and audio-visual intelligibility of the consonants. Both the auditory and the audio-visual intelligibility of surrounding consonants was highest in the [a] context, followed by the [i] context and lastly the [y] context.

  12. Sciences: A Select List of U.S. Government Produced Audiovisual Materials - 1978.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.

    This publication is a catalog that contains the National Audiovisual Center's materials on Science. There are twelve areas in this catalog: Aerospace Technology, Astronomy, Biology, Chemistry, Electronics and Electricity, Energy, Environmental Studies, Geology, Mathematics and Computer Science, Oceanography, Physics, and Weather/Meteorology. Each…

  13. Strategies for Media Literacy: Audiovisual Skills and the Citizenship in Andalusia

    ERIC Educational Resources Information Center

    Aguaded-Gomez, Ignacio; Perez-Rodriguez, M. Amor

    2012-01-01

    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (society-network), where information and communication…

  14. The Education, Audiovisual and Culture Executive Agency: Helping You Grow Your Project

    ERIC Educational Resources Information Center

    Education, Audiovisual and Culture Executive Agency, European Commission, 2011

    2011-01-01

    The Education, Audiovisual and Culture Executive Agency (EACEA) is a public body created by a Decision of the European Commission and operates under its supervision. It is located in Brussels and has been operational since January 2006. Its role is to manage European funding opportunities and networks in the fields of education and training,…

  15. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that attending to the colour of a stimulus and its synchrony with the tone both enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
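
    The spectral quantification described above can be sketched in a few lines. This is an illustration with a simulated signal, not the study's code or data: amplitude at each stimulus's tagging frequency (3.14 and 3.63 Hz) is read from an FFT amplitude spectrum; the sampling rate and component amplitudes are assumptions.

```python
import numpy as np

# Illustrative sketch, not the study's analysis code: quantify
# steady-state responses by reading amplitude at each stimulus's
# tagging frequency from the FFT of a simulated EEG-like signal.
fs = 500                            # sampling rate in Hz (assumed)
dur = 100                           # seconds; gives 0.01 Hz resolution
t = np.arange(0, dur, 1 / fs)

# Simulated signal: two frequency-tagged components plus white noise.
rng = np.random.default_rng(1)
sig = (0.8 * np.sin(2 * np.pi * 3.14 * t)
       + 0.5 * np.sin(2 * np.pi * 3.63 * t)
       + rng.normal(0.0, 1.0, t.size))

amps = np.abs(np.fft.rfft(sig)) / (t.size / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f_target):
    """Amplitude at the spectral bin nearest the target frequency."""
    return amps[np.argmin(np.abs(freqs - f_target))]

print(f"3.14 Hz: {amp_at(3.14):.2f}  3.63 Hz: {amp_at(3.63):.2f}")
```

    Comparing such amplitudes across attended/unattended and in-sync/out-of-sync conditions is the kind of contrast the study reports.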

  16. Audiovisual Aids for Astronomy and Space Physics at an Urban College

    ERIC Educational Resources Information Center

    Moche, Dinah L.

    1973-01-01

    Discusses the use of easily available audiovisual aids to teach a one semester course in astronomy and space physics to liberal arts students of both sexes at Queensborough Community College. Included is a list of teaching aids for use in astronomy instruction. (CC)

  17. How To Make Effective Decisions for Buying Audio-Visual Hardware

    ERIC Educational Resources Information Center

    MacGregor, Alex R.

    1973-01-01

    The process of purchasing audiovisual hardware should eventually arrive at a point where the user's requirements and the industry's standards and guidelines correlate, keeping budgets in mind. Technical personnel should exchange more information on their decision making, preferably through small discussion groups, or users should unite and…

  18. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    PubMed Central

    Wilson, Amanda H.; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment. Results In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect. Conclusions The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze. PMID:27537379

  19. Seeing to Hear Better: Evidence for Early Audio-Visual Interactions in Speech Identification

    ERIC Educational Resources Information Center

    Schwartz, Jean-Luc; Berthommier, Frederic; Savariaux, Christophe

    2004-01-01

    Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances "sensitivity" to acoustic information,…

  20. Temporal Interval Discrimination Thresholds Depend on Perceived Synchrony for Audio-Visual Stimulus Pairs

    ERIC Educational Resources Information Center

    van Eijk, Rob L. J.; Kohlrausch, Armin; Juola, James F.; van de Par, Steven

    2009-01-01

    Audio-visual stimulus pairs presented at various relative delays are commonly judged as being "synchronous" over a range of delays from about -50 ms (audio leading) to +150 ms (video leading). The center of this range is an estimate of the point of subjective simultaneity (PSS). The judgment boundaries, where "synchronous" judgments yield to a…

  1. Challenges of Using Audio-Visual Aids as Warm-Up Activity in Teaching Aviation English

    ERIC Educational Resources Information Center

    Sahin, Mehmet; Sule, St.; Seçer, Y. E.

    2016-01-01

    This study aims to find out the challenges encountered in the use of video as audio-visual material as a warm-up activity in aviation English course at high school level. This study is based on a qualitative study in which focus group interview is used as the data collection procedure. The participants of focus group are four instructors teaching…

  2. Generating Language through Media: Audio-Visual Production by the ESL Student.

    ERIC Educational Resources Information Center

    Levine, Linda New

    This paper describes some teaching techniques developed in the author's middle school and high school ESL classes. The techniques described here use audio-visual devices and student production of media as a motivational tool as well as a method of providing for spontaneous language practice and communicative competence. Some of the techniques and…

  3. Automated Apprenticeship Training (AAT). A Systematized Audio-Visual Approach to Self-Paced Job Training.

    ERIC Educational Resources Information Center

    Pieper, William J.; And Others

    Two Automated Apprenticeship Training (AAT) courses were developed for Air Force Security Police Law Enforcement and Security specialists. The AAT was a systematized audio-visual approach to self-paced job training employing an easily operated teaching device. AAT courses were job specific and based on a behavioral task analysis of the two…

  4. MushyPeek: A Framework for Online Investigation of Audiovisual Dialogue Phenomena

    ERIC Educational Resources Information Center

    Edlund, Jens; Beskow, Jonas

    2009-01-01

    Evaluation of methods and techniques for conversational and multimodal spoken dialogue systems is complex, as is gathering data for the modeling and tuning of such techniques. This article describes MushyPeek, an experiment framework that allows us to manipulate the audiovisual behavior of interlocutors in a setting similar to face-to-face…

  5. Some Audio-Visual Suggestions for a Course in Ancient and Modern Tragedy

    ERIC Educational Resources Information Center

    Scanlon, Richard

    1977-01-01

    A description of the syllabus of a course in ancient and modern tragedy given in English at the University of Illinois. An annotated list of the plays studied, suggested films, recordings and commentaries, and sources for audio-visual materials are included. (AMH)

  6. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    ERIC Educational Resources Information Center

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  7. Audiovisual Education in Primary Schools: A Curriculum Project in the Netherlands.

    ERIC Educational Resources Information Center

    Ketzer, Jan W.

    Audiovisual, or mass media, education can play a significant role in children's social, emotional, cognitive, sensory, motor, and creative development. The field includes all school activities which teach children to interact with and visualize ideas. Students can be involved in…

  8. Selected Bibliography of Audiovisual Resources Suggested for Library Services to the Mentally Retarded.

    ERIC Educational Resources Information Center

    Harris, Nancy G., Comp.

    Titles and prices for filmstrips with records, filmstrips, films, cassettes, film loops, disc recordings for utilization as audiovisual resources for library services for the mentally retarded are listed. A list of publishers and distributors of suitable educational and library materials is also provided. (AB)

  9. Selected List of Instructional Materials for English as a Second Language: Audio-Visual Aids.

    ERIC Educational Resources Information Center

    Center for Applied Linguistics, Arlington, VA.

    This bibliography lists audiovisual materials used in the teaching of English as a second language. The sections of the bibliography include: (1) Pictures, Charts, and Flash cards, (2) Flannel Aids, (3) Games and Puzzles, (4) Films, (5) Filmstrips and Transparencies, (6) Aural Aids, and (7) Miscellaneous Aids, including a classroom thermometer, a…

  10. Comparisons of Audio and Audiovisual Measures of Stuttering Frequency and Severity in Preschool-Age Children

    ERIC Educational Resources Information Center

    Rousseau, Isabelle; Onslow, Mark; Packman, Ann; Jones, Mark

    2008-01-01

    Purpose: To determine whether measures of stuttering frequency and measures of overall stuttering severity in preschoolers differ when made from audio-only recordings compared with audiovisual recordings. Method: Four blinded speech-language pathologists who had extensive experience with preschoolers who stutter measured stuttering frequency and…

  11. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... advertising. In the case of advertisements for smokeless tobacco on videotapes, cassettes, or...

  12. Catalogo de peliculas educativas y otros materiales audiovisuales (Catalogue of Educational Films and other Audiovisual Materials).

    ERIC Educational Resources Information Center

    Encyclopaedia Britannica, Inc., Chicago, IL.

    This catalogue of educational films and other audiovisual materials consists predominantly of films in Spanish and English which are intended for use in elementary and secondary schools. A wide variety of topics including films for social studies, language arts, humanities, physical and natural sciences, safety and health, agriculture, physical…

  13. Teacher's Guide to Aviation Education Resources. Including: Career Information, Audiovisuals, Publications, Periodicals.

    ERIC Educational Resources Information Center

    Federal Aviation Administration (DOT), Washington, DC. Office of Public Affairs.

    Currently available aviation education resource materials are listed alphabetically by title under four headings: (1) career information; (2) audiovisual materials; (3) publications; and (4) periodicals. Each entry includes: title; format (16mm film, slides, slide/tape presentation, VHS/Beta videotape, book, booklet, newsletter, pamphlet, poster,…

  14. Anglo-American Cataloging Rules. Chapter Twelve, Revised. Audiovisual Media and Special Instructional Materials.

    ERIC Educational Resources Information Center

    American Library Association, Chicago, IL.

    Chapter 12 of the Anglo-American Cataloging Rules has been revised to provide rules for works in the principal audiovisual media (motion pictures, filmstrips, videorecordings, slides, and transparencies) as well as instructional aids (charts, dioramas, flash cards, games, kits, microscope slides, models, and realia). The rules for main and added…

  15. Photojournalism: The Basic Course. A Selected, Annotated Bibliography of Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Applegate, Edd

    Designed to help instructors choose appropriate audio-visual materials for the basic course in photojournalism, this bibliography contains 11 annotated entries. Annotations include the name of the materials, running time, whether black-and-white or color, and names of institutions from which the materials can be secured, as well as brief…

  16. Audiovisual Translation and Assistive Technology: Towards a Universal Design Approach for Online Education

    ERIC Educational Resources Information Center

    Patiniotaki, Emmanouela

    2016-01-01

    Audiovisual Translation (AVT) and Assistive Technology (AST) are two fields that share common grounds within accessibility-related research, yet they are rarely studied in combination. The reason most often lies in the fact that they have emerged from different disciplines, i.e. Translation Studies and Computer Science, making a possible combined…

  17. Evaluation of Modular EFL Educational Program (Audio-Visual Materials Translation & Translation of Deeds & Documents)

    ERIC Educational Resources Information Center

    Imani, Sahar Sadat Afshar

    2013-01-01

    Modular EFL Educational Program has managed to offer specialized language education in two specific fields: Audio-visual Materials Translation and Translation of Deeds and Documents. However, no explicit empirical studies can be traced on both internal and external validity measures as well as the extent of compatibility of both courses with the…

  18. Age-Related Differences in Audiovisual Interactions of Semantically Different Stimuli

    ERIC Educational Resources Information Center

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

    Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of…

  19. Terminological Control of "Anonymous Groups" for Catalogues of Audiovisual Television Documents

    ERIC Educational Resources Information Center

    Caldera-Serrano, Jorge

    2006-01-01

    This article discusses the exceptional nature of the description of moving images for television archives, deriving from their audiovisual nature, and of the specifications in the queries of journalists as users of the Document Information System. It is suggested that there is a need to control completely "Anonymous Groups"--groups without any…

  20. Annotated Bibliography: Afro-American, Hispano and Amerind; with Audio-Visual Materials List.

    ERIC Educational Resources Information Center

    Haberbosch, John F.; And Others

    Readings and audiovisual materials, selected especially for educators, related to the study of Afro-American, Hispano-American, and American Indian cultures are included in this 366-item annotated bibliography covering the period from 1861 to 1968. Historical, cultural, and biographical materials are included for each of the three cultures as well…

  1. Federal Audiovisual Policy Act. Hearing before a Subcommittee of the Committee on Government Operations, House of Representatives, Ninety-Eighth Congress, Second Session on H.R. 3325 to Establish in the Office of Management and Budget an Office to Be Known as the Office of Federal Audiovisual Policy, and for Other Purposes.

    ERIC Educational Resources Information Center

    Congress of the U. S., Washington, DC. House Committee on Government Operations.

    The views of private industry and government are offered in this report of a hearing on the Federal Audiovisual Policy Act, which would establish an office to coordinate federal audiovisual activity and require most audiovisual material produced for federal agencies to be acquired under contract from private producers. Testimony is included from…

  2. 36 CFR 1235.42 - What specifications and standards for transfer apply to audiovisual records, cartographic, and...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... elements that are needed for future preservation, duplication, and reference for audiovisual records... captioning information may be maintained in another file such as a database if the file number correlation...

  3. Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification.

    PubMed

    Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor

    2014-08-01

    The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.

  4. Plasma membrane ordering agent pluronic F-68 (PF-68) reduces neurotransmitter uptake and release and produces learning and memory deficits in rats

    NASA Technical Reports Server (NTRS)

    Clarke, M. S.; Prendergast, M. A.; Terry, A. V. Jr

    1999-01-01

    A substantial body of evidence indicates that aged-related changes in the fluidity and lipid composition of the plasma membrane contribute to cellular dysfunction in humans and other mammalian species. In the CNS, reductions in neuronal plasma membrane order (PMO) (i.e., increased plasma membrane fluidity) have been attributed to age as well as the presence of the beta-amyloid peptide-25-35, known to play an important role in the neuropathology of Alzheimer's disease (AD). These PMO increases may influence neurotransmitter synthesis, receptor binding, and second messenger systems as well as signal transduction pathways. The effects of neuronal PMO on learning and memory processes have not been adequately investigated, however. Based on the hypothesis that an increase in PMO may alter a number of aspects of synaptic transmission, we investigated several neurochemical and behavioral effects of the membrane ordering agent, PF-68. In cell culture, PF-68 (nmoles/mg SDS extractable protein) reduced [3H]norepinephrine (NE) uptake into differentiated PC-12 cells as well as reduced nicotine stimulated [3H]NE release. The compound (800-2400 microg/kg, i.p., resulting in nmoles/mg SDS extractable protein in the brain) decreased step-through latencies and increased the frequencies of crossing into the unsafe side of the chamber in inhibitory avoidance training. In the Morris water maze, PF-68 increased the latencies and swim distances required to locate a hidden platform and reduced the time spent and distance swam in the previous target quadrant during transfer (probe) trials. PF-68 did not impair performance of a well-learned working memory task, the rat delayed stimulus discrimination task (DSDT), however. Studies with 14C-labeled PF-68 indicated that significant (pmoles/mg wet tissue) levels of the compound entered the brain from peripheral (i.p.) injection. 
No PF-68 related changes were observed in swim speeds or in visual acuity tests in water maze experiments, rotorod…

  5. Delayed audiovisual integration of patients with mild cognitive impairment and Alzheimer's disease compared with normal aged controls.

    PubMed

    Wu, Jinglong; Yang, Jiajia; Yu, Yinghua; Li, Qi; Nakamura, Naoya; Shen, Yong; Ohta, Yasuyuki; Yu, Shengyuan; Abe, Koji

    2012-01-01

    The human brain can anatomically combine task-relevant information from different sensory pathways to form a unified perception; this process is called multisensory integration. The aim of the present study was to test whether the multisensory integration abilities of patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD) differed from those of normal aged controls (NC). A total of 64 subjects were divided into three groups: NC individuals (n = 24), MCI patients (n = 19), and probable AD patients (n = 21). All of the subjects were asked to perform three separate audiovisual integration tasks and were instructed to press the response key associated with the auditory, visual, or audiovisual stimuli in the three tasks. The accuracy and response time (RT) of each task were measured, and the RTs were analyzed using cumulative distribution functions to observe the audiovisual integration. Our results suggest that the mean RT of patients with AD was significantly longer than those of patients with MCI and NC individuals. Interestingly, we found that patients with both MCI and AD exhibited adequate audiovisual integration, and a greater peak (time bin with the highest percentage of benefit) and broader temporal window (time duration of benefit) of multisensory enhancement were observed. However, the onset time and peak benefit of audiovisual integration in MCI and AD patients occurred significantly later than did those of the NC. This finding indicates that the cognitive functional deficits of patients with MCI and AD contribute to the differences in performance enhancements of audiovisual integration compared with NC.
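
    The cumulative-distribution-function analysis of response times described above is commonly operationalized against a race-model bound; the sketch below illustrates the idea on synthetic RTs (the distributions, sample sizes, and the 200-800 ms grid are illustrative assumptions, not values from this study):

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative distribution of response times on a time grid."""
    rts = np.asarray(rts)
    return np.array([np.mean(rts <= t) for t in grid])

def race_model_benefit(rt_a, rt_v, rt_av, grid):
    """Audiovisual benefit per time bin: how far the AV response-time CDF
    exceeds the race-model bound min(F_A(t) + F_V(t), 1)."""
    f_a, f_v, f_av = ecdf(rt_a, grid), ecdf(rt_v, grid), ecdf(rt_av, grid)
    return f_av - np.minimum(f_a + f_v, 1.0)

rng = np.random.default_rng(0)
grid = np.arange(200, 801, 10)        # 200-800 ms in 10 ms bins (illustrative)
rt_a = rng.normal(450, 60, 500)       # synthetic auditory RTs (ms)
rt_v = rng.normal(480, 60, 500)       # synthetic visual RTs
rt_av = rng.normal(400, 55, 500)      # synthetic audiovisual RTs

benefit = race_model_benefit(rt_a, rt_v, rt_av, grid)
peak_bin = grid[np.argmax(benefit)]   # time bin with the largest benefit
window = grid[benefit > 0]            # temporal window of enhancement
```

    A positive `benefit` in a bin marks multisensory enhancement; the bin of its maximum and the span of positive bins correspond to the "peak" and "temporal window" measures discussed in the abstract.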

  6. Audio-Visual Aid in Teaching "Fatty Liver"

    ERIC Educational Resources Information Center

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-01-01

    Use of audio visual tools to aid in medical education is ever on a rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…

  7. Psychometric analysis of the audiovisual taxonomy for assessing pain behavior in chronic back-pain patients.

    PubMed

    Kleinke, C L; Spangler, A S

    1988-02-01

    Sixty chronic back-pain patients were administered the audiovisual taxonomy of pain behavior during their first and last weeks in an inpatient multidisciplinary pain clinic. Audiovisual total score provided a useful index of pain behavior with a suitable frequency and reliability, while offering unique variance as a measure of treatment outcome. Patients' pain behaviors upon admission to the pain program were positively correlated with the following background variables: receiving workers' compensation, pounds overweight, and number of back surgeries. Patients' pain behaviors upon completion of the pain program were significantly correlated with their preferences for pain treatment modalities. High levels of pain behavior correlated with a preference for treatments of ice and heat. Low levels of pain behavior correlated with a preference for physical therapy, social work, lectures, and relaxation. It was suggested that treatment outcome in a multidisciplinary pain clinic is more immediately related to patients' coping styles and their choice of pain treatment modalities than to their demographics and personalities.

  8. The Audio-Visual Temporal Binding Window Narrows In Early Childhood

    PubMed Central

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual temporal binding window in 4-, 5-, and 6-year-old children (total N=120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked whether the voice and face went together (Experiment 1) or whether the desynchronized videos differed from the synchronized one (Experiment 2). Four-year-olds detected the 666 ms asynchrony, 5-year-olds detected the 666 and 500 ms asynchrony, and 6-year-olds detected all asynchronies. These results show that the audio-visual temporal binding window narrows slowly during early childhood and that it is still wider at six years of age than in older children and adults. PMID:23888869

  9. A comparison between audio and audiovisual distraction techniques in managing anxious pediatric dental patients.

    PubMed

    Prabhakar, A R; Marwah, N; Raju, O S

    2007-01-01

    Pain is not the sole reason for fear of dentistry. Anxiety, or the fear of the unknown during dental treatment, is a major factor, and it has long been a major concern for dentists. Therefore, the main aim of this study was to evaluate and compare two distraction techniques, viz., audio distraction and audiovisual distraction, in the management of anxious pediatric dental patients. Sixty children aged between 4-8 years were divided into three groups. Each child had four dental visits--screening visit, prophylaxis visit, cavity preparation and restoration visit, and extraction visit. The child's anxiety level in each visit was assessed using a combination of four measures: Venham's picture test, Venham's rating of clinical anxiety, pulse rate, and oxygen saturation. The values obtained were tabulated and subjected to statistical analysis. It was concluded that the audiovisual distraction technique was more effective in managing anxious pediatric dental patients as compared to the audio distraction technique.

  10. Audio-visual multisensory integration in superior parietal lobule revealed by human intracranial recordings.

    PubMed

    Molholm, Sophie; Sehatpour, Pejman; Mehta, Ashesh D; Shpaner, Marina; Gomez-Ramirez, Manuel; Ortigue, Stephanie; Dyke, Jonathan P; Schwartz, Theodore H; Foxe, John J

    2006-08-01

    Intracranial recordings from three human subjects provide the first direct electrophysiological evidence for audio-visual multisensory processing in the human superior parietal lobule (SPL). Auditory and visual sensory inputs project to the same highly localized region of the parietal cortex with auditory inputs arriving considerably earlier (30 ms) than visual inputs (75 ms). Multisensory integration processes in this region were assessed by comparing the response to simultaneous audio-visual stimulation with the algebraic sum of responses to the constituent auditory and visual unisensory stimulus conditions. Significant integration effects were seen with almost identical morphology across the three subjects, beginning between 120 and 160 ms. These results are discussed in the context of the role of SPL in supramodal spatial attention and sensory-motor transformations.
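
    The additive-model comparison described above (response to simultaneous AV stimulation vs. the algebraic sum of the unisensory responses) can be sketched on synthetic event-related responses; the waveform shapes, peak latencies, and the 0.05 criterion below are illustrative assumptions only:

```python
import numpy as np

def integration_effect(erp_av, erp_a, erp_v):
    """Additive-model test: multisensory response minus the algebraic sum
    of the unisensory responses. Nonzero values index integration."""
    return erp_av - (erp_a + erp_v)

# Synthetic event-related responses sampled at 1 kHz over 0-300 ms
t = np.arange(0, 300)                     # ms
erp_a = np.exp(-((t - 90) ** 2) / 800)    # auditory component peaks ~90 ms
erp_v = np.exp(-((t - 135) ** 2) / 800)   # visual component peaks ~135 ms
# Simulated AV response: the sum plus a superadditive deflection near 140 ms
erp_av = erp_a + erp_v + 0.3 * np.exp(-((t - 140) ** 2) / 400)

effect = integration_effect(erp_av, erp_a, erp_v)
onset = t[np.argmax(effect > 0.05)]       # first sample exceeding the criterion
```

    The onset of a sustained nonzero `effect` is the analogue of the 120-160 ms integration onset reported in the abstract.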

  11. SU-E-J-235: Audiovisual Biofeedback Improves the Correlation Between Internal and External Respiratory Motion

    SciTech Connect

    Lee, D; Pollock, S; Keall, P; Greer, P; Ludbrook, J; Paganelli, C; Kim, T

    2015-06-15

    Purpose: External respiratory surrogates are often used to predict internal lung tumor motion for beam gating, but the assumed correlation between external and internal surrogates is not always verified, resulting in amplitude mismatch and time shift. We tested the hypothesis that audiovisual (AV) biofeedback improves the correlation between internal and external respiratory motion, in order to improve the accuracy of respiratory-gated treatments for lung cancer radiotherapy. Methods: In nine lung cancer patients, 2D coronal and sagittal cine-MR images were acquired across two MRI sessions (pre- and mid-treatment) with (1) free breathing (FB) and (2) AV biofeedback. External anterior-posterior (AP) respiratory motions of (a) the chest and (b) the abdomen were simultaneously acquired with a physiological measurement unit (PMU, 3T Skyra, Siemens Healthcare, Erlangen, Germany) and the real-time position management (RPM) system (Varian, Palo Alto, USA), respectively. Internal superior-inferior (SI) respiratory motions of (c) the lung tumor (i.e., the centroid of the auto-segmented lung tumor) and (d) the diaphragm (i.e., the upper liver dome) were measured from individual cine-MR images across 32 datasets. The four respiratory motions were then synchronized with the cine-MR image acquisition times. Correlation coefficients were calculated over time for three pairs of respiratory motions: (1) chest-abdomen, (2) abdomen-diaphragm, and (3) diaphragm-lung tumor. The three combinations were compared between FB and AV biofeedback. Results: Compared to FB, AV biofeedback improved the chest-abdomen correlation by 17% (p=0.005), from 0.75±0.23 to 0.90±0.05, and the abdomen-diaphragm correlation by 4% (p=0.058), from 0.91±0.11 to 0.95±0.05. Compared to FB, AV biofeedback improved the diaphragm-lung tumor correlation by 12% (p=0.023), from 0.65±0.21 to 0.74±0.16. Conclusions: Our results demonstrate that AV biofeedback significantly improved the correlation between internal and external respiratory motion, thus supporting more accurate respiratory-gated treatments for lung cancer radiotherapy.
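
    Per signal pair, the internal-external analysis reduces to a Pearson correlation over time-synchronized traces. A minimal sketch on synthetic breathing data (the traces, breathing period, drift, and noise levels are hypothetical, not patient data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two synchronized respiratory traces."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 600)                  # 60 s sampled at 10 Hz
diaphragm = np.sin(2 * np.pi * t / 4)        # internal motion, ~4 s period
# Free breathing: external abdomen trace with slow drift and noise
abdomen_fb = (diaphragm + 0.4 * np.sin(2 * np.pi * t / 30)
              + 0.3 * rng.normal(size=t.size))
# AV biofeedback: guided breathing tracks the internal motion more closely
abdomen_av = diaphragm + 0.1 * rng.normal(size=t.size)

r_fb = pearson_r(diaphragm, abdomen_fb)      # lower internal-external correlation
r_av = pearson_r(diaphragm, abdomen_av)      # higher under biofeedback
```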

  12. Top-down attention regulates the neural expression of audiovisual integration.

    PubMed

    Morís Fernández, Luis; Visser, Maya; Ventura-Campos, Noelia; Ávila, César; Soto-Faraco, Salvador

    2015-10-01

    The interplay between attention and multisensory integration has proven to be a difficult question to tackle. There are almost as many studies showing that multisensory integration occurs independently from the focus of attention as studies implying that attention has a profound effect on integration. Addressing the neural expression of multisensory integration for attended vs. unattended stimuli can help disentangle this apparent contradiction. In the present study, we examine whether selective attention to sound pitch influences the expression of audiovisual integration in both behavior and neural activity. Participants were asked to attend to one of two auditory speech streams while watching a pair of talking lips that could be congruent or incongruent with the attended speech stream. We measured behavioral and neural responses (fMRI) to multisensory stimuli under attended and unattended conditions while physical stimulation was kept constant. Our results indicate that participants recognized words more accurately from an auditory stream that was both attended and audiovisually (AV) congruent, thus reflecting a benefit due to AV integration. On the other hand, no enhancement was found for AV congruency when it was unattended. Furthermore, the fMRI results indicated that activity in the superior temporal sulcus (an area known to be related to multisensory integration) was contingent on attention as well as on audiovisual congruency. This attentional modulation extended beyond heteromodal areas to affect processing in areas classically recognized as unisensory, such as the superior temporal gyrus or the extrastriate cortex, and to non-sensory areas such as the motor cortex. Interestingly, attention to audiovisual incongruence triggered responses in brain areas related to conflict processing (i.e., the anterior cingulate cortex and the anterior insula). Based on these results, we hypothesize that AV speech integration can take place automatically only when both sensory streams are attended.

  13. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults

    PubMed Central

    Smayda, Kirsten E.; Van Engen, Kristin J.; Maddox, W. Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18–35) and thirty-three older adults (ages 60–90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across signal-to-noise ratios (SNRs), modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when supportive visual and semantic cues are available.

  14. Order Effects of Learning with Modeling and Simulation Software on Field-Dependent and Field-Independent Children's Cognitive Performance: An Interaction Effect

    ERIC Educational Resources Information Center

    Angeli, Charoula; Valanides, Nicos; Polemitou, Eirini; Fraggoulidou, Elena

    2014-01-01

    The study examined the interaction between field dependence-independence (FD/I) and learning with modeling software and simulations, and their effect on children's performance. Participants were randomly assigned into two groups. Group A first learned with a modeling tool and then with simulations. Group B learned first with simulations and then…

  15. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    PubMed

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  16. Audiovisual integration in near and far space: effects of changes in distance and stimulus effectiveness.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W; Van der Smagt, M J

    2016-05-01

    A factor that is often not considered in multisensory research is the distance from which information is presented. Interestingly, various studies have shown that the distance at which information is presented can modulate the strength of multisensory interactions. In addition, our everyday multisensory experience in near and far space is rather asymmetrical in terms of retinal image size and stimulus intensity. This asymmetry is the result of the relation between the stimulus-observer distance and its retinal image size and intensity: an object that is further away is generally smaller on the retina as compared to the same object when it is presented nearer. Similarly, auditory intensity decreases as the distance from the observer increases. We investigated how each of these factors alone, and their combination, affected audiovisual integration. Unimodal and bimodal stimuli were presented in near and far space, with and without controlling for distance-dependent changes in retinal image size and intensity. Audiovisual integration was enhanced for stimuli that were presented in far space as compared to near space, but only when the stimuli were not corrected for visual angle and intensity. The same decrease in intensity and retinal size in near space did not enhance audiovisual integration, indicating that these results cannot be explained by changes in stimulus efficacy or an increase in distance alone, but rather by an interaction between these factors. The results are discussed in the context of multisensory experience and spatial uncertainty, and underline the importance of studying multisensory integration in the depth space.
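
    The distance-size-intensity relation described above can be made concrete: the visual angle of a fixed-size object shrinks roughly with 1/distance, and intensity falls off with 1/distance². A small worked sketch (the 0.8 m / 3.2 m distances and 10 cm stimulus size are arbitrary illustrative values, not the study's stimuli):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle (degrees) subtended by an object of a given physical size."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

def relative_intensity(distance_m, reference_m=1.0):
    """Inverse-square falloff of light/sound intensity with distance."""
    return (reference_m / distance_m) ** 2

near, far = 0.8, 3.2                   # hypothetical near/far distances (m)
size = 0.10                            # 10 cm stimulus

angle_near = visual_angle_deg(size, near)   # larger retinal image in near space
angle_far = visual_angle_deg(size, far)     # roughly 4x smaller in far space
gain = relative_intensity(near) / relative_intensity(far)  # 16x intensity ratio
```

    Equating these quantities across near and far space is exactly the "corrected for visual angle and intensity" manipulation that abolished the far-space enhancement in the abstract.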

  17. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception

    PubMed Central

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-01-01

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs’ response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs’ early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception. PMID:27734953

  18. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

    This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content such as face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval through a proprietary XQuery-based search engine, accessible to members of the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor saying a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.

  19. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech

    PubMed Central

    García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

    2015-01-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal. PMID:27551361

  20. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech.

    PubMed

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2015-12-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.

  1. Commissioning and quality assurance for a respiratory training system based on audiovisual biofeedback.

    PubMed

    Cui, Guoqiang; Gopalan, Siddharth; Yamamoto, Tokihiro; Berger, Jonathan; Maxim, Peter G; Keall, Paul J

    2010-07-12

    A respiratory training system based on audiovisual biofeedback has been implemented at our institution. It is intended to improve patients' respiratory regularity during four-dimensional (4D) computed tomography (CT) image acquisition. The purpose is to help eliminate the artifacts in 4D-CT images caused by irregular breathing, as well as improve delivery efficiency during treatment, where respiratory irregularity is a concern. This article describes the commissioning and quality assurance (QA) procedures developed for this peripheral respiratory training system, the Stanford Respiratory Training (START) system. Using the Varian real-time position management system for the respiratory signal input, the START software was commissioned and able to acquire sample respiratory traces, create a patient-specific guiding waveform, and generate audiovisual signals for improving respiratory regularity. Routine QA tests that include hardware maintenance, visual guiding-waveform creation, auditory sounds synchronization, and feedback assessment, have been developed for the START system. The QA procedures developed here for the START system could be easily adapted to other respiratory training systems based on audiovisual biofeedback.
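
    One step in such commissioning is creating a patient-specific guiding waveform from sample respiratory traces. The sketch below is a hypothetical simplification that averages consecutive breathing cycles of a synthetic trace; the START system's actual waveform-creation algorithm is not described in the abstract, so this is only an assumed scheme:

```python
import numpy as np

def guiding_waveform(trace, period_samples, n_cycles):
    """Average consecutive breathing cycles of a sample trace into a single
    patient-specific guiding cycle (assumed simplification of the
    visual guiding-waveform creation step)."""
    cycles = trace[: period_samples * n_cycles].reshape(n_cycles, period_samples)
    return cycles.mean(axis=0)

rng = np.random.default_rng(2)
period, n = 100, 8                     # 100 samples per ~4 s breathing cycle
phase = np.tile(np.linspace(0, 2 * np.pi, period, endpoint=False), n)
trace = np.sin(phase) + 0.2 * rng.normal(size=phase.size)  # noisy sample breathing

guide = guiding_waveform(trace, period, n)  # smoother, regular target cycle
```

    Displaying `guide` as the visual target and cueing its inhale/exhale turning points with audio tones is the general idea behind the audiovisual biofeedback loop.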

  2. Visual and audiovisual effects of isochronous timing on visual perception and brain activity.

    PubMed

    Marchant, Jennifer L; Driver, Jon

    2013-06-01

    Understanding how the brain extracts and combines temporal structure (rhythm) information from events presented to different senses remains unresolved. Many neuroimaging beat perception studies have focused on the auditory domain and show that the presence of a highly regular beat (isochrony) in "auditory" stimulus streams enhances neural responses in a distributed brain network and affects perceptual performance. Here, we acquired functional magnetic resonance imaging (fMRI) measurements of brain activity while healthy human participants performed a visual task on isochronous versus randomly timed "visual" streams, with or without concurrent task-irrelevant sounds. We found that visual detection of higher intensity oddball targets was better for isochronous than randomly timed streams, extending previous auditory findings to vision. The impact of isochrony on visual target sensitivity correlated positively with fMRI signal changes not only in visual cortex but also in auditory sensory cortex during audiovisual presentations. Visual isochrony activated a similar timing-related brain network to that previously found primarily in auditory beat perception work. Finally, activity in the multisensory left posterior superior temporal sulcus increased specifically during concurrent isochronous audiovisual presentations. These results indicate that regular isochronous timing can modulate visual processing and that this can also involve multisensory audiovisual brain mechanisms.

  3. Neural oscillations in the temporal pole for a temporally congruent audio-visual speech detection task

    PubMed Central

    Ohki, Takefumi; Gunji, Atsuko; Takei, Yuichi; Takahashi, Hidetoshi; Kaneko, Yuu; Kita, Yosuke; Hironaga, Naruhito; Tobimatsu, Shozo; Kamio, Yoko; Hanakawa, Takashi; Inagaki, Masumi; Hiraki, Kazuo

    2016-01-01

    Though recent studies have elucidated the earliest mechanisms of processing in multisensory integration, our understanding of how multisensory integration of more sustained and complicated stimuli is implemented in higher-level association cortices is lacking. In this study, we used magnetoencephalography (MEG) to determine how neural oscillations alter local and global connectivity during multisensory integration processing. We acquired MEG data from 15 healthy volunteers performing an audio-visual speech matching task. We selected regions of interest (ROIs) using whole brain time-frequency analyses (power spectrum density and wavelet transform), then applied phase amplitude coupling (PAC) and imaginary coherence measurements to them. We identified prominent delta band power in the temporal pole (TP), and a remarkable PAC between delta band phase and beta band amplitude. Furthermore, imaginary coherence analysis demonstrated that the temporal pole and well-known multisensory areas (e.g., posterior parietal cortex and post-central areas) are coordinated through delta-phase coherence. Thus, our results suggest that modulation of connectivity within the local network, and of that between the local and global network, is important for audio-visual speech integration. In short, these neural oscillatory mechanisms within and between higher-level association cortices provide new insights into the brain mechanism underlying audio-visual integration. PMID:27897244
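
    Phase-amplitude coupling of the kind reported here (delta-band phase modulating beta-band amplitude) is often quantified with a mean-vector-length index. The sketch below applies that index to synthetic signals in which the coupling is built in by construction; the frequencies and modulation depth are illustrative, not values from the study:

```python
import numpy as np
from scipy.signal import hilbert

def pac_mvl(phase_signal, amp_signal):
    """Mean-vector-length phase-amplitude coupling: the low-frequency phase
    (Hilbert angle) weighted by the high-frequency amplitude envelope."""
    phase = np.angle(hilbert(phase_signal))
    amp = np.abs(hilbert(amp_signal))
    return float(np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp))

fs = 500
t = np.arange(0, 10, 1 / fs)                  # 10 s at 500 Hz
delta = np.sin(2 * np.pi * 2 * t)             # 2 Hz (delta-band) carrier
# 20 Hz (beta-band) signal whose amplitude follows the delta phase (coupled)
beta_coupled = (1 + 0.8 * np.cos(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 20 * t)
beta_flat = np.sin(2 * np.pi * 20 * t)        # no amplitude modulation

mi_coupled = pac_mvl(delta, beta_coupled)     # substantial coupling index
mi_flat = pac_mvl(delta, beta_flat)           # near zero
```

    In real MEG analyses the two inputs would first be band-pass filtered into the delta and beta ranges before the Hilbert transform; that step is omitted here because the synthetic signals are already narrowband.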

  4. Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models

    PubMed Central

    2016-01-01

    Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance perception are tested against several multisensory models, including a modified causal inference model that also predicts the distributions of the estimates. In our study, the audiovisual perception of distance was overall better explained by Bayesian causal inference than by other traditional models, such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m. PMID:27959919
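
    A Bayesian causal inference model of this general family (Gaussian cue likelihoods, a prior probability of a common cause, and a probability-matching readout) can be sketched as follows. All parameter values are illustrative assumptions, not fits from the study:

```python
import numpy as np

def posterior_common(xa, xv, sa, sv, sp, mu_p, p_common):
    """Posterior probability that auditory and visual distance cues share a
    common cause (Gaussian likelihoods, Gaussian prior over distance)."""
    var_c = sa**2 * sv**2 + sa**2 * sp**2 + sv**2 * sp**2
    like_c = np.exp(-0.5 * ((xa - xv)**2 * sp**2
                            + (xa - mu_p)**2 * sv**2
                            + (xv - mu_p)**2 * sa**2) / var_c) \
             / (2 * np.pi * np.sqrt(var_c))
    like_i = np.exp(-0.5 * ((xa - mu_p)**2 / (sa**2 + sp**2)
                            + (xv - mu_p)**2 / (sv**2 + sp**2))) \
             / (2 * np.pi * np.sqrt((sa**2 + sp**2) * (sv**2 + sp**2)))
    return like_c * p_common / (like_c * p_common + like_i * (1 - p_common))

def distance_estimate(xa, xv, sa, sv, sp, mu_p, p_common, rng):
    """Auditory distance estimate under probability matching: report the
    fused estimate with probability equal to the common-cause posterior."""
    post = posterior_common(xa, xv, sa, sv, sp, mu_p, p_common)
    fused = ((xa / sa**2 + xv / sv**2 + mu_p / sp**2)
             / (1 / sa**2 + 1 / sv**2 + 1 / sp**2))
    seg_a = (xa / sa**2 + mu_p / sp**2) / (1 / sa**2 + 1 / sp**2)
    return fused if rng.random() < post else seg_a

rng = np.random.default_rng(0)
# Vision more reliable than audition (sv < sa); prior centered at 2 m
p_near = posterior_common(2.5, 2.5, 1.0, 0.3, 2.0, 2.0, 0.5)  # cues agree
p_far = posterior_common(5.0, 1.0, 1.0, 0.3, 2.0, 2.0, 0.5)   # cues conflict
est = distance_estimate(2.8, 2.5, 1.0, 0.3, 2.0, 2.0, 0.5, rng)
```

    The "window of interaction" in the abstract corresponds to the cue-discrepancy range over which the common-cause posterior stays high enough for fusion to dominate.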

  5. Identifying Core Affect in Individuals from fMRI Responses to Dynamic Naturalistic Audiovisual Stimuli.

    PubMed

    Kim, Jongwan; Wang, Jing; Wedell, Douglas H; Shinkareva, Svetlana V

    2016-01-01

    Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli.
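
    Within-participant identification of affect from distributed activity patterns can be illustrated with a leave-one-trial-out classifier on synthetic "voxel" data. The nearest-centroid stand-in below, and all data parameters, are assumptions for illustration, not the study's actual pipeline:

```python
import numpy as np

def loo_nearest_centroid(patterns, labels):
    """Leave-one-trial-out identification accuracy with a nearest-centroid
    classifier on trial-by-voxel activity patterns."""
    patterns, labels = np.asarray(patterns, float), np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        mask = np.arange(len(labels)) != i          # hold out trial i
        classes = np.unique(labels[mask])
        centroids = [patterns[mask & (labels == c)].mean(axis=0)
                     for c in classes]
        dists = [np.linalg.norm(patterns[i] - c) for c in centroids]
        correct += classes[int(np.argmin(dists))] == labels[i]
    return correct / len(labels)

rng = np.random.default_rng(3)
n_trials, n_voxels = 40, 120
valence = np.repeat([0, 1], n_trials // 2)          # negative vs positive trials
signal = rng.normal(size=n_voxels)                  # valence-coding voxel pattern
patterns = (rng.normal(size=(n_trials, n_voxels))   # trial noise
            + 0.8 * np.outer(valence, signal))      # + class-dependent signal

acc = loo_nearest_centroid(patterns, valence)       # above the 0.5 chance level
```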

  6. Brain mechanisms that underlie the effects of motivational audiovisual stimuli on psychophysiological responses during exercise.

    PubMed

    Bigliassi, Marcelo; Silva, Vinícius B; Karageorghis, Costas I; Bird, Jonathan M; Santos, Priscila C; Altimari, Leandro R

    2016-05-01

    Motivational audiovisual stimuli such as music and video have been widely used in the realm of exercise and sport as a means by which to increase situational motivation and enhance performance. The present study addressed the mechanisms that underlie the effects of motivational stimuli on psychophysiological responses and exercise performance. Twenty-two participants completed fatiguing isometric handgrip-squeezing tasks under two experimental conditions (motivational audiovisual condition and neutral audiovisual condition) and a control condition. Electrical activity in the brain and working muscles was analyzed by use of electroencephalography and electromyography, respectively. Participants were asked to squeeze the dynamometer maximally for 30 s. A single-item motivation scale was administered after each squeeze. Results indicated that task performance and situational motivation were superior under the influence of motivational stimuli when compared to the other two conditions (~20% and ~25%, respectively). The motivational stimulus downregulated the predominance of low-frequency waves (theta) in the right frontal regions of the cortex (F8), and upregulated high-frequency waves (beta) in the central areas (C3 and C4). It is suggested that motivational sensory cues serve to readjust electrical activity in the brain; a mechanism by which the detrimental effects of fatigue on the efferent control of working muscles are ameliorated.

  7. Identifying Core Affect in Individuals from fMRI Responses to Dynamic Naturalistic Audiovisual Stimuli

    PubMed Central

    Kim, Jongwan; Wang, Jing; Wedell, Douglas H.

    2016-01-01

    Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli. PMID:27598534

  8. Time-of-day and attentional-order influences on dichotic processing of digits in learning disabled and normal achieving children.

    PubMed

    Morton, L L; Kershner, J R

    1993-01-01

    A heterogeneous group of 26 learning disabled (LD) and 30 normal achieving (NA) children responded to a dichotic listening task using digits in morning and afternoon settings. Attentional order (i.e., right ear first versus left ear first) interacted with (1) Time-of-Day and (2) Group and Ear Attended. The first interaction revealed, as predicted, higher morning performance for subjects directed to attend right first. Subjects directed left first showed higher afternoon performance. These results are consistent with enhanced left hemisphere involvement after left hemisphere priming in the morning, and after right hemisphere priming in the afternoon. The second interaction indicated that the LD group had more difficulty than controls switching attention to the right ear when instructed to attend left first. LD children may activate the right hemisphere (via left hemispace attending) and have difficulty with subsequent right hemisphere inhibition, or left hemisphere activation, when shifting to right ear attending. Nonparametric tests revealed a greater incidence of lateralized responders in the morning for normal achievers attending left first. Findings are seen to augment previous research.

  9. Investigating the impact of audio instruction and audio-visual biofeedback for lung cancer radiation therapy

    NASA Astrophysics Data System (ADS)

    George, Rohini

    Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS facts & figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available which can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is inhibited by respiration irregularity. The rationale of this thesis was to study the improvement in regularity of respiratory motion achieved by breathing coaching of lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory gated radiotherapy. It was also observed that duty cycles below 30% showed insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. Modeling the respiratory cycles, it was found that cosine and cosine^4 models had the best correlation with individual respiratory cycles.
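
    The cosine-versus-cosine^4 model comparison can be illustrated with a quick least-squares fit. This is a minimal sketch on a synthetic breathing trace (an assumption for demonstration, not patient data):

```python
import numpy as np

# Synthetic respiratory trace shaped like a cos^4 cycle (in the spirit of
# Lujan-style cos^(2n) breathing models); amplitude and period are made up.
period = 4.0                       # breathing period in seconds
t = np.linspace(0.0, 8.0, 400)     # two cycles
omega = 2 * np.pi / period
trace = 1.0 - 0.8 * np.cos(omega * t / 2) ** 4   # cos^4 repeats every `period`

def fit_r2(basis):
    """Least-squares fit of trace ~ a + b * basis; returns R^2."""
    X = np.column_stack([np.ones_like(t), basis])
    coef, *_ = np.linalg.lstsq(X, trace, rcond=None)
    resid = trace - X @ coef
    return 1.0 - resid.var() / trace.var()

r2_cos = fit_r2(np.cos(omega * t))             # plain cosine model
r2_cos4 = fit_r2(np.cos(omega * t / 2) ** 4)   # cosine^4 model
```

    On this trace the cosine^4 basis fits essentially perfectly, while the plain cosine misses the flattened part of the cycle, consistent with the preference for higher-power cosine models of individual respiratory cycles.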

  10. Benefits for Voice Learning Caused by Concurrent Faces Develop over Time.

    PubMed

    Zäske, Romi; Mühl, Constanze; Schweinberger, Stefan R

    2015-01-01

    Recognition of personally familiar voices benefits from the concurrent presentation of the corresponding speakers' faces. This effect of audiovisual integration is most pronounced for voices combined with dynamic articulating faces. However, it is unclear if learning unfamiliar voices also benefits from audiovisual face-voice integration or, alternatively, is hampered by attentional capture of faces, i.e., "face-overshadowing". In six study-test cycles we compared the recognition of newly-learned voices following unimodal voice learning vs. bimodal face-voice learning with either static (Exp. 1) or dynamic articulating faces (Exp. 2). Voice recognition accuracies significantly increased for bimodal learning across study-test cycles while remaining stable for unimodal learning, as reflected in numerical costs of bimodal relative to unimodal voice learning in the first two study-test cycles and benefits in the last two cycles. This was independent of whether faces were static images (Exp. 1) or dynamic videos (Exp. 2). In both experiments, slower reaction times to voices previously studied with faces compared to voices only may result from visual search for faces during memory retrieval. A general decrease of reaction times across study-test cycles suggests facilitated recognition with more speaker repetitions. Overall, our data suggest two simultaneous and opposing mechanisms during bimodal face-voice learning: while attentional capture of faces may initially impede voice learning, audiovisual integration may facilitate it thereafter.

  11. Using Videos and Multimodal Discourse Analysis to Study How Students Learn a Trade

    ERIC Educational Resources Information Center

    Chan, Selena

    2013-01-01

    The use of video to assist with ethnographically based research is not a new phenomenon. Recent advances in technology have reduced the costs and technical expertise required to use videos for gathering research data. Audio-visual records of learning activities as they take place allow for many non-vocal and inter-personal communication…

  12. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    ERIC Educational Resources Information Center

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  13. Multimedia Integration for Language e-Learning: Content, Context and the e-Dossier

    ERIC Educational Resources Information Center

    Sanchez-Villalon, Pedro Pablo; Ortega, Manuel; Sanchez-Villalon, Asuncion

    2010-01-01

    In the education world, it is widely accepted that language learning is one of the pioneering disciplines in the application and use of the information and communication technologies, initially preceded by the widespread use of audiovisual resources which, finally integrated in the digital space, bring about the use of multimedia. Additionally,…

  14. The Development of a Learning Materials Selection Policy for Austin Community College.

    ERIC Educational Resources Information Center

    Lamar, Christine

    This paper chronicles the development of a materials selection policy for the library and audio/visual software components of Austin Community College's (ACC) Learning Resource System. The policy would establish authority for selection decisions, the intellectual framework within which decisions are made, and selection criteria and guidelines. It…

  15. German as a Second Language: Annotated Bibliography of Learning Resources, Grades 1-12.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton. Language Services Branch.

    The annotated bibliography of print and non-print materials for students and teachers of German includes standard student texts, audiovisual materials, student and teacher references, and other media. It is intended to guide teachers in the selection of student and instructional materials for the teaching and learning of German at the elementary…

  16. Input Skewedness, Consistency, and Order of Frequent Verbs in Frequency-Driven Second Language Construction Learning: A Replication and Extension of Casenhiser and Goldberg (2005) to Adult Second Language Acquisition

    ERIC Educational Resources Information Center

    Nakamura, Daisuke

    2012-01-01

    Recent usage-based research on language acquisition has found that three frequency manipulations facilitate construction learning in children: (1) skewed input (Casenhiser & Goldberg 2005), (2) input consistency (Childers & Tomasello 2001), and (3) order of frequent verbs (Goldberg, Casenhiser, & White 2007). The present paper addresses…

  17. Processing of Audiovisually Congruent and Incongruent Speech in School-Age Children with a History of Specific Language Impairment: A Behavioral and Event-Related Potentials Study

    ERIC Educational Resources Information Center

    Kaganovich, Natalya; Schumaker, Jennifer; Macias, Danielle; Gustafson, Dana

    2015-01-01

    Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we…

  18. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    ERIC Educational Resources Information Center

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  19. A Study of Teacher Prepared Classroom Audio and Visual Materials and Courses Available to South Bend Area Teachers in Audiovisual Production.

    ERIC Educational Resources Information Center

    Sayles, Ellen L.

    A study was made to find out the average amount of time that teachers in South Bend, Indiana spent designing audiovisual aids and to determine their awareness of the availability of audiovisual production classes. A questionnaire was sent to 30% of the teachers of grades 1-6 asking the amount of time they normally spent producing audiovisual…

  20. Problem-Based Learning Associated by Action-Process-Object-Schema (APOS) Theory to Enhance Students' High Order Mathematical Thinking Ability

    ERIC Educational Resources Information Center

    Mudrikah, Achmad

    2016-01-01

    The research has shown a model of learning activities that can be used to stimulate reflective abstraction in students. Reflective abstraction, a method of constructing knowledge in Action-Process-Object-Schema theory that is expected to occur when students are engaged in learning activities, will be able to encourage students to make the process of…

  1. What Makes the Difference? Teachers Explore What Must Be Taught and What Must Be Learned in Order to Understand the Particulate Character of Matter

    ERIC Educational Resources Information Center

    Vikström, Anna

    2014-01-01

    The concept of matter, especially its particulate nature, is acknowledged as being one of the key concept areas in learning science. Within the framework of learning studies and variation theory, and with results from science education research as a starting point, six lower secondary school science teachers tried to enhance students'…

  2. Searching for the Hebb Effect in Down Syndrome: Evidence for a Dissociation between Verbal Short-Term Memory and Domain-General Learning of Serial Order

    ERIC Educational Resources Information Center

    Mosse, E. K.; Jarrold, C.

    2010-01-01

    Background: The Hebb effect is a form of repetition-driven long-term learning that is thought to provide an analogue for the processes involved in new word learning. Other evidence suggests that verbal short-term memory also constrains new vocabulary acquisition, but if the Hebb effect is independent of short-term memory, then it may be possible…

  3. Audiovisual Resources in Formal and Informal Learning: Spanish and Mexican Students' Attitudes

    ERIC Educational Resources Information Center

    Fombona, Javier; Pascual, Maria Angeles

    2013-01-01

    This research analyses the evolution in the effectiveness of media messages and aims to optimize the use of ICTs in educational settings. The cultural impact of television and multimedia resources is increasing as they move to the Internet with ever greater quality. The integration of visual narrative techniques with multimedia playback…

  4. Learning to match auditory and visual speech cues: social influences on acquisition of phonological categories.

    PubMed

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential looking paradigm, 44 German 6-month-olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior assessed during free play with their infant, and to infant-specific factors assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development.

  5. The Impact of Politics 2.0 in the Spanish Social Media: Tracking the Conversations around the Audiovisual Political Wars

    NASA Astrophysics Data System (ADS)

    Noguera, José M.; Correyero, Beatriz

    After the consolidation of weblogs as interactive narratives and producers, audiovisual formats are gaining ground on the Web. Videos are spreading all over the Internet and establishing themselves as a new medium for political propaganda inside social media, with tools as powerful as YouTube. This investigation proceeds in two stages: on one hand, we examine how these audiovisual formats enjoyed an enormous amount of attention in blogs during the Spanish pre-electoral campaign for the elections of March 2008. On the other hand, this article investigates the social impact of this phenomenon using data from a content analysis of the blog discussion related to these videos, centered on the most popular Spanish political blogs. We also study when audiovisual political messages (made by politicians or by users) are "born" and "die" on the Web, and by what rules they do so.

  6. Asynchrony from synchrony: long-range gamma-band neural synchrony accompanies perception of audiovisual speech asynchrony.

    PubMed

    Doesburg, Sam M; Emberson, Lauren L; Rahi, Alan; Cameron, David; Ward, Lawrence M

    2008-02-01

    Real-world speech perception relies on both auditory and visual information that falls within the tolerated range of temporal coherence. Subjects were presented with audiovisual recordings of speech that were offset by either 30 or 300 ms, leading to perceptually coherent or incoherent audiovisual speech, respectively. We provide electroencephalographic evidence of a phase-synchronous gamma-oscillatory network that is transiently activated by the perception of audiovisual speech asynchrony, showing both topological and time-course correspondence to networks reported in previous neuroimaging research. This finding addresses a major theoretical hurdle regarding the mechanism by which distributed networks serving a common function achieve transient functional integration. Moreover, this evidence illustrates an important dissociation between phase-synchronization and stimulus coherence, highlighting the functional nature of network-based synchronization.

  7. Imagery May Arise from Associations Formed through Sensory Experience: A Network of Spiking Neurons Controlling a Robot Learns Visual Sequences in Order to Perform a Mental Rotation Task

    PubMed Central

    McKinstry, Jeffrey L.; Fleischer, Jason G.; Chen, Yanqing; Gall, W. Einar; Edelman, Gerald M.

    2016-01-01

    Mental imagery occurs “when a representation of the type created during the initial phases of perception is present but the stimulus is not actually being perceived.” How does the capability to perform mental imagery arise? Extending the idea that imagery arises from learned associations, we propose that mental rotation, a specific form of imagery, could arise through the mechanism of sequence learning–that is, by learning to regenerate the sequence of mental images perceived while passively observing a rotating object. To demonstrate the feasibility of this proposal, we constructed a simulated nervous system and embedded it within a behaving humanoid robot. By observing a rotating object, the system learns the sequence of neural activity patterns generated by the visual system in response to the object. After learning, it can internally regenerate a similar sequence of neural activations upon briefly viewing the static object. This system learns to perform a mental rotation task in which the subject must determine whether two objects are identical despite differences in orientation. As with human subjects, the time taken to respond is proportional to the angular difference between the two stimuli. Moreover, as reported in humans, the system fills in intermediate angles during the task, and this putative mental rotation activates the same pathways that are activated when the system views physical rotation. This work supports the proposal that mental rotation arises through sequence learning and the idea that mental imagery aids perception through learned associations, and suggests testable predictions for biological experiments. PMID:27653977
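
    The paper's core mechanism, learning transitions between successive activity patterns and then replaying them internally, can be caricatured with a one-step Hebbian association matrix. This toy (random binary patterns, outer-product learning) is an editorial sketch, not the authors' spiking-neuron model:

```python
import numpy as np

# Learn a sequence of activity patterns by Hebbian association between
# successive patterns, then regenerate the sequence from its first frame.
rng = np.random.default_rng(0)
n_units, n_steps = 128, 8
# One random +/-1 activity pattern per viewed orientation (hypothetical codes)
patterns = np.where(rng.random((n_steps, n_units)) < 0.5, -1.0, 1.0)

# Hebbian learning of transitions: W maps pattern t onto pattern t+1
W = np.zeros((n_units, n_units))
for step in range(n_steps - 1):
    W += np.outer(patterns[step + 1], patterns[step]) / n_units

# Recall ("imagery"): present only the first frame and iterate
recalled = [patterns[0]]
for _ in range(n_steps - 1):
    recalled.append(np.sign(W @ recalled[-1]))
recalled = np.array(recalled)
```

    With enough units relative to sequence length, the crosstalk between stored transitions is small and the replayed sequence matches the observed one almost exactly, which is the sense in which sequence learning can "regenerate" a perceptual sequence.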

  8. Audio-visual aid in teaching "fatty liver".

    PubMed

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-05-06

    Use of audiovisual tools to aid medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various concepts of the topic, while keeping in view Mayer's and Ellaway's guidelines for multimedia presentation. A pre-post test study on subject knowledge was conducted for 100 students with the video shown as the intervention. A retrospective pre-test was conducted as a survey that inquired about students' understanding of the key concepts of the topic, and feedback on our video was taken. Students performed significantly better in the post-test (mean score 8.52 vs. 5.45 in the pre-test), responded positively in the retrospective pre-test, and gave positive feedback on our video presentation. Well-designed multimedia tools can aid cognitive processing and enhance working memory capacity, as shown in our study. In times when "smart" device penetration is high, information and communication tools in medical education, which can act as an essential aid and not as a replacement for traditional curriculums, can be beneficial to students. © 2015 by The International Union of Biochemistry and Molecular Biology, 44:241-245, 2016.

  9. An Evaluation of a Learning Cycle Intervention Method in Introductory Physical Science Laboratories in Order to Promote Formal Operational Thought Processes.

    NASA Astrophysics Data System (ADS)

    Shadburn, Randy Glen

    Jean Piaget describes the formal level of reasoning as the most complex. This dissertation examines the effectiveness of the Learning Cycle Intervention in transferring students from the concrete to the formal level of reasoning required in most science courses. Four major hypotheses were developed to guide the study. The study consisted of 67 physical science students at a two-year community college divided into a control and an experimental group. Data were collected in a pretest-posttest format using four different data-gathering instruments and were then analyzed with t-tests on those four hypotheses. Findings and conclusions of this study were: (1) the learning cycle did not cause a significant difference between groups in the improvement of formal reasoning ability at the established level of significance (alpha = .05), although there was a difference worthy of note; (2) there was a significant difference between groups in the amount of physics content learned, with the experimental group achieving better; (3) there was no significant difference between groups in their attitude toward science; and (4) there was a significant difference between groups in their attitude toward, and valuing of, their laboratory experience. The learning cycle showed promise in promoting the transition to the formal level of reasoning. However, the formal reasoning level is difficult to measure, which may be a reason for further study. Overall, the students in the experimental group had a better attitude toward the laboratory experience and achieved better on physics content learned. This was attributed to the learning cycle, since all other variables were controlled by learning in the classroom. Recommendations include the need for studies of prolonged length to investigate the effects of the learning cycle, particularly on formal reasoning abilities. This study should be replicated using a different subject area to examine the effectiveness of the learning cycle in other disciplines.

  10. Dynamic, rhythmic facial expressions and the superior temporal sulcus of macaque monkeys: implications for the evolution of audiovisual speech.

    PubMed

    Ghazanfar, Asif A; Chandrasekaran, Chandramouli; Morrill, Ryan J

    2010-05-01

    Audiovisual speech has a stereotypical rhythm that is between 2 and 7 Hz, and deviations from this frequency range in either modality reduce intelligibility. Understanding how audiovisual speech evolved requires investigating the origins of this rhythmic structure. One hypothesis is that the rhythm of speech evolved through the modification of some pre-existing cyclical jaw movements in a primate ancestor. We tested this hypothesis by investigating the temporal structure of lipsmacks and teeth-grinds of macaque monkeys and the neural responses to these facial gestures in the superior temporal sulcus (STS), a region implicated in the processing of audiovisual communication signals in both humans and monkeys. We found that both lipsmacks and teeth-grinds have consistent but distinct peak frequencies and that both fall well within the 2-7 Hz range of mouth movements associated with audiovisual speech. Single neurons and local field potentials of the STS of monkeys readily responded to such facial rhythms, but also responded just as robustly to yawns, a nonrhythmic but dynamic facial expression. All expressions elicited enhanced power in the delta (0-3 Hz), theta (3-8 Hz), alpha (8-14 Hz) and gamma (>60 Hz) frequency ranges, and suppressed power in the beta (20-40 Hz) range. Thus, STS is sensitive to, but not selective for, rhythmic facial gestures. Taken together, these data provide support for the idea that audiovisual speech evolved (at least in part) from the rhythmic facial gestures of an ancestral primate and that the STS was sensitive to and thus 'prepared' for the advent of rhythmic audiovisual communication.

  11. Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097

  12. Dynamic, rhythmic facial expressions and the superior temporal sulcus of macaque monkeys: implications for the evolution of audiovisual speech

    PubMed Central

    Ghazanfar, Asif A.; Chandrasekaran, Chandramouli; Morrill, Ryan J.

    2010-01-01

    Audiovisual speech has a stereotypical rhythm that is between 2 and 7 Hz, and deviations from this frequency range in either modality reduce intelligibility. Understanding how audiovisual speech evolved requires investigating the origins of this rhythmic structure. One hypothesis is that the rhythm of speech evolved through the modification of some pre-existing cyclical jaw movements in a primate ancestor. We tested this hypothesis by investigating the temporal structure of lipsmacks and teeth-grinds of macaque monkeys and the neural responses to these facial gestures in the superior temporal sulcus (STS), a region implicated in the processing of audiovisual communication signals in both humans and monkeys. We found that both lipsmacks and teeth-grinds have consistent but distinct peak frequencies and that both fall well within the 2–7 Hz range of mouth movements associated with audiovisual speech. Single neurons and local field potentials of the STS of monkeys readily responded to such facial rhythms, but also responded just as robustly to yawns, a nonrhythmic but dynamic facial expression. All expressions elicited enhanced power in the delta (0–3 Hz), theta (3–8 Hz), alpha (8–14 Hz) and gamma (>60 Hz) frequency ranges, and suppressed power in the beta (20–40 Hz) range. Thus, STS is sensitive to, but not selective for, rhythmic facial gestures. Taken together, these data provide support for the idea that audiovisual speech evolved (at least in part) from the rhythmic facial gestures of an ancestral primate and that the STS was sensitive to and thus ‘prepared’ for the advent of rhythmic audiovisual communication. PMID:20584185

  13. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    PubMed

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
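
    A bare-bones version of this classification procedure can be sketched as follows: random per-frame visibility masks are correlated with trial-by-trial responses to locate the frames that drive the percept. The simulated observer and all numbers below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Classification-image logic for the frame-masking paradigm.
rng = np.random.default_rng(1)
n_trials, n_frames = 2000, 20
critical = 7                         # hypothetical perceptually relevant frame
masks = rng.integers(0, 2, size=(n_trials, n_frames))   # 1 = frame visible

# Simulated observer: seeing the critical frame biases the yes/no response
p_yes = np.where(masks[:, critical] == 1, 0.8, 0.2)
resp = rng.random(n_trials) < p_yes

# Classification weights: mean mask on "yes" trials minus "no" trials;
# frames that influence the percept receive large positive weights
weights = masks[resp].mean(axis=0) - masks[~resp].mean(axis=0)
```

    In the actual study the analogous weights are computed per spatial region and video frame, and at several audiovisual offsets, yielding spatiotemporal maps of perceptually relevant visual features.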

  14. Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli.

    PubMed

    Talsma, Durk; Senkowski, Daniel; Woldorff, Marty G

    2009-09-01

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125-75 ms, by 75-25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with one another and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus
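    The extraction logic can be illustrated with a toy difference-wave computation. This is a deliberate simplification: the study used dedicated signal-processing techniques to unmix overlapping responses, but the underlying additive-model idea (multisensory ERP minus the unisensory contribution) can be sketched as:

```python
import numpy as np

def extract_visual_erp(erp_av, erp_a):
    """Toy additive-model extraction: estimate the visual contribution to a
    multisensory (AV) ERP by subtracting the auditory-alone (A) ERP."""
    return np.asarray(erp_av, dtype=float) - np.asarray(erp_a, dtype=float)

# Simulated waveforms (arbitrary units, one sample per ms)
t = np.arange(500)                                   # 0-499 ms post-stimulus
erp_a = 2.0 * np.exp(-((t - 100) ** 2) / 800.0)      # auditory deflection
erp_v = 1.5 * np.exp(-((t - 150) ** 2) / 800.0)      # visual component to recover
erp_av = erp_a + erp_v                               # additive model, no noise
recovered = extract_visual_erp(erp_av, erp_a)
print(np.allclose(recovered, erp_v))  # True
```

    In real data the subtraction is complicated by overlap from the rapid stimulus streams, which is why the study's extraction required more elaborate signal processing.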

  15. SU-E-J-29: Audiovisual Biofeedback Improves Tumor Motion Consistency for Lung Cancer Patients

    SciTech Connect

    Lee, D; Pollock, S; Makhija, K; Keall, P; Greer, P; Arm, J; Hunter, P; Kim, T

    2014-06-01

    Purpose: To investigate whether the breathing-guidance system, audiovisual (AV) biofeedback, improves tumor motion consistency for lung cancer patients. This would minimize respiratory-induced tumor motion variations across cancer imaging and radiotherapy procedures. This is the first study to investigate the impact of respiratory guidance on tumor motion. Methods: Tumor motion consistency was investigated in five lung cancer patients (age: 55 to 64), who underwent a training session to familiarize them with AV biofeedback, followed by two MRI sessions on different dates (pre- and mid-treatment). During the training session in a CT room, two patient-specific breathing patterns were obtained before (Breathing-Pattern-1) and after (Breathing-Pattern-2) training with AV biofeedback. In each MRI session, four MRI scans were performed to obtain 2D coronal and sagittal image datasets in free breathing (FB) and with AV biofeedback utilizing Breathing-Pattern-2. Tumor motion was extracted from image pixel values after per-dataset normalization and per-image Gaussian filtering. Tumor motion consistency in the superior-inferior (SI) direction was evaluated in terms of the average tumor motion range and period. Results: Audiovisual biofeedback improved tumor motion consistency by 60% (p value = 0.019), from 1.0±0.6 mm (FB) to 0.4±0.4 mm (AV) in SI motion range, and by 86% (p value < 0.001), from 0.7±0.6 s (FB) to 0.1±0.2 s (AV) in period. Conclusion: This study demonstrated that audiovisual biofeedback improves both breathing pattern and tumor motion consistency for lung cancer patients. These results suggest that AV biofeedback has the potential to facilitate reproducible tumor motion toward achieving more accurate medical imaging and radiation therapy procedures.
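    The reported consistency gains are simple relative reductions in variability; the abstract's figures can be checked directly (the helper below is illustrative, not part of the study's code):

```python
def percent_improvement(free_breathing, biofeedback):
    """Relative reduction (%) in motion variability with AV biofeedback."""
    return 100.0 * (free_breathing - biofeedback) / free_breathing

# Values from the abstract: SI motion-range variability (mm) and period variability (s)
print(round(percent_improvement(1.0, 0.4)))  # 60
print(round(percent_improvement(0.7, 0.1)))  # 86
```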

  16. [Audiovisual stimulation in children with severely limited motor function: does it improve their quality of life?].

    PubMed

    Barja, Salesa; Muñoz, Carolina; Cancino, Natalia; Núñez, Alicia; Ubilla, Mario; Sylleros, Rodrigo; Riveros, Rodrigo; Rosas, Ricardo

    2013-08-01

    Introduction. Children with neurological diseases that cause severe motor limitation have a poor quality of life (QoL). Aim. To study whether the QoL of these patients improves with the application of an audiovisual stimulation program. Patients and methods. Prospective study of nine children, six of them boys (mean age: 42.6 ± 28.6 months), with severe motor limitation and prolonged hospitalization. Two audiovisual stimulation programs were developed and, together with videos, delivered through a specially designed structure. Sessions took place twice a day, for 10 minutes, over 20 days. During the first ten days the stimulation was passive; during the second ten days it was guided by the observer. Biological, behavioral, and cognitive variables were recorded, and an adapted QoL survey was administered. Results. Three cases of spinal muscular atrophy, two of congenital muscular dystrophy, two of myopathy, and two with other diagnoses were included. Eight patients completed follow-up. At baseline they presented fair QoL (7.2 ± 1.7 points; median: 7.0; range: 6-10), which improved to good by the end of the program (9.4 ± 1.2 points; median: 9.0; range: 8-11), with an intra-individual difference of 2.1 ± 1.6 (median: 2.5; range: –1 to 4; 95% CI = 0.83-3.42; p = 0.006). Improvement in cognition and a favorable perception by caregivers were detected. There was no change in the biological or behavioral variables. Conclusion. Audiovisual stimulation can improve the quality of life of children with severe motor limitation.

  17. Development of Sensitivity to Audiovisual Temporal Asynchrony during Mid-Childhood

    PubMed Central

    Kaganovich, Natalya

    2015-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7-8-year-olds, 10-11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether non-verbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2 kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs) - 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition) while in another half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of RT at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10-11-year-olds outperforming 7-8-year-olds at the 300-500 ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function, such as autism, specific language impairment, and dyslexia may be compared. PMID:26569563

  18. Timing in Audiovisual Speech Perception: A Mini Review and New Psychophysical Data

    PubMed Central

    Venezia, Jonathan H.; Thurman, Steven M.; Matchin, William; George, Sahara E.; Hickok, Gregory

    2015-01-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually-relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (∼35% identification of /apa/ compared to ∼5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually-relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (∼130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309
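    The masking/classification logic can be sketched with a toy simulation. Everything here is hypothetical (frame count, response probabilities, a single "critical" frame); it only illustrates how random frame-wise masks combined with trial-by-trial responses yield a classification image over time:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_frames = 2000, 30
critical = 12                          # hypothetical frame carrying the visual cue

# Each trial randomly reveals about half of the video frames
masks = rng.random((n_trials, n_frames)) < 0.5      # True = frame visible

# Seeing the critical frame makes the McGurk fusion more likely,
# i.e. lowers the probability of reporting the auditory syllable /apa/
p_apa = np.where(masks[:, critical], 0.05, 0.35)
responses = rng.random(n_trials) < p_apa            # True = reported /apa/

# Classification image: frame visibility difference between response classes
ci = masks[responses].mean(axis=0) - masks[~responses].mean(axis=0)
print(int(np.argmin(ci)))             # the critical frame is the most negative
```

    Frames that drive the percept show large (here, negative) classification-image values; in the actual study this analysis was run over space and time, and across audiovisual offsets, to map when leading visual information mattered.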

  19. [Accommodation effects of the audiovisual stimulation in the patients experiencing eyestrain with the concomitant disturbances of psychological adaptation].

    PubMed

    Shakula, A V; Emel'ianov, G A

    2014-01-01

    The present study was designed to evaluate the effectiveness of audiovisual stimulation on the state of the accommodation system of the eye in patients experiencing eyestrain with concomitant disturbances of psychological adaptation. It was shown that a course of audiovisual stimulation (viewing a psychorelaxing film accompanied by appropriate music) results in positive dynamics of the objective accommodation parameters (5.9-21.9%) and of the subjective status (4.5-33.2%). Taken together, these findings allow this method to be regarded as a "relaxing preparation" within the integrated complex of measures for the preservation of professional vision in this group of patients.

  20. The use of the first order system transfer function in the analysis of proboscis extension learning of honey bees, Apis mellifera L., exposed to pesticides.

    PubMed

    Abramson, Charles I; Stepanov, Igor I

    2012-04-01

    No attempts have previously been made to apply a mathematical model to the learning curve of honey bees exposed to pesticides. We applied a standard first-order transfer function of the form Y = B3*exp(-B2*(X-1)) + B4*(1 - exp(-B2*(X-1))), where X is the trial number, Y is the proportion of correct responses, B2 is the learning rate, B3 is readiness to learn, and B4 is ability to learn. When we reanalyzed previously published data on the effects of the insect growth regulators tebufenozide and diflubenzuron on classical conditioning of proboscis extension, the model revealed additional effects not detected with standard statistical tests of significance.
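    Fitting this model is straightforward because, for a fixed learning rate B2, the function is linear in B3 and B4. A minimal sketch (with synthetic data, not the bees') combines a grid search over B2 with ordinary least squares for the other two parameters:

```python
import numpy as np

def fit_transfer(trials, correct, b2_grid=None):
    """Fit Y = B3*exp(-B2*(X-1)) + B4*(1 - exp(-B2*(X-1))) to a learning curve.
    For fixed B2 (learning rate), the model is linear in B3 (readiness to
    learn) and B4 (ability to learn), so those are solved by least squares."""
    X = np.asarray(trials, dtype=float)
    Y = np.asarray(correct, dtype=float)
    if b2_grid is None:
        b2_grid = np.linspace(0.01, 5.0, 500)
    best = None
    for b2 in b2_grid:
        e = np.exp(-b2 * (X - 1))
        A = np.column_stack([e, 1.0 - e])        # design matrix for (B3, B4)
        coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
        sse = float(np.sum((A @ coef - Y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, b2, coef[0], coef[1])
    return best[1], best[2], best[3]             # B2, B3, B4

# Synthetic acquisition curve generated from known parameters (0.4, 0.10, 0.85)
X = np.arange(1, 11)
Y = 0.10 * np.exp(-0.4 * (X - 1)) + 0.85 * (1 - np.exp(-0.4 * (X - 1)))
B2, B3, B4 = fit_transfer(X, Y)
print(round(B2, 2), round(B3, 2), round(B4, 2))  # recovers ~0.4, 0.1, 0.85
```

    Note that B3 is the model's value at the first trial (X = 1) and B4 is its asymptote, which is why the two play the roles of "readiness" and "ability" to learn.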