Science.gov

Sample records for order audiovisual learning

  1. Diminished sensitivity of audiovisual temporal order in autism spectrum disorder.

    PubMed

    de Boer-Schellekens, Liselotte; Eussen, Mart; Vroomen, Jean

    2013-01-01

    We examined sensitivity of audiovisual temporal order in adolescents with autism spectrum disorder (ASD) using an audiovisual temporal order judgment (TOJ) task. In order to assess domain-specific impairments, the stimuli varied in social complexity from simple flash/beeps to videos of a handclap or a speaking face. Compared to typically developing controls, individuals with ASD were generally less sensitive in judgments of audiovisual temporal order (larger just noticeable differences, JNDs), but there was no specific impairment with social stimuli. This suggests that people with ASD suffer from a more general impairment in audiovisual temporal processing.
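
    As background for how sensitivity is quantified in a TOJ task, the sketch below fits a cumulative Gaussian to the proportion of "visual first" responses as a function of stimulus onset asynchrony (SOA); the point of subjective simultaneity (PSS) is the curve's midpoint and the just noticeable difference (JND) is derived from its slope. This is a minimal illustration with made-up SOAs and response proportions, not the analysis code of the cited study.

      # Minimal sketch: estimating PSS and JND from temporal order judgment data.
      # SOAs and response proportions below are hypothetical placeholders.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      soas = np.array([-240, -120, -60, 0, 60, 120, 240])   # ms; negative = audio leads
      p_visual_first = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])

      def cumulative_gaussian(soa, pss, sigma):
          # Probability of a "visual first" response as a function of SOA.
          return norm.cdf(soa, loc=pss, scale=sigma)

      (pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_visual_first, p0=(0.0, 100.0))

      # One common convention: the JND is the SOA shift that moves the curve
      # from 50% to 75% "visual first" responses.
      jnd = sigma * norm.ppf(0.75)
      print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")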

  2. Vicarious Audiovisual Learning in Perfusion Education

    PubMed Central

    Rath, Thomas E.; Holt, David W.

    2010-01-01

    Abstract: Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students with the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events because of patient safety concerns. Although high-fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly, low-fidelity form of simulation instruction: vicarious audiovisual learning. Two low-fidelity modes of instruction were compared: description with text, and a vicarious, first-person audiovisual production depicting the same content. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW standalone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today’s perfusion student. Mean test scores from test #1 for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19; 74.74%; p < .05). The same was true for test #2, where video learners (n = 10) had an average score of 77% while text learners (n = 9) scored 60% (p < .05). Survey results indicated video learners were more satisfied with their learning module than text learners. Vicarious audiovisual learning modules may be an efficacious, low-cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important…

  3. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  4. Memory and learning with rapid audiovisual sequences.

    PubMed

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  5. Audiovisuals.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents information on a variety of audiovisual materials from government and nongovernment sources. Topics include aerodynamics and conditions of flight, airports, navigation, careers, history, medical factors, weather, films for classroom use, and others. (Author/SA)

  6. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    PubMed

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter color known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  7. The Role of Audiovisual Mass Media News in Language Learning

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues in the selection and preparation of TV news for language learning are the content of the news and its linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  8. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    ERIC Educational Resources Information Center

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  9. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment, we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. PMID:27131076

  10. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment, we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions.

  11. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    ERIC Educational Resources Information Center

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  12. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    ERIC Educational Resources Information Center

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response to the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  13. Audiovisuals and Non-Print Learning Resources in a Health Sciences Library.

    ERIC Educational Resources Information Center

    Robinow, Beatrix H.

    1979-01-01

    Describes the assembly and delivery of the self-instructional learning resources of the premedical undergraduate program at McMaster University, placing emphasis on the practical library problems of cataloging, shelving, maintenance, and distribution of such audiovisual materials as pathology specimens, films, videotape, and slide/tape sets.…

  14. Audio-Visual Technology and Learning. The Educational Technology Reviews Series. Number Six.

    ERIC Educational Resources Information Center

    Educational Technology Publications, Englewood Cliffs, NJ.

    Composed of articles selected from recent issues of "Educational Technology" magazine, this volume in the review series focuses on audiovisual technology and learning. Articles in this volume cover such topics as communications media, software design, teacher roles and attitudes, televised instruction, videotape recordings, instructional media,…

  15. Audiovisuals and non-print learning resources in a health sciences library.

    PubMed

    Robinow, B H

    1979-03-01

    The MD undergraduate program at McMaster University, based entirely on self-instruction, requires the provision of all kinds of learning resources. How these are assembled and made available is described. Emphasis is placed on the practical library problems of cataloging, shelving, maintenance, and distribution of audiovisual materials including pathology specimens and 'problem boxes' as well as the more usual films, videotapes and slide/tape sets. Evaluation is discussed briefly. PMID:85624

  16. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    PubMed

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes.

  17. Enhanced sensory re-learning after nerve repair using 3D audio-visual signals and kinaesthesia--preliminary results.

    PubMed

    Schmidhammer, R; Hausner, T; Kröpfl, A; Huber, W; Hopf, R; Leixnering, M; Herz, H; Redl, H

    2007-01-01

    Sensory re-learning methods and the basics of cortical reorganization after peripheral nerve lesion are well documented. Enhanced sensory re-learning using 3D audio-visual signals and kinaesthetic training aims to augment cognitive memory (visual and acoustic sensory memory) and cognitive function in order to improve cerebral plasticity processes; it starts as soon as possible after nerve repair. Preliminary results are shown.

  18. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  19. Use of High-Definition Audiovisual Technology in a Gross Anatomy Laboratory: Effect on Dental Students' Learning Outcomes and Satisfaction.

    PubMed

    Ahmad, Maha; Sleiman, Naama H; Thomas, Maureen; Kashani, Nahid; Ditmyer, Marcia M

    2016-02-01

    Laboratory cadaver dissection is essential for three-dimensional understanding of anatomical structures and variability, but there are many challenges to teaching gross anatomy in medical and dental schools, including a lack of available space and qualified anatomy faculty. The aim of this study was to determine the efficacy of high-definition audiovisual educational technology in the gross anatomy laboratory in improving dental students' learning outcomes and satisfaction. Exam scores were compared for two classes of first-year students at one U.S. dental school: 2012-13 (no audiovisual technology) and 2013-14 (audiovisual technology), and section exams were used to compare differences between semesters. Additionally, an online survey was used to assess the satisfaction of students who used the technology. All 284 first-year students in the two years (2012-13 N=144; 2013-14 N=140) participated in the exams. Of the 140 students in the 2013-14 class, 63 completed the survey (45% response rate). The results showed that those students who used the technology had higher scores on the laboratory exams than those who did not use it, and students in the winter semester scored higher (90.17±0.56) than in the fall semester (82.10±0.68). More than 87% of those surveyed strongly agreed or agreed that the audiovisual devices represented anatomical structures clearly in the gross anatomy laboratory. These students reported an improved experience in learning and understanding anatomical structures, found the laboratory to be less overwhelming, and said they were better able to follow dissection instructions and understand details of anatomical structures with the new technology. Based on these results, the study concluded that the ability to provide the students a clear view of anatomical structures and high-quality imaging had improved their learning experience.

  20. Problem Order Implications for Learning

    ERIC Educational Resources Information Center

    Li, Nan; Cohen, William W.; Koedinger, Kenneth R.

    2013-01-01

    The order of problems presented to students is an important variable that affects learning effectiveness. Previous studies have shown that solving problems in a blocked order, in which all problems of one type are completed before the student is switched to the next problem type, results in less effective performance than does solving the problems…

  1. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    PubMed

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices in LLI children after training. PMID:26671710

  2. The Impact of Audiovisual Feedback on the Learning Outcomes of a Remote and Virtual Laboratory Class

    ERIC Educational Resources Information Center

    Lindsay, E.; Good, M.

    2009-01-01

    Remote and virtual laboratory classes are an increasingly prevalent alternative to traditional hands-on laboratory experiences. One of the key issues with these modes of access is the provision of adequate audiovisual (AV) feedback to the user, which can be a complicated and resource-intensive challenge. This paper reports on a comparison of two…

  3. Adult Learning Strategies and Approaches (ALSA). Resources for Teachers of Adults. A Handbook of Practical Advice on Audio-Visual Aids and Educational Technology for Tutors and Organisers.

    ERIC Educational Resources Information Center

    Cummins, John; And Others

    This handbook is part of a British series of publications written for part-time tutors, volunteers, organizers, and trainers in the adult continuing education and training sectors. It offers practical advice on audiovisual aids and educational technology for tutors and organizers. The first chapter discusses how one learns. Chapter 2 addresses how…

  4. AUDIOVISUAL HANDBOOK.

    ERIC Educational Resources Information Center

    JOHNSON, HARRY A.

    Undergraduate and graduate academic offerings in the Department of Audiovisual Education are listed, and the inservice faculty training program and the extension and consultant services are described. General services offered by the center are a college film showing service, a children's theatre, a production workshop, an Embosograf process,…

  5. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  6. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  7. Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds.

    PubMed

    Jesse, Alexandra; Johnson, Elizabeth K

    2016-05-01

    Analyses of caregiver-child communication suggest that an adult tends to highlight objects in a child's visual scene by moving them in a manner that is temporally aligned with the adult's speech productions. Here, we used the looking-while-listening paradigm to examine whether 25-month-olds use audiovisual temporal alignment to disambiguate and learn novel word-referent mappings in a difficult word-learning task. Videos of two equally interesting and animated novel objects were simultaneously presented to children, but the movement of only one of the objects was aligned with an accompanying object-labeling audio track. No social cues (e.g., pointing, eye gaze, touch) were available to the children because the speaker was edited out of the videos. Immediately afterward, toddlers were presented with still images of the two objects and asked to look at one or the other. Toddlers looked reliably longer to the labeled object, demonstrating their acquisition of the novel word-referent mapping. A control condition showed that children's performance was not solely due to the single unambiguous labeling that had occurred at experiment onset. We conclude that the temporal link between a speaker's utterances and the motion they imposed on the referent object helps toddlers to deduce a speaker's intended reference in a difficult word-learning scenario. In combination with our previous work, these findings suggest that intersensory redundancy is a source of information used by language users of all ages. That is, intersensory redundancy is not just a word-learning tool used by young infants. PMID:26765249

  8. Enhanced Multisensory Integration and Motor Reactivation after Active Motor Learning of Audiovisual Associations

    ERIC Educational Resources Information Center

    Butler, Andrew J.; James, Thomas W.; James, Karin Harman

    2011-01-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent…

  9. Audiovisual Interaction

    NASA Astrophysics Data System (ADS)

    Möttönen, Riikka; Sams, Mikko

    Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.

  10. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  11. Learning with Hyperlinked Videos--Design Criteria and Efficient Strategies for Using Audiovisual Hypermedia

    ERIC Educational Resources Information Center

    Zahn, Carmen; Barquero, Beatriz; Schwan, Stephan

    2004-01-01

    In this article, we discuss the results of an experiment in which we studied two apparently conflicting classes of design principles for instructional hypervideos: (1) those principles derived from work on multimedia learning that emphasize spatio-temporal contiguity and (2) those originating from work on hypermedia learning that favour…

  12. Planning and Producing Audiovisual Materials.

    ERIC Educational Resources Information Center

    Kemp, Jerrold E.

    The first few chapters of this book are devoted to an examination of the changing character of audiovisual materials; instructional design and the selection of media to serve specific objectives; and principles of perception, communication, and learning. Relevant research findings in the field are reviewed. The basic techniques of planning…

  13. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
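
    To make the cost function concrete, the following sketch embeds data while encouraging the embedded pairwise distances to follow a non-decreasing function of the original distances. The published algorithm parameterizes that non-decreasing relation with radial basis functions inside a constrained optimization; as a simplified, hypothetical stand-in, this sketch fits the monotonic relation with isotonic regression and takes plain gradient steps on a stress-like cost, so it illustrates the order-preserving idea rather than reproducing the authors' method.

      # Rough sketch of a distance-order-preserving embedding (not the published algorithm).
      # Alternate: (a) fit a non-decreasing map from original to embedded distances,
      # (b) take a gradient step pulling embedded distances toward that map.
      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from sklearn.isotonic import IsotonicRegression

      def order_preserving_embed(X, n_components=2, n_iter=300, lr=0.05, seed=0):
          rng = np.random.default_rng(seed)
          d_high = pdist(X)                                  # original pairwise distances
          Y = rng.normal(scale=1e-2, size=(X.shape[0], n_components))
          iso = IsotonicRegression(increasing=True)
          for _ in range(n_iter):
              d_low = pdist(Y)
              target = iso.fit_transform(d_high, d_low)      # non-decreasing in d_high
              D = squareform(d_low)
              D[D == 0] = 1.0                                # avoid division by zero
              R = squareform(d_low - target) / D             # normalized residuals
              grad = R.sum(axis=1)[:, None] * Y - R @ Y      # gradient of 0.5*sum((d_low - target)^2)
              Y -= lr * grad
          return Y

      # Tiny usage example on synthetic 10-dimensional data.
      X = np.random.default_rng(1).normal(size=(40, 10))
      print(order_preserving_embed(X).shape)                 # (40, 2)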

  14. Manifold Learning by Preserving Distance Orders

    PubMed Central

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-01-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis. PMID:25045195

  15. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. PMID:27003546

  16. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration.

  17. Virtual Attendance: Analysis of an Audiovisual over IP System for Distance Learning in the Spanish Open University (UNED)

    ERIC Educational Resources Information Center

    Vazquez-Cano, Esteban; Fombona, Javier; Fernandez, Alberto

    2013-01-01

    This article analyzes a system of virtual attendance, called "AVIP" (AudioVisual over Internet Protocol), at the Spanish Open University (UNED) in Spain. UNED, the largest open university in Europe, is the pioneer in distance education in Spain. It currently has more than 300,000 students, 1,300 teachers, and 6,000 tutors all over the…

  18. Learning biases predict a word order universal.

    PubMed

    Culbertson, Jennifer; Smolensky, Paul; Legendre, Géraldine

    2012-03-01

    How recurrent typological patterns, or universals, emerge from the extensive diversity found across the world's languages constitutes a central question for linguistics and cognitive science. Recent challenges to a fundamental assumption of generative linguistics-that universal properties of the human language acquisition faculty constrain the types of grammatical systems which can occur-suggest the need for new types of empirical evidence connecting typology to biases of learners. Using an artificial language learning paradigm in which adult subjects are exposed to a mix of grammatical systems (similar to a period of linguistic change), we show that learners' biases mirror a word-order universal, first proposed by Joseph Greenberg, which constrains typological patterns of adjective, numeral, and noun ordering. We briefly summarize the results of a probabilistic model of the hypothesized biases and their effect on learning, and discuss the broader implications of the results for current theories of the origins of cross-linguistic word-order preferences.

  19. Learning biases predict a word order universal.

    PubMed

    Culbertson, Jennifer; Smolensky, Paul; Legendre, Géraldine

    2012-03-01

    How recurrent typological patterns, or universals, emerge from the extensive diversity found across the world's languages constitutes a central question for linguistics and cognitive science. Recent challenges to a fundamental assumption of generative linguistics-that universal properties of the human language acquisition faculty constrain the types of grammatical systems which can occur-suggest the need for new types of empirical evidence connecting typology to biases of learners. Using an artificial language learning paradigm in which adult subjects are exposed to a mix of grammatical systems (similar to a period of linguistic change), we show that learners' biases mirror a word-order universal, first proposed by Joseph Greenberg, which constrains typological patterns of adjective, numeral, and noun ordering. We briefly summarize the results of a probabilistic model of the hypothesized biases and their effect on learning, and discuss the broader implications of the results for current theories of the origins of cross-linguistic word-order preferences. PMID:22208785

  20. Audiovisual Speech Recalibration in Children

    ERIC Educational Resources Information Center

    van Linden, Sabine; Vroomen, Jean

    2008-01-01

    In order to examine whether children adjust their phonetic speech categories, children of two age groups, five-year-olds and eight-year-olds, were exposed to a video of a face saying /aba/ or /ada/ accompanied by an auditory ambiguous speech sound halfway between /b/ and /d/. The effect of exposure to these audiovisual stimuli was measured on…

  1. Multi-sensory learning and learning to read.

    PubMed

    Blomert, Leo; Froyen, Dries

    2010-09-01

    The basis of literacy acquisition in alphabetic orthographies is the learning of the associations between the letters and the corresponding speech sounds. In spite of this primacy in learning to read, there is only scarce knowledge of how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter-speech sound processing have revealed that normally developing readers take years to automate these associations and dyslexic readers hardly exhibit automation of these associations. It is argued that the reason for this effortful learning may reside in the nature of the audiovisual process that is recruited for the integration of, in principle, arbitrarily linked elements. It is shown that letter-speech sound integration does not resemble the processes involved in the integration of natural audiovisual objects such as audiovisual speech. The automatic symmetrical recruitment of the assumedly uni-sensory visual and auditory cortices in audiovisual speech integration does not occur for letter and speech sound integration. It is also argued that letter-speech sound integration only partly resembles the integration of arbitrarily linked unfamiliar audiovisual objects. Letter-sound integration and artificial audiovisual objects share the necessity of a narrow time window for integration to occur. However, they differ from these artificial objects because they constitute an integration of partly familiar elements which acquire meaning through the learning of an orthography. Although letter-speech sound pairs share similarities with audiovisual speech processing as well as with unfamiliar, arbitrary objects, it seems that letter-speech sound pairs develop into unique audiovisual objects that furthermore have to be processed in a unique way in order to enable fluent reading and thus very likely recruit other neurobiological learning mechanisms than the ones involved in learning natural or arbitrary unfamiliar…

  2. Application and Operation of Audiovisual Equipment in Education.

    ERIC Educational Resources Information Center

    Pula, Fred John

    Interest in audiovisual aids in education has been increased by the shortage of classrooms and good teachers and by the modern predisposition toward learning by visual concepts. Effective utilization of audiovisual materials and equipment depends, most importantly, on adequate preparation of the teacher in operating equipment and in coordinating…

  3. Use of Audiovisual Texts in University Education Process

    ERIC Educational Resources Information Center

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the use of audiovisual media texts in a series of social sciences and humanities courses in the university curriculum.

  4. Principles of Managing Audiovisual Materials and Equipment. Second Revised Edition.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Biomedical Library.

    This manual offers information on a wide variety of health-related audiovisual materials (AVs) in many formats: video, motion picture, slide, filmstrip, audiocassette, transparencies, microfilm, and computer assisted instruction. Intended for individuals who are just learning about audiovisual materials and equipment management, the manual covers…

  5. Lifelong Learning and the New Educational Order.

    ERIC Educational Resources Information Center

    Field, John

    This document explores the sudden increase in interest in lifelong learning among policymakers in Great Britain and elsewhere; existing patterns of participation in lifelong learning; the measures being developed to promote lifelong learning; and the prospects of achieving a viable learning society. The following are among the topics discussed in…

  6. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    ERIC Educational Resources Information Center

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  7. Time and Order Effects on Causal Learning

    ERIC Educational Resources Information Center

    Alvarado, Angelica; Jara, Elvia; Vila, Javier; Rosas, Juan M.

    2006-01-01

    Five experiments were conducted to explore trial order and retention interval effects upon causal predictive judgments. Experiment 1 found that participants show a strong effect of trial order when a stimulus was sequentially paired with two different outcomes compared to a condition where both outcomes were presented intermixed. Experiment 2…

  8. An audiovisual emotion recognition system

    NASA Astrophysics Data System (ADS)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-symbols; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical flow-based tracking algorithm for facial feature tracking. Because irrelevant features and high dimensionality of the data can hurt classifier performance, rough set-based feature selection is used for dimension reduction: 13 of the 37 speech features and 10 of the 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected when speech and video are fused together, exploiting their synchronization. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multimodule fused recognition will become the trend in emotion recognition.
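
    To illustrate the feature selection and fusion step described above, the sketch below selects a subset of speech and facial features and concatenates them for a single classifier. It is a hypothetical stand-in: mutual-information ranking replaces the rough set-based reduction used in the paper, and random arrays replace real speech and facial features.

      # Hypothetical sketch of audiovisual feature selection and feature-level fusion.
      # Mutual-information ranking stands in for rough set-based selection; the data
      # are random placeholders, not real speech or facial measurements.
      import numpy as np
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n = 200
      speech = rng.normal(size=(n, 37))        # 37 candidate speech features
      face = rng.normal(size=(n, 33))          # 33 candidate facial features
      labels = rng.integers(0, 4, size=n)      # e.g. four emotion classes

      # Keep 13 speech and 10 facial features, mirroring the counts reported above.
      speech_sel = SelectKBest(mutual_info_classif, k=13).fit_transform(speech, labels)
      face_sel = SelectKBest(mutual_info_classif, k=10).fit_transform(face, labels)

      # Feature-level fusion: concatenate the synchronized audio and visual features.
      fused = np.hstack([speech_sel, face_sel])
      print("cross-validated accuracy:", cross_val_score(SVC(), fused, labels, cv=5).mean())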

  9. Audiovisual integration facilitates monkeys' short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  10. Audiovisual integration facilitates monkeys' short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans. PMID:27010716

  11. Mobilising Concepts: Intellectual Technologies in the Ordering of Learning Societies

    ERIC Educational Resources Information Center

    Edwards, Richard

    2004-01-01

    Lifelong learning and a learning society are important planks of European Union (EU) policy. Drawing upon the work of Foucault and Rose, this article examines some of the intellectual technologies that are deployed in the ordering of these policy goals. It argues that research is one such technology and examines EU Framework Projects to explore…

  12. AUDIOVISUAL EQUIPMENT STANDARDS.

    ERIC Educational Resources Information Center

    PATTERSON, PIERCE E.; AND OTHERS

    Recommended standards for audiovisual equipment were presented separately for grades kindergarten through six, and for junior and senior high schools. The elementary school equipment considered was the following--classroom light control, motion picture projector with mobile stand and spare reels, combination 2 inch x 2 inch slide and filmstrip…

  13. AUDIOVISUAL SERVICES CATALOG.

    ERIC Educational Resources Information Center

    Stockton Unified School District, CA.

    A catalog has been prepared to help teachers select audiovisual materials which might be helpful in elementary classrooms. Included are filmstrips, slides, records, study prints, films, tape recordings, and science equipment. Teachers are reminded that they are not limited to use of the suggested materials. Appropriate grade levels have been…

  14. Audiovisual Techniques Handbook.

    ERIC Educational Resources Information Center

    Hess, Darrel

    This handbook focuses on the use of 35mm slides for audiovisual presentations, particularly as an alternative to the more expensive and harder to produce medium of video. Its point of reference is creating slide shows about experiences in the Peace Corps; however, recommendations offered about both basic production procedures and enhancements are…

  15. Audiovisual Materials in Mathematics.

    ERIC Educational Resources Information Center

    Raab, Joseph A.

    This pamphlet lists five thousand current, readily available audiovisual materials in mathematics. These are grouped under eighteen subject areas: Advanced Calculus, Algebra, Arithmetic, Business, Calculus, Charts, Computers, Geometry, Limits, Logarithms, Logic, Number Theory, Probability, Solid Geometry, Slide Rule, Statistics, Topology, and…

  16. Utilizing New Audiovisual Resources

    ERIC Educational Resources Information Center

    Miller, Glen

    1975-01-01

    The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids to classroom instruction at the high school level in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)

  17. Selected Mental Health Audiovisuals.

    ERIC Educational Resources Information Center

    National Inst. of Mental Health (DHEW), Rockville, MD.

    Presented are approximately 2,300 abstracts of audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of the film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…

  18. Rapid, generalized adaptation to asynchronous audiovisual speech.

    PubMed

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity.

  19. Rapid, generalized adaptation to asynchronous audiovisual speech

    PubMed Central

    Van der Burg, Erik; Goodbourn, Patrick T.

    2015-01-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790
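
    As a concrete illustration of how such trial-by-trial recalibration can be quantified, the sketch below (simulated data; nothing here comes from the study above) fits a Gaussian synchrony function separately to trials preceded by auditory-lead versus visual-lead events and compares the resulting points of subjective simultaneity (PSS).

```python
# Hedged sketch on simulated data: quantify rapid recalibration by fitting a
# Gaussian synchrony curve separately for trials preceded by auditory-lead vs
# visual-lead events, then compare the fitted PSS values.
import numpy as np
from scipy.optimize import curve_fit

def synchrony_curve(soa, amp, pss, sigma):
    """Proportion of 'synchronous' responses as a Gaussian over SOA (ms)."""
    return amp * np.exp(-(soa - pss) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
n_trials = 4000
soa = rng.choice(np.arange(-300, 301, 50), size=n_trials)   # ms, negative = audio lead
prev_audio_lead = np.roll(soa < 0, 1)                        # order on trial n-1 (wraps at trial 0)
shift = np.where(prev_audio_lead, -30.0, 30.0)               # simulated recalibration of the PSS
p_sync = np.exp(-(soa - shift) ** 2 / (2 * 120.0 ** 2))
judged_sync = rng.random(n_trials) < p_sync

for label, mask in [("auditory-lead", prev_audio_lead), ("visual-lead", ~prev_audio_lead)]:
    soas = np.unique(soa)
    props = [judged_sync[mask & (soa == s)].mean() for s in soas]
    (amp, pss, sigma), _ = curve_fit(synchrony_curve, soas, props, p0=[1.0, 0.0, 100.0])
    print(f"PSS after {label} trials: {pss:+.1f} ms")
```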

  20. Researching Embodied Learning by Using Videographic Participation for Data Collection and Audiovisual Narratives for Dissemination--Illustrated by the Encounter between Two Acrobats

    ERIC Educational Resources Information Center

    Degerbøl, Stine; Nielsen, Charlotte Svendler

    2015-01-01

    The article concerns doing ethnography in education and it reflects upon using "videographic participation" for data collection and the concept of "audiovisual narratives" for dissemination, which is inspired by the idea of developing academic video. The article takes a narrative approach to qualitative research and presents a…

  1. Evaluating an Experimental Audio-Visual Module Programmed to Teach a Basic Anatomical and Physiological System.

    ERIC Educational Resources Information Center

    Federico, Pat-Anthony

    The learning efficiency and effectiveness of teaching an anatomical and physiological system to Air Force enlisted trainees utilizing an experimental audiovisual programed module was compared to that of a commercial linear programed text. It was demonstrated that the audiovisual programed approach to training was more efficient than and equally as…

  2. Promoting higher order thinking skills using inquiry-based learning

    NASA Astrophysics Data System (ADS)

    Madhuri, G. V.; S. S. N Kantamreddi, V.; Goteti, L. N. S. Prakash

    2012-05-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in chemistry. Laboratory exercises are designed based on Bloom's taxonomy and a just-in-time facilitation approach is used. A pre-laboratory discussion outlining the theory of the experiment and its relevance is carried out to enable the students to analyse real-life problems. The performance of the students is assessed based on their ability to perform the experiment, design new experiments and correlate practical utility of the course module with real life. The novelty of the present approach lies in the fact that the learning outcomes of the existing experiments are achieved through establishing a relationship with real-world problems.

  3. Multiple-Try Feedback and Higher-Order Learning Outcomes

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Koul, Ravinder

    2005-01-01

    Although feedback is an important component of computer-based instruction (CBI), the effects of feedback on higher-order learning outcomes are not well understood. Several meta-analyses provide two rules of thumb: any feedback is better than no feedback and feedback with more information is better than feedback with less information. …

  4. [Second-order retrospective revaluation in human contingency learning].

    PubMed

    Numata, Keitaro; Shimazaki, Tsuneo

    2009-04-01

    We demonstrated second-order retrospective revaluation with three cues (T1, T2, and C) and an outcome in human contingency learning. The experimental task, a PC-controlled video game in which participants observed the relations between missile firings and tank destruction, consisted of three training phases and two rating phases. Groups C+ and C- shared the same first two training phases, CT+ (cues C and T paired with the outcome) and T1T2+, followed by C+ or C- training for Groups C+ and C-, respectively. In the rating phases, judgments of the predictive value of T2 for the outcome were raised by C+ training (second-order unovershadowing) and lowered by C- training (second-order backward blocking). Groups RC+ and RC-, for which the order of the first two training phases was interchanged, also showed second-order unovershadowing and second-order backward blocking. This robustness of second-order retrospective revaluation to the order of the first training phases can be explained by the extended comparator hypothesis and the probabilistic contrast model, but not by traditional associative learning models. PMID:19489431

  5. [Cultural heritage and audiovisual creation in the Arab world].

    PubMed

    Aziza, M

    1979-01-01

    In Arab countries, audiovisual creation faces problems arising from the use of imported techniques to reconstitute or transform their own reality. Arab audiovisual producers see these techniques as an easy and efficient way to reproduce reality or to construct, by convention, an artificial universe. Sometimes audiovisual media carry absolute suggestive power; at other times they are met with total incredulity. From the point of view of diffusion, audiovisual media have a very specific status in the Arab world. The effects of television, studied by western researchers in their own cultural environment, are not reproduced in the same fashion in the Arab cultural world, where the spoken word still often competes successfully with the picture, even after the appearance and adoption of mass media. Finally, a very interesting situation results from a linguistic phenomenon specific to the Arab world: the existence of two languages of communication, one noble but little used, the other dialectal but popular. In all Arab countries the news, the most political of programs, is broadcast in the classical language, despite the danger of distorted meaning among the least educated viewers, probably because classical Arabic enjoys a sacred status. Arab audiovisual production faces several obstacles to its full and autonomous realization, and its contribution remains relatively modest compared with some other areas of cultural creation. Arab film-making increasingly seeks the cooperation of contemporary writers, and contemporary literature is a considerable source for the renewal of Arab audiovisual expression. A relationship between film and popular cultural heritage could usefully be established in both directions; audiovisual work should treat popular cultural manifestations as a global social fact on several significant levels. PMID:12261391

  6. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. In this paper we introduce our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  7. Learn locally, think globally: Exemplar variability supports higher-order generalization and word learning

    PubMed Central

    Perry, Lynn K.; Samuelson, Larissa K.; Malloy, Lisa M.; Schiffer, Ryan N.

    2011-01-01

    Research suggests variability supports successful categorization; however, the scope of variability's support at the level of higher-order generalization remains unexplored. A longitudinal study examined the role of exemplar variability in first- and second-order generalization in the context of early nominal-category learning. Sixteen eighteen-month-old children were taught twelve categories. Half were taught with sets of highly similar exemplars; half with sets of more variable exemplars. Participants' learning and generalization of trained labels and their development of more general word-learning biases were tested. All children learned labels for trained exemplars, but children trained with variable exemplars generalized to novel exemplars of these categories, developed a discriminating word-learning bias generalizing labels of novel solid objects by shape and nonsolids by material, and accelerated in vocabulary acquisition. These data demonstrate that variability leads to better abstraction of individual and global category organization, increasing learning outside the laboratory. PMID:21106892

  8. Predicting perceptual learning from higher-order cortical processing.

    PubMed

    Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan

    2016-01-01

    Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning regardless of whether the behavioral improvement is location specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change of the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making.

  9. Machine learning using a higher order correlation network

    SciTech Connect

    Lee, Y.C.; Doolen, G.; Chen, H.H.; Sun, G.Z.; Maxwell, T.; Lee, H.Y.

    1986-01-01

    A high-order correlation tensor formalism for neural networks is described. The model can simulate autoassociative, heteroassociative, and multiassociative memory. For the autoassociative model, simulation results show a drastic increase in memory capacity and speed over standard Hopfield-like correlation matrix methods. The possibility of using multiassociative memory for a learning universal inference network is also discussed. 9 refs., 5 figs.
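
    The record gives no implementation details; the following toy sketch shows one common form of a higher-order (here second-order) correlation associative memory, in which bipolar patterns are stored in a rank-3 correlation tensor and recalled by a tensor contraction followed by a sign threshold. All names and sizes are illustrative, not taken from the cited work.

```python
# Illustrative sketch of a second-order correlation (tensor) associative memory.
# Not the authors' code; a toy autoassociative example with bipolar patterns.
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 32, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Store patterns in a third-order correlation tensor T[i, j, k] = sum_p x_i x_j x_k.
T = np.einsum("pi,pj,pk->ijk", patterns, patterns, patterns)

def recall(probe, steps=5):
    """Iterative recall: s_i <- sign(sum_{j,k} T[i, j, k] * s_j * s_k)."""
    s = probe.copy()
    for _ in range(steps):
        h = np.einsum("ijk,j,k->i", T, s, s)
        s = np.where(h >= 0, 1, -1)
    return s

# Flip a few bits of a stored pattern and check recovery (should succeed for
# small, random pattern sets like this one).
noisy = patterns[0].copy()
noisy[:4] *= -1
print("recovered:", np.array_equal(recall(noisy), patterns[0]))
```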

  10. School Building Design and Audio-Visual Resources.

    ERIC Educational Resources Information Center

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  11. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    PubMed

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. PMID:25269620

  12. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  13. Learn locally, think globally. Exemplar variability supports higher-order generalization and word learning.

    PubMed

    Perry, Lynn K; Samuelson, Larissa K; Malloy, Lisa M; Schiffer, Ryan N

    2010-12-01

    Research suggests that variability of exemplars supports successful object categorization; however, the scope of variability's support at the level of higher-order generalization remains unexplored. Using a longitudinal study, we examined the role of exemplar variability in first- and second-order generalization in the context of nominal-category learning at an early age. Sixteen 18-month-old children were taught 12 categories. Half of the children were taught with sets of highly similar exemplars; the other half were taught with sets of dissimilar, variable exemplars. Participants' learning and generalization of trained labels and their development of more general word-learning biases were tested. All children were found to have learned labels for trained exemplars, but children trained with variable exemplars generalized to novel exemplars of these categories, developed a discriminating word-learning bias generalizing labels of novel solid objects by shape and labels of nonsolid objects by material, and accelerated in vocabulary acquisition. These findings demonstrate that object variability leads to better abstraction of individual and global category organization, which increases learning outside the laboratory. PMID:21106892

  14. Audio-Visual Aids: Historians in Blunderland.

    ERIC Educational Resources Information Center

    Decarie, Graeme

    1988-01-01

    A history professor relates his experiences producing and using audio-visual material and warns teachers not to rely on audio-visual aids for classroom presentations. Includes examples of popular audio-visual aids on Canada that communicate unintended, inaccurate, or unclear ideas. Urges teachers to exercise caution in the selection and use of…

  15. [Audio-visual aids and tropical medicine].

    PubMed

    Morand, J J

    1989-01-01

    The author presents a list of audio-visual productions about tropical medicine, together with their main characteristics. He observes that audio-visual educational productions are often dissociated from their promotion and therefore invites future creators to forward their work to the Audio-Visual Health Committee.

  16. Towards Postmodernist Television: INA's Audiovisual Magazine Programmes.

    ERIC Educational Resources Information Center

    Boyd-Bowman, Susan

    Over the last 10 years, French television's Institute of Audiovisual Communication (INA) has shifted from modernist to post-modernist practice in broadcasting in a series of innovative audiovisual magazine programs about communication, and in a series of longer "compilation" documentaries. The first of INA's audiovisual magazines, "Hieroglyphes,"…

  17. A Distance Learning Model for Teaching Higher Order Thinking

    ERIC Educational Resources Information Center

    Notar, Charles E.; Wilson, Janell D.; Montgomery, Mary K.

    2005-01-01

    A teaching model for distance learning (DL) requires a system (a technology) and process (a way of linking resources) that makes distance learning no different than learning in the traditional classroom. The process must support a design that provides for learning, ensures maximum transfer, and is student-centered. The process must provide a…

  18. Electrophysiological evidence for speech-specific audiovisual integration.

    PubMed

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode.

  19. Perceived synchrony for realistic and dynamic audiovisual events.

    PubMed

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.

  20. Perceived synchrony for realistic and dynamic audiovisual events

    PubMed Central

    Eg, Ragnhild; Behne, Dawn M.

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli. PMID:26082738

  2. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    NASA Astrophysics Data System (ADS)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows that is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph-cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. Heuristic thresholds are avoided in this segmentation by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
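
    As a rough sketch of the kind of correlation measure described (not the authors' implementation; the 1-D features and the fixed bandwidth are assumptions), quadratic mutual information between an audio feature and a visual feature can be estimated in closed form from Gaussian kernel density estimates:

```python
# Illustrative sketch (not the paper's code): quadratic mutual information (QMI)
# between 1-D audio and visual feature sequences, using Gaussian kernel density
# estimates whose pairwise integrals have a closed form.
import numpy as np

def gauss(d, sigma):
    """Gaussian kernel evaluated on pairwise differences d."""
    return np.exp(-d ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def quadratic_mi(audio, visual, sigma=0.5):
    """Euclidean-distance QMI: integral of (p(a, v) - p(a) p(v))^2 da dv."""
    ka = gauss(audio[:, None] - audio[None, :], sigma * np.sqrt(2))
    kv = gauss(visual[:, None] - visual[None, :], sigma * np.sqrt(2))
    v_joint = np.mean(ka * kv)                              # integral of p(a,v)^2
    v_marg = np.mean(ka) * np.mean(kv)                      # integral of p(a)^2 p(v)^2
    v_cross = np.mean(ka.mean(axis=1) * kv.mean(axis=1))    # integral of p(a,v) p(a) p(v)
    return v_joint - 2 * v_cross + v_marg

rng = np.random.default_rng(1)
audio = rng.standard_normal(200)
correlated_visual = audio + 0.3 * rng.standard_normal(200)
print(quadratic_mi(audio, correlated_visual))         # relatively large
print(quadratic_mi(audio, rng.standard_normal(200)))  # near zero
```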

  3. Order of Presentation Effects in Learning Color Categories

    ERIC Educational Resources Information Center

    Sandhofer, Catherine M.; Doumas, Leonidas A. A.

    2008-01-01

    Two studies, an experimental category learning task and a computational simulation, examined how sequencing training instances to maximize comparison and memory affects category learning. In Study 1, 2-year-old children learned color categories with three training conditions that varied in how categories were distributed throughout training and…

  4. Current Developments in Audiovisual Cataloging.

    ERIC Educational Resources Information Center

    Graham, Paul

    1985-01-01

    This paper highlights significant advances in audiovisual cataloging theory and practice: development of "Anglo-American Cataloging Rules" (second edition); revision of the MARC Films Format; and project to provide cataloging-in-publication for microcomputer software. Evolution of rules and practices as an outgrowth of needs of the community is…

  5. Audiovisual Resources for Instructional Development.

    ERIC Educational Resources Information Center

    Wilds, Thomas, Comp.; And Others

    Provided is a compilation of recently annotated audiovisual materials which present techniques, models, or other specific information that can aid in providing comprehensive services to the handicapped. Entries which include a brief description, name of distributor, technical information, and cost are presented alphabetically by title in eight…

  6. A Computer Generated Audiovisuals Catalog.

    ERIC Educational Resources Information Center

    Bogen, Betty

    Eccles Medical Sciences Library at the University of Utah has developed a computer-generated catalog for its audiovisual health and medical materials. The catalog contains four sections: (1) the main listing of type of media, with descriptions, call numbers, and Medical Subject Headings (MeSH) used for each item; (2) a listing by title, with call…

  7. Preventive Maintenance Handbook. Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Educational Products Information Exchange Inst., Stony Brook, NY.

    The preventive maintenance system for audiovisual equipment presented in this handbook is designed by specialists so that it can be used by nonspecialists in school sites. The report offers specific advice on safety factors and also lists major problems that should not be handled by nonspecialists. Other aspects of a preventive maintenance system…

  8. Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This list of audiovisual materials for environmental education was prepared by the State of Minnesota, Department of Education, Division of Instruction, to accompany the pilot curriculum in environmental education. The majority of the materials listed are available from the University of Minnesota, or from state or federal agencies. The…

  9. Encouraging Higher-Order Thinking in General Chemistry by Scaffolding Student Learning Using Marzano's Taxonomy

    ERIC Educational Resources Information Center

    Toledo, Santiago; Dubas, Justin M.

    2016-01-01

    An emphasis on higher-order thinking within the curriculum has been a subject of interest in the chemical and STEM literature due to its ability to promote meaningful, transferable learning in students. The systematic use of learning taxonomies could be a practical way to scaffold student learning in order to achieve this goal. This work proposes…

  10. The effect of visual apparent motion on audiovisual simultaneity.

    PubMed

    Kwon, Jinhwan; Ogawa, Ken-ichiro; Miyake, Yoshihiro

    2014-01-01

    Visual motion information from dynamic environments is important in multisensory temporal perception. However, it is unclear how visual motion information influences the integration of multisensory temporal perceptions. We investigated whether visual apparent motion affects audiovisual temporal perception. Visual apparent motion is a phenomenon in which two flashes presented in sequence in different positions are perceived as continuous motion. Across three experiments, participants performed temporal order judgment (TOJ) tasks. Experiment 1 was a TOJ task conducted in order to assess audiovisual simultaneity during perception of apparent motion. The results showed that the point of subjective simultaneity (PSS) was shifted toward a sound-lead stimulus, and the just noticeable difference (JND) was reduced compared with a normal TOJ task with a single flash. This indicates that visual apparent motion affects audiovisual simultaneity and improves temporal discrimination in audiovisual processing. Experiment 2 was a TOJ task conducted in order to remove the influence of the amount of flash stimulation from Experiment 1. The PSS and JND during perception of apparent motion were almost identical to those in Experiment 1, but differed from those for successive perception when long temporal intervals were included between two flashes without motion. This showed that the result obtained under the apparent motion condition was unaffected by the amount of flash stimulation. Because apparent motion was produced by a constant interval between two flashes, the results may be accounted for by specific prediction. In Experiment 3, we eliminated the influence of prediction by randomizing the intervals between the two flashes. However, the PSS and JND did not differ from those in Experiment 1. It became clear that the results obtained for the perception of visual apparent motion were not attributable to prediction. Our findings suggest that visual apparent motion changes temporal
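
    For readers unfamiliar with how the PSS and JND figures in these TOJ studies are obtained, here is a minimal, hypothetical sketch: fit a cumulative Gaussian to the proportion of "visual first" responses as a function of stimulus onset asynchrony; the PSS is the 50% point and the JND is derived from the slope. The numbers below are invented for illustration.

```python
# Hypothetical sketch: deriving PSS and JND from temporal order judgment (TOJ)
# data by fitting a cumulative Gaussian psychometric function. Data are simulated.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """P('visual first') as a cumulative Gaussian of SOA (positive = visual lead)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-200, -120, -60, -20, 20, 60, 120, 200])     # ms, audio-lead negative
p_visual_first = np.array([0.05, 0.12, 0.30, 0.45, 0.58, 0.78, 0.92, 0.97])

popt, _ = curve_fit(psychometric, soas, p_visual_first, p0=[0.0, 80.0])
pss, sigma = popt
jnd = sigma * norm.ppf(0.75)    # SOA shift from the 50% to the 75% point
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```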

  11. Flipping & Clicking Your Way to Higher-Order Learning

    ERIC Educational Resources Information Center

    Garver, Michael S.; Roberts, Brian A.

    2013-01-01

    This innovative system of teaching and learning includes the implementation of two effective learning technologies: podcasting ("flipping") and classroom response systems ("clicking"). Students watch lectures in podcast format before coming to class, which allows the "entire" class period to be devoted to active…

  12. Learning in Order To Teach in Chicxulub Puerto, Yucatan, Mexico.

    ERIC Educational Resources Information Center

    Wilber, Cynthia J.

    2000-01-01

    Describes a community-based computer education program for the young people (and adults) of Chicxulub Puerto, a small fishing village in Yucatan, Mexico. Notes the children learn Maya, Spanish, and English in the context of learning computer and telecommunication skills. Concludes that access to the Internet has made a profound difference in a…

  13. Higher Order Learning for All: A Report by the Council of Chief State School Officers on Restructuring Learning.

    ERIC Educational Resources Information Center

    Council of Chief State School Officers, Washington, DC.

    Serving as a bridge between research in the cognitive sciences and educational policymakers and practitioners, the first part of this report reviews recent research on thinking and learning. Findings emphasize work on higher order learning, the role of knowledge in learning, and the development of a disposition for thinking. Topics detailed…

  14. Conceptual similarity promotes generalization of higher order fear learning

    PubMed Central

    Dunsmoor, Joseph E.; White, Allison J.; LaBar, Kevin S.

    2011-01-01

    We tested the hypothesis that conceptual similarity promotes generalization of conditioned fear. Using a sensory preconditioning procedure, three groups of subjects learned an association between two cues that were conceptually similar, unrelated, or mismatched. Next, one of the cues was paired with a shock. The other cue was then reintroduced to test for fear generalization, as measured by the skin conductance response. Results showed enhanced fear generalization that correlated with trait anxiety levels in the group that learned an association between conceptually similar stimuli. These findings suggest that conceptual representations of conditional stimuli influence human fear learning processes. PMID:21330378

  15. USE OF NEW AUDIO-VISUAL TECHNIQUES TO TEACH MENTALLY-RETARDED CHILDREN.

    ERIC Educational Resources Information Center

    Ross, Dorothea M.

    Dependency learning, the acquisition and development of personal interrelationship values, was studied as a technique for fostering audiovisual academic learning among 54 young, educable mental retardates. Some of these subjects were taught to value simulated dependency models. These models were consistently paired with such rewarding stimuli as…

  16. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm

    ERIC Educational Resources Information Center

    Uys, P. G.

    2009-01-01

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  17. Optimizing the Learning Order of Chinese Characters Using a Novel Topological Sort Algorithm

    PubMed Central

    Wang, Jinzhao

    2016-01-01

    We present a novel algorithm for optimizing the order in which Chinese characters are learned, one that incorporates the benefits of learning them in order of usage frequency and in order of their hierarchal structural relationships. We show that our work outperforms previously published orders and algorithms. Our algorithm is applicable to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order. PMID:27706234
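
    The abstract does not spell out the algorithm, but the general idea of visiting nodes in topological order while preferring intrinsically important ones can be sketched as a priority-queue variant of Kahn's algorithm: among all characters whose structural components are already scheduled, always emit the most frequent one. The toy graph and frequencies below are illustrative assumptions, not data from the paper.

```python
# Illustrative sketch of a frequency-weighted topological sort (a priority-queue
# variant of Kahn's algorithm), in the spirit of the character-ordering problem
# described above. An edge means "component must be learned before the character".
import heapq

def ordered_learning_sequence(components, frequency):
    """components: dict node -> set of prerequisite nodes; frequency: node -> float."""
    indegree = {node: len(parents) for node, parents in components.items()}
    children = {node: [] for node in components}
    for node, parents in components.items():
        for parent in parents:
            children[parent].append(node)

    # Max-heap on frequency (negate for Python's min-heap).
    ready = [(-frequency[n], n) for n, d in indegree.items() if d == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, node = heapq.heappop(ready)
        order.append(node)
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                heapq.heappush(ready, (-frequency[child], child))
    return order

# Toy example: '好' contains the components '女' and '子'.
graph = {"女": set(), "子": set(), "好": {"女", "子"}, "人": set()}
freq = {"人": 0.9, "好": 0.8, "女": 0.4, "子": 0.3}
print(ordered_learning_sequence(graph, freq))   # ['人', '女', '子', '好']
```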

  18. Order effects in contingency learning: the role of task complexity.

    PubMed

    Marsh, Jessecae K; Ahn, Woo-Kyoung

    2006-04-01

    Dennis and Ahn (2001) found that during contingency learning, initial evidence influences causal judgments more than does later evidence (a primacy effect), whereas López, Shanks, Almaraz, and Fernández (1998) found the opposite (a recency effect). We propose that in contingency learning, people use initial evidence to develop an anchoring hypothesis that tends to be underadjusted by later evidence, resulting in a primacy effect. Thus, factors interfering with initial hypothesis development, such as simultaneously learning too many contingencies, as in López et al., would reduce the primacy effect. Experiment 1 showed a primacy effect with learning contingencies involving only one outcome but no primacy effect with two outcomes. Experiment 2 demonstrated that the magnitude of the primacy effect correlated with participants' verbal working memory capacity. It is concluded that a critical moderator for exhibition of the primacy effect is task complexity, presumably because it interferes with initial hypothesis development.

  19. Bilingualism affects audiovisual phoneme identification.

    PubMed

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment, we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience--i.e., exposure to a double phonological code during childhood--affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically "deaf" and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation, in contrast, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded more quickly than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  20. Multi-strategy learning of search control for partial-order planning

    SciTech Connect

    Estlin, T.A.; Mooney, R.J.

    1996-12-31

    Most research in planning and learning has involved linear, state-based planners. This paper presents SCOPE, a system for learning search-control rules that improve the performance of a partial-order planner. SCOPE integrates explanation-based and inductive learning techniques to acquire control rules for a partial-order planner. Learned rules are in the form of selection heuristics that help the planner choose between competing plan refinements. Specifically, SCOPE learns domain-specific control rules for a version of the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains.

  1. Conceptual Similarity Promotes Generalization of Higher Order Fear Learning

    ERIC Educational Resources Information Center

    Dunsmoor, Joseph E.; White, Allison J.; LaBar, Kevin S.

    2011-01-01

    We tested the hypothesis that conceptual similarity promotes generalization of conditioned fear. Using a sensory preconditioning procedure, three groups of subjects learned an association between two cues that were conceptually similar, unrelated, or mismatched. Next, one of the cues was paired with a shock. The other cue was then reintroduced to…

  2. Catalog of Audiovisual Materials Related to Rehabilitation.

    ERIC Educational Resources Information Center

    Mann, Joe, Ed.; Henderson, Jim, Ed.

    An annotated listing of a variety of audiovisual formats on content related to the social-rehabilitation process is provided. The materials in the listing were selected from a collection of over 200 audiovisual catalogs. The major portion of the materials has not been screened. The materials are classified alphabetically by the following subject…

  3. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. When conditions (a) and (b) were compared, adding visual information significantly improved comfort assessments in only three of the seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping.

  4. Audio-Visual Aids in Universities

    ERIC Educational Resources Information Center

    Douglas, Jackie

    1970-01-01

    A report on the proceedings and ideas expressed at a one day seminar on "Audio-Visual Equipment--Its Uses and Applications for Teaching and Research in Universities." The seminar was organized by England's National Committee for Audio-Visual Aids in Education in conjunction with the British Universities Film Council. (LS)

  5. Solar Energy Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC. Office of Policy Development and Research.

    This directory presents an annotated bibliography of non-print information resources dealing with solar energy. The document is divided by type of audio-visual medium, including: (1) Films, (2) Slides and Filmstrips, and (3) Videotapes. A fourth section provides addresses and telephone numbers of audiovisual aids sources, and lists the page…

  6. Audio-visual gender recognition

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Xu, Xun; Huang, Thomas S.

    2007-11-01

    Combining different modalities for a pattern recognition task is a very promising field. Humans routinely fuse information from different modalities to recognize objects, perform inference, and so on. Audio-visual gender recognition is one of the most common tasks in human social communication: humans can identify gender by facial appearance, by speech, and also by body gait. Indeed, human gender recognition is a multi-modal data acquisition and processing procedure. However, computational multimodal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition and to explore the improvement gained by combining different modalities.
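
    The entry describes fusing speech and facial cues without implementation detail; one simple, hypothetical form of such fusion is a weighted combination of per-modality class probabilities (a late-fusion sketch, not the authors' system):

```python
# Illustrative sketch (not the authors' system): late fusion of per-modality
# gender classifiers by a weighted combination of their class probabilities.
import numpy as np

def fuse(p_audio, p_visual, w_audio=0.5):
    """Combine class probabilities from audio and visual classifiers.
    p_audio, p_visual: arrays of shape (n_classes,) that each sum to 1."""
    log_p = w_audio * np.log(p_audio) + (1 - w_audio) * np.log(p_visual)
    p = np.exp(log_p - log_p.max())     # renormalize in a numerically safe way
    return p / p.sum()

# Hypothetical posteriors for classes [female, male] from each modality.
p_audio = np.array([0.35, 0.65])
p_visual = np.array([0.20, 0.80])
print(fuse(p_audio, p_visual))          # fused posterior favors the shared answer
```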

  7. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 29 (Labor), Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings, § 2.13 Audiovisual coverage prohibited (2010 edition): The Department shall not permit audiovisual coverage of...

  8. Placing order in space: the SNARC effect in serial learning.

    PubMed

    Previtali, Paola; de Hevia, Maria Dolores; Girelli, Luisa

    2010-03-01

    The SNARC effect, consisting of a systematic association between numbers and lateralized response, reflects the mental representation of magnitude along a left-to-right mental number line (Dehaene et al. in J Exp Psychol 122:371-396, 1993). Critically, this effect has been reported in the classification of overlearned non-numerical sequences such as letters, days and months (Gevers et al. in Cognition 87:B87-B95, 2003 and Cortex 40:171-172, 2004) suggesting that ordinal, rather than magnitude information, is critical for spatial coding. This study tests the hypothesis of an oriented spatial representation as the privileged way of mentally organizing serial information, by looking for stimulus-response compatibility effects in the processing of a newly acquired arbitrary sequence. Here we report an association between ordinal position of the items and spatial response preference for both order-relevant and order-irrelevant tasks. These results suggest that any ordered information, even when order is not intrinsically relevant to it, is spontaneously mapped in the representational space. This spatial representation is likely to acquire a left-to-right orientation, at least in western cultures.

  9. Audiovisual Materials and Techniques for Teaching Foreign Languages: Recent Trends and Activities.

    ERIC Educational Resources Information Center

    Parks, Carolyn

    Recent experimentation with audio-visual (A-V) materials has provided insight into the language learning process. Researchers and teachers alike have recognized the importance of using A-V materials to achieve goals related to meaningful and relevant communication, retention and recall of language items, non-verbal aspects of communication, and…

  10. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    ERIC Educational Resources Information Center

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  11. Audiovisual Resources for Teaching Instructional Technology; an Annotated List of Materials.

    ERIC Educational Resources Information Center

    Ely, Donald P., Ed.; Beilby, Albert, Ed.

    The audiovisual resources listed in this catalog cover 10 instructional-technology topics: administration; facilities; instructional design; learning and communication; media equipment; media production; media utilization; research; instructional techniques; and society, education, and technology. Any entry falling into more than one category is…

  12. When learning order affects sensitivity to base rates: challenges for theories of causal learning.

    PubMed

    Reips, Ulf-Dietrich; Waldmann, Michael R

    2008-01-01

    In three experiments we investigated whether two procedures of acquiring knowledge about the same causal structure, predictive learning (from causes to effects) versus diagnostic learning (from effects to causes), would lead to different base-rate use in diagnostic judgments. Results showed that learners are capable of incorporating base-rate information in their judgments regardless of the direction in which the causal structure is learned. However, this only holds true for relatively simple scenarios. When complexity was increased, base rates were only used after diagnostic learning, but were largely neglected after predictive learning. It could be shown that this asymmetry is not due to a failure of encoding base rates in predictive learning because participants in all conditions were fairly good at reporting them. The findings present challenges for all theories of causal learning.
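
    To make explicit what "using base rates" in a diagnostic judgment amounts to, here is a small worked example via Bayes' rule; the probabilities are invented for illustration only.

```python
# Worked, hypothetical example of base-rate use in a diagnostic judgment:
# P(cause | effect) depends on the cause's base rate, not only on P(effect | cause).
def posterior(base_rate, p_effect_given_cause, p_effect_given_no_cause):
    p_effect = (base_rate * p_effect_given_cause
                + (1 - base_rate) * p_effect_given_no_cause)
    return base_rate * p_effect_given_cause / p_effect

# Same likelihoods, different base rates -> very different diagnostic judgments.
print(posterior(0.50, 0.8, 0.2))   # ~0.80 when the cause is common
print(posterior(0.05, 0.8, 0.2))   # ~0.17 when the cause is rare
```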

  13. No Solid Empirical Evidence for the SOLID (Serial Order Learning Impairment) Hypothesis of Dyslexia

    ERIC Educational Resources Information Center

    Staels, Eva; Van den Broeck, Wim

    2015-01-01

    This article reports on 2 studies that attempted to replicate the findings of a study by Szmalec, Loncke, Page, and Duyck (2011) on Hebb repetition learning in dyslexic individuals, from which these authors concluded that dyslexics suffer from a deficit in long-term learning of serial order information. In 2 experiments, 1 on adolescents (N = 59)…

  14. Beyond Course Availability: An Investigation into Order and Concurrency Effects of Undergraduate Programming Courses on Learning.

    ERIC Educational Resources Information Center

    Urbaczewski, Andrew; Urbaczewski, Lise

    The objective of this study was to find the answers to two primary research questions: "Do students learn programming languages better when they are offered in a particular order, such as 4th generation languages before 3rd generation languages?"; and "Do students learn programming languages better when they are taken in separate semesters as…

  15. Strategic Learning in Youth with Traumatic Brain Injury: Evidence for Stall in Higher-Order Cognition

    ERIC Educational Resources Information Center

    Gamino, Jacquelyn F.; Chapman, Sandra B.; Cook, Lori G.

    2009-01-01

    Little is known about strategic learning ability in preteens and adolescents with traumatic brain injury (TBI). Strategic learning is the ability to combine and synthesize details to form abstracted gist-based meanings, a higher-order cognitive skill associated with frontal lobe functions and higher classroom performance. Summarization tasks were…

  16. No rapid audiovisual recalibration in adults on the autism spectrum.

    PubMed

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as the typical adults'. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  17. No rapid audiovisual recalibration in adults on the autism spectrum

    PubMed Central

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as those of the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  18. Audiovisual temporal fusion in 6-month-old infants.

    PubMed

    Kopp, Franziska

    2014-07-01

    The aim of this study was to investigate neural dynamics of audiovisual temporal fusion processes in 6-month-old infants using event-related brain potentials (ERPs). In a habituation-test paradigm, infants did not show any behavioral signs of discrimination of an audiovisual asynchrony of 200 ms, indicating perceptual fusion. In a subsequent EEG experiment, audiovisual synchronous stimuli and stimuli with a visual delay of 200 ms were presented in random order. In contrast to the behavioral data, brain activity differed significantly between the two conditions. Critically, N1 and P2 latency delays were not observed between synchronous and fused items, contrary to previously observed N1 and P2 latency delays between synchrony and perceived asynchrony. Hence, temporal interaction processes in the infant brain between the two sensory modalities varied as a function of perceptual fusion versus asynchrony perception. The visual recognition components Pb and Nc were modulated prior to sound onset, emphasizing the importance of anticipatory visual events for the prediction of auditory signals. Results suggest mechanisms by which young infants predictively adjust their ongoing neural activity to the temporal synchrony relations to be expected between vision and audition.

  19. Role of audiovisual synchrony in driving head orienting responses.

    PubMed

    Ho, Cristy; Gray, Rob; Spence, Charles

    2013-06-01

    Many studies now suggest that optimal multisensory integration sometimes occurs under conditions where auditory and visual stimuli are presented asynchronously (i.e. at asynchronies of 100 ms or more). Such observations lead to the suggestion that participants' speeded orienting responses might be enhanced following the presentation of asynchronous (as compared to synchronous) peripheral audiovisual spatial cues. Here, we report a series of three experiments designed to investigate this issue. Upon establishing the effectiveness of bimodal cuing over the best of its unimodal components (Experiment 1), participants had to make speeded head-turning or steering (wheel-turning) responses toward the cued direction (Experiment 2), or an incompatible response away from the cue (Experiment 3), in response to random peripheral audiovisual stimuli presented at stimulus onset asynchronies ranging from -100 to 100 ms. Race model inequality analysis of the results (Experiment 1) revealed different mechanisms underlying the observed multisensory facilitation of participants' head-turning versus steering responses. In Experiments 2 and 3, the synchronous presentation of the component auditory and visual cues gave rise to the largest facilitation of participants' response latencies. Intriguingly, when the participants had to subjectively judge the simultaneity of the audiovisual stimuli, the point of subjective simultaneity occurred when the auditory stimulus lagged behind the visual stimulus by 22 ms. Taken together, these results appear to suggest that the maximally beneficial behavioural (head and manual) orienting responses resulting from peripherally presented audiovisual stimuli occur when the component signals are presented in synchrony. These findings suggest that while the brain uses precise temporal synchrony in order to control its orienting responses, the system that the human brain uses to consciously judge synchrony appears to be less finely tuned.
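
    Miller's race model inequality, used in Experiment 1 above, bounds how fast redundant-target responses can be if audition and vision are merely racing rather than integrating: the audiovisual reaction-time CDF should never exceed the sum of the two unimodal CDFs. The sketch below illustrates that test on invented reaction times; the data, time grid, and parameters are purely illustrative and not from the study.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Maximum violation of Miller's inequality F_AV(t) <= F_A(t) + F_V(t).
    Positive values mean the bimodal condition is faster than any race
    (separate-activation) model allows."""
    f_av = ecdf(rt_av, t_grid)
    bound = np.clip(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 0.0, 1.0)
    return np.max(f_av - bound)

# Invented reaction times (ms), for illustration only.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)   # auditory-only
rt_v = rng.normal(350, 45, 200)   # visual-only
rt_av = rng.normal(270, 35, 200)  # audiovisual (redundant target)

t_grid = np.linspace(150, 500, 71)
print(f"max violation of Miller's bound: {race_model_violation(rt_av, rt_a, rt_v, t_grid):.3f}")
```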

  20. A checklist for planning and designing audiovisual facilities in health sciences libraries.

    PubMed Central

    Holland, G J; Bischoff, F A; Foxman, D S

    1984-01-01

    Developed by an MLA/HeSCA (Health Sciences Communications Association) joint committee, this checklist is intended to serve as a conceptual framework for planning a new or renovated audiovisual facility in a health sciences library. Emphasis is placed on the philosophical and organizational decisions that must be made about an audiovisual facility before the technical or spatial decisions can be wisely made. Specific standards for facilities or equipment are not included. The first section focuses on health sciences library settings. Ideas presented in the remaining sections could apply to academic learning resource center environments as well. A bibliography relating to all aspects of audiovisual facilities planning and design is included with references to specific sections of the checklist. PMID:6208957

  1. An Audio-Visual Approach to Training

    ERIC Educational Resources Information Center

    Hearnshaw, Trevor

    1977-01-01

    Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)

  2. High-order behaviour in learning gate networks with lateral inhibition.

    PubMed

    Blanzieri, E; Grandi, F; Maio, D

    1996-01-01

    In this work we present a neural network model incorporating activity-dependent presynaptic facilitation with multidimensional inputs. The processing unit used is based on a slightly simplified version of the Learning Gate Model proposed by Ciaccia et al. (1992). The network topology integrates a well-known biological neural circuit with a lateral inhibition connection subnet. By means of simulation experiments, we show that the proposed networks exhibit basic and high-order features of associative learning. In particular, overshadowing and blocking are reproduced in the presence of both noise-free and noisy inputs. The role of noise in the development of high-order learning capabilities is also discussed.
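
    Overshadowing and blocking, the high-order associative effects referred to above, can also be reproduced by a textbook Rescorla-Wagner learner. The sketch below is not the authors' Learning Gate Model; it is a minimal, generic illustration of why these phenomena fall out of a shared prediction-error update, with invented parameters.

```python
import numpy as np

def rescorla_wagner(trials, alphas, beta=0.3, lam=1.0, n_cues=3):
    """Train associative weights; each trial is (present_cues, reinforced)."""
    w = np.zeros(n_cues)
    for cues, reinforced in trials:
        x = np.zeros(n_cues)
        x[list(cues)] = 1.0
        delta = (lam if reinforced else 0.0) - w @ x   # shared prediction error
        w += beta * alphas * x * delta                 # update only present cues
    return w

A, B, C = 0, 1, 2
alphas = np.ones(3) * 0.5

# Overshadowing: A and B are always trained in compound, so each acquires
# less strength than a cue trained alone would.
overshadow = [((A, B), True)] * 40
print("overshadowing, w(A), w(B):", rescorla_wagner(overshadow, alphas)[:2])

# Blocking: pre-train A alone, then train the A+C compound; C gains little
# strength because A already predicts the outcome.
blocking = [((A,), True)] * 40 + [((A, C), True)] * 40
print("blocking, w(C):", rescorla_wagner(blocking, alphas)[C])
```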

  3. U.S. Government Films, 1971 Supplement; A Catalog of Audiovisual Materials for Rent and Sale by the National Audiovisual Center.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.

    The first edition of the National Audiovisual Center sales catalog (LI 003875) is updated by this supplement. Changes in price and order number, as well as deletions from the 1969 edition, are noted in this 1971 version. Purchase and rental information for the sound films and silent filmstrips is provided. The broad subject categories are:…

  4. Perception of Dynamic and Static Audiovisual Sequences in 3- and 4-Month-Old Infants

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2008-01-01

    This study investigated perception of audiovisual sequences in 3- and 4-month-old infants. Infants were habituated to sequences consisting of moving/sounding or looming/sounding objects and then tested for their ability to detect changes in the order of the objects, sounds, or both. Results showed that 3-month-olds perceived the order of 3-element…

  5. The Current Status of Federal Audiovisual Policy and How These Policies Affect the National Audiovisual Center.

    ERIC Educational Resources Information Center

    Flood, R. Kevin

    The National Audiovisual Center was established in 1968 to provide a single organizational unit that serves as a central information point on completed audiovisual materials and a central sales point for the distribution of media that were produced by or for federal agencies. This speech describes the services the center can provide users of…

  6. An Investigation of Higher-Order Thinking Skills in Smaller Learning Community Social Studies Classrooms

    ERIC Educational Resources Information Center

    Fischer, Christopher; Bol, Linda; Pribesh, Shana

    2011-01-01

    This study investigated the extent to which higher-order thinking skills are promoted in social studies classes in high schools that are implementing smaller learning communities (SLCs). Data collection in this mixed-methods study included classroom observations and in-depth interviews. Findings indicated that higher-order thinking was rarely…

  7. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    PubMed

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.

  8. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  9. Learning Partnership: Students and Faculty Learning Together to Facilitate Reflection and Higher Order Thinking in a Blended Course

    ERIC Educational Resources Information Center

    McDonald, Paige L.; Straker, Howard O.; Schlumpf, Karen S.; Plack, Margaret M.

    2014-01-01

    This article discusses a learning partnership among faculty and students to influence reflective practice in a blended course. Faculty redesigned a traditional face-to-face (FTF) introductory physician assistant course into a blended course to promote increased reflection and higher order thinking. Early student reflective writing suggested a need…

  10. Positive Emotion Facilitates Audiovisual Binding

    PubMed Central

    Kitamura, Miho S.; Watanabe, Katsumi; Kitagawa, Norimichi

    2016-01-01

    It has been shown that positive emotions can facilitate integrative and associative information processing in cognitive functions. The present study examined whether emotions in observers can also enhance perceptual integrative processes. We tested a total of 125 participants to reveal the effects of emotional states and traits in observers on multisensory binding between auditory and visual signals. Participants in Experiment 1 observed two identical visual disks moving toward each other, coinciding, and moving away, presented with a brief sound. We found that for participants with a lower depressive tendency, induced happy moods increased the width of the temporal binding window of the sound-induced bounce percept in the stream/bounce display, while no effect was found for participants with a higher depressive tendency. In contrast, no effect of mood was observed for a simple audiovisual simultaneity discrimination task in Experiment 2. These results provide the first empirical evidence of a dependency of multisensory binding upon emotional states and traits, revealing that positive emotions can facilitate multisensory binding processes at a perceptual level. PMID:26834585

  11. Audiovisual integration facilitates unconscious visual scene processing.

    PubMed

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration.

  12. Lip movements affect infants' audiovisual speech perception.

    PubMed

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  13. 29 CFR 2.12 - Audiovisual coverage permitted.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    29 Labor 1 (2010-07-01). Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings, § 2.12 Audiovisual coverage permitted. The following are the types of hearings where the…

  14. Govt. Pubs: U.S. Government Produced Audiovisual Materials.

    ERIC Educational Resources Information Center

    Korman, Richard

    1981-01-01

    Describes the availability of United States government-produced audiovisual materials and discusses two audiovisual clearinghouses--the National Audiovisual Center (NAC) and the National Library of Medicine (NLM). Finding aids made available by NAC, NLM, and other government agencies are mentioned. NAC and the U.S. Government Printing Office…

  15. A Framework for Efficient Structured Max-Margin Learning of High-Order MRF Models.

    PubMed

    Komodakis, Nikos; Xiang, Bo; Paragios, Nikos

    2015-07-01

    We present a very general algorithm for structured prediction learning that is able to efficiently handle discrete MRFs/CRFs (including both pairwise and higher-order models) so long as they can admit a decomposition into tractable subproblems. At its core, it relies on a dual decomposition principle that has been recently employed in the task of MRF optimization. By properly combining such an approach with a max-margin learning method, the proposed framework manages to reduce the training of a complex high-order MRF to the parallel training of a series of simple slave MRFs that are much easier to handle. This leads to a very efficient and general learning scheme that relies on solid mathematical principles. We thoroughly analyze its theoretical properties, and also show that it can yield learning algorithms of increasing accuracy since it naturally allows a hierarchy of convex relaxations to be used for loss-augmented MAP-MRF inference within a max-margin learning approach. Furthermore, it can be easily adapted to take advantage of the special structure that may be present in a given class of MRFs. We demonstrate the generality and flexibility of our approach by testing it on a variety of scenarios, including training of pairwise and higher-order MRFs, training by using different types of regularizers and/or different types of dissimilarity loss functions, as well as by learning of appropriate models for a variety of vision tasks (including high-order models for compact pose-invariant shape priors, knowledge-based segmentation, image denoising, stereo matching as well as high-order Potts MRFs). PMID:26352450
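
    The dual-decomposition, max-margin framework described above is far more general than anything that fits in a short example, but the core loop of structured learning over an MRF, in which a full labelling is decoded inside every parameter update, can be illustrated with a much simpler structured-perceptron sketch on a pairwise chain model. All data, names, and parameters below are invented; this is not the authors' algorithm.

```python
import numpy as np

def viterbi(unary, pairwise):
    """MAP labelling of a chain MRF: unary[t, k] node scores, pairwise[k, k'] edge scores."""
    T, K = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise + unary[t][None, :]   # (prev label, next label)
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(K)]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def train_structured_perceptron(data, K, D, epochs=5):
    """data: list of (features (T, D), gold labels); returns node weights and edge scores."""
    W, P = np.zeros((K, D)), np.zeros((K, K))
    for _ in range(epochs):
        for feats, gold in data:
            pred = viterbi(feats @ W.T, P)          # decode with current parameters
            for t, (g, p) in enumerate(zip(gold, pred)):
                if g != p:                          # perceptron update on node features
                    W[g] += feats[t]
                    W[p] -= feats[t]
            for t in range(1, len(gold)):           # and on transition scores
                P[gold[t - 1], gold[t]] += 1.0
                P[pred[t - 1], pred[t]] -= 1.0
    return W, P

# Tiny invented task: noisy per-position features with sticky label transitions.
rng = np.random.default_rng(0)
data = []
for _ in range(200):
    gold = [0] * 4 + [1] * 4
    feats = np.eye(2)[gold] + rng.normal(0, 0.8, (8, 2))
    data.append((feats, gold))
W, P = train_structured_perceptron(data, K=2, D=2)
print("decoded:", viterbi(data[0][0] @ W.T, P), "gold:", data[0][1])
```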

  16. Learning assignment order of instances for the constrained K-means clustering algorithm.

    PubMed

    Hong, Yi; Kwong, Sam

    2009-04-01

    The sensitivity of the constrained K-means clustering algorithm (Cop-Kmeans) to the assignment order of instances is studied, and a novel assignment order learning method for Cop-Kmeans, termed the clustering Uncertainty-based Assignment order Learning Algorithm (UALA), is proposed in this paper. The main idea of UALA is to rank all instances in the data set according to their clustering uncertainties, calculated by using ensembles of multiple clustering algorithms. Experimental results on several real data sets with artificial instance-level constraints demonstrate that UALA can identify a good assignment order of instances for Cop-Kmeans. In addition, the effects of ensemble sizes on the performance of UALA are analyzed, and the generalization property of Cop-Kmeans is also studied.
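
    A rough sketch of the uncertainty-ranking idea, not the authors' exact procedure: run an ensemble of plain k-means clusterings with different seeds, score each instance by how inconsistently it is co-clustered with the others across the ensemble, and assign low-uncertainty instances first. It uses scikit-learn, and the data and parameters are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def assignment_uncertainty(X, k, ensemble_size=10, seed=0):
    """Per-instance clustering uncertainty estimated from a k-means ensemble.

    For every pair of instances we record how often they are co-clustered;
    an instance whose co-clustering frequencies sit near 0.5 (rather than
    near 0 or 1) is considered uncertain and should be assigned late."""
    rng = np.random.default_rng(seed)
    n = len(X)
    co = np.zeros((n, n))
    for _ in range(ensemble_size):
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=int(rng.integers(1_000_000))).fit_predict(X)
        co += (labels[:, None] == labels[None, :])
    co /= ensemble_size
    # Uncertainty: average closeness of co-clustering frequencies to 0.5.
    return np.mean(1.0 - 2.0 * np.abs(co - 0.5), axis=1)

X, _ = make_blobs(n_samples=120, centers=3, cluster_std=2.0, random_state=1)
uncertainty = assignment_uncertainty(X, k=3)
assignment_order = np.argsort(uncertainty)   # most certain instances first
print("first five instances to assign:", assignment_order[:5])
```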

  17. Mental representations of magnitude and order: a dissociation by sensorimotor learning.

    PubMed

    Badets, Arnaud; Boutin, Arnaud; Heuer, Herbert

    2015-05-01

    Numbers and spatially directed actions share cognitive representations. This assertion is derived from studies that have demonstrated that the processing of small- and large-magnitude numbers facilitates motor behaviors that are directed to the left and right, respectively. However, little is known about the role of sensorimotor learning for such number-action associations. In this study, we show that sensorimotor learning in a serial reaction-time task can modify the associations between number magnitudes and spatially directed movements. Experiments 1 and 3 revealed that this effect is present only for the learned sequence and does not transfer to a novel unpracticed sequence. Experiments 2 and 4 showed that the modification of stimulus-action associations by sensorimotor learning does not occur for other sets of ordered stimuli such as letters of the alphabet. These results strongly suggest that numbers and actions share a common magnitude representation that differs from the common order representation shared by letters and spatially directed actions. Only the magnitude representation, but not the order representation, can be modified episodically by sensorimotor learning.

  18. 76 FR 15311 - Legacy Learning Systems, Inc.; Analysis of Proposed Consent Order To Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-21

    ... Legacy Learning Systems, Inc.; Analysis of Proposed Consent Order To Aid Public Comment AGENCY: Federal... methods of competition. The attached Analysis to Aid Public Comment describes both the allegations in the... period of thirty (30) days. The following Analysis to Aid Public Comment describes the terms of...

  19. Learning and Generalization on Asynchrony and Order Tasks at Sound Offset: Implications for Underlying Neural Circuitry

    ERIC Educational Resources Information Center

    Mossbridge, Julia A.; Scissors, Beth N.; Wright, Beverly A.

    2008-01-01

    Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel…

  20. Higher-Order Thinking Development through Adaptive Problem-Based Learning

    ERIC Educational Resources Information Center

    Raiyn, Jamal; Tilchin, Oleg

    2015-01-01

    In this paper we propose an approach to organizing Adaptive Problem-Based Learning (PBL) leading to the development of Higher-Order Thinking (HOT) skills and collaborative skills in students. Adaptability of PBL is expressed by changes in fixed instructor assessments caused by the dynamics of developing HOT skills needed for problem solving,…

  1. "What Do I Do Here?": Higher Order Learning Effects of Enhancing Task Instructions

    ERIC Educational Resources Information Center

    Chamberlain, Susanna; Zuvela, Danni

    2014-01-01

    This paper reports the findings of a one-year research project focused on a series of structured interventions aimed at enhancing task instruction to develop students' understanding of higher assessment practices, and encouraging higher order learning. It describes the nature and iterations of the interventions, made into a large-enrolment online…

  2. Changes in Teaching in Order to Help Students with Learning Difficulties Improve in Cypriot Primary Classes

    ERIC Educational Resources Information Center

    Loizou, Florentia

    2016-01-01

    This article aims to explore what changes two Cypriot primary school teachers brought in their teaching in order to help students with learning difficulties improve in their classes. The study was qualitative and used non-participant observation in two primary classrooms in different primary schools and semi-structured interviews with the main…

  3. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution

    PubMed Central

    Phillips, Steven; Wilson, William H.

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other “structurally related” cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture. PMID:27505411

  4. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution.

    PubMed

    Phillips, Steven; Wilson, William H

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other "structurally related" cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture. PMID:27505411

  5. Linking memory and language: Evidence for a serial-order learning impairment in dyslexia.

    PubMed

    Bogaerts, Louisa; Szmalec, Arnaud; Hachmann, Wibke M; Page, Mike P A; Duyck, Wouter

    2015-01-01

    The present study investigated long-term serial-order learning impairments, operationalized as reduced Hebb repetition learning (HRL), in people with dyslexia. In a first multi-session experiment, we investigated both the persistence of a serial-order learning impairment and the long-term retention of serial-order representations, both in a group of Dutch-speaking adults with developmental dyslexia and in a matched control group. In a second experiment, we relied on the assumption that HRL mimics naturalistic word-form acquisition and investigated the lexicalization of novel word-forms acquired through HRL. First, our results demonstrate that adults with dyslexia are fundamentally impaired in the long-term acquisition of serial-order information. Second, dyslexic and control participants show comparable retention of the long-term serial-order representations in memory over a period of 1 month. Third, the data suggest weaker lexicalization of newly acquired word-forms in the dyslexic group. We discuss the integration of these findings into current theoretical views of dyslexia.

  6. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Li, Junmin

    2016-07-01

    In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth-order (M ≥ 2) distributed multi-agent systems. Every follower agent has a higher-order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are a higher-order nonlinear system and are only available to a portion of the follower agents. With distributed initial-state learning, unified distributed protocols combining time-domain and iteration-domain adaptive laws guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple-robot system are provided to demonstrate the performance of the proposed approach.

  7. Coots use hatch order to learn to recognize and reject conspecific brood parasitic chicks.

    PubMed

    Shizuka, Daizaburo; Lyon, Bruce E

    2010-01-14

    Avian brood parasites and their hosts provide model systems for investigating links between recognition, learning, and their fitness consequences. One major evolutionary puzzle has continued to capture the attention of naturalists for centuries: why do hosts of brood parasites generally fail to recognize parasitic offspring after they have hatched from the egg, even when the host and parasitic chicks differ to almost comic degrees? One prominent theory to explain this pattern proposes that the costs of mistakenly learning to recognize the wrong offspring make recognition maladaptive. Here we show that American coots, Fulica americana, can recognize and reject parasitic chicks in their brood by using learned cues, despite the fact that the hosts and the brood parasites are of the same species. A series of chick cross-fostering experiments confirm that coots use first-hatched chicks in a brood as referents to learn to recognize their own chicks and then discriminate against later-hatched parasitic chicks in the same brood. When experimentally provided with the wrong reference chicks, coots can be induced to discriminate against their own offspring, confirming that the learning errors proposed by theory can exist. However, learning based on hatching order is reliable in naturally parasitized coot nests because host eggs hatch predictably ahead of parasite eggs. Conversely, a lack of reliable information may help to explain why the evolution of chick recognition is not more common in hosts of most interspecific brood parasites. PMID:20016486

  8. HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud

    SciTech Connect

    An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.

    1988-01-01

    A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the "building blocks" or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatorial explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.
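
    The "default hierarchy" idea, in which specific chunk rules override shorter, more general defaults, can be illustrated with a toy letter-to-phoneme lookup. The rule table and phoneme codes below are invented and far cruder than HIERtalker's learned clusters; the point is only the longest-match-with-fallback control structure.

```python
# Toy default hierarchy: longer (more specific) letter chunks override
# shorter default rules. All rules and phoneme codes are invented.
RULES = {
    "ough": "O",    # specific chunk rule (e.g. "though")
    "ch": "tS",
    "th": "T",
    "sh": "S",
    "a": "ae", "b": "b", "c": "k", "e": "e", "g": "g", "h": "h",
    "i": "I", "o": "o", "r": "r", "s": "s", "t": "t", "u": "V",
}
MAX_CHUNK = max(len(k) for k in RULES)

def to_phonemes(word):
    """Greedy longest-match translation: try the most specific rule first,
    fall back to shorter default rules when no longer chunk matches."""
    out, i = [], 0
    while i < len(word):
        for size in range(min(MAX_CHUNK, len(word) - i), 0, -1):
            chunk = word[i:i + size]
            if chunk in RULES:
                out.append(RULES[chunk])
                i += size
                break
        else:
            out.append("?")   # no rule at any level of specificity
            i += 1
    return " ".join(out)

for w in ["though", "chat", "rush", "bit"]:
    print(w, "->", to_phonemes(w))
```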

  9. Planning and Producing Audiovisual Materials. Third Edition.

    ERIC Educational Resources Information Center

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  10. Audiovisual Prosody and Feeling of Knowing

    ERIC Educational Resources Information Center

    Swerts, M.; Krahmer, E.

    2005-01-01

    This paper describes two experiments on the role of audiovisual prosody for signalling and detecting meta-cognitive information in question answering. The first study consists of an experiment, in which participants are asked factual questions in a conversational setting, while they are being filmed. Statistical analyses bring to light that the…

  11. Dissociating Verbal and Nonverbal Audiovisual Object Processing

    ERIC Educational Resources Information Center

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…

  12. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  13. Active Methodology in the Audiovisual Communication Degree

    ERIC Educational Resources Information Center

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa

    2010-01-01

    The paper describes how the active methodologies of the new European Higher Education Area have been adapted in the new Audiovisual Communication degree, from the perspective of subjects related to interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…

  14. Health Science Audiovisuals in Online Databases.

    ERIC Educational Resources Information Center

    Van Camp, Ann

    1980-01-01

    Provides descriptions of 14 databases that contain citations to audiovisual instructional materials: AGRICOLA, AVLINE, AVMARC, BIOETHICSLINE, CATLINE, CHILD ABUSE AND NEGLECT, DRUG INFO, ERIC, EXCEPTIONAL CHILD EDUCATION RESOURCES (ECER), LIBCON, NICEM, NICSEM/NIMIS, NIMH, and OCLC. Information for each includes subject content, update frequency,…

  15. Audio-Visual Materials for Chinese Studies.

    ERIC Educational Resources Information Center

    Ching, Eugene, Comp.; Ching, Nora C., Comp.

    This publication is designed for teachers of Chinese language and culture who are interested in using audiovisual materials to supplement classroom instruction. The listings objectively present materials which are available; the compilers have not attempted to evaluate them. Content includes historical studies, techniques of brush painting, myths,…

  16. A Selection of Audiovisual Materials on Disabilities.

    ERIC Educational Resources Information Center

    Mayo, Kathleen; Rider, Sheila

    Disabled persons, family members, organizations, and libraries are often looking for materials to help inform, educate, or challenge them regarding the issues surrounding disabilities. This directory of audiovisual materials available from the State Library of Florida includes materials that present ideas and personal experiences covering a range…

  17. Longevity and Depreciation of Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  18. Audiovisual Instruction in Pediatric Pharmacy Practice.

    ERIC Educational Resources Information Center

    Mutchie, Kelly D.; And Others

    1981-01-01

    A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)

  19. Audiovisual Facilities in Schools in Japan Today.

    ERIC Educational Resources Information Center

    Ministry of Education, Tokyo (Japan).

    This paper summarizes the findings of a national survey conducted for the Ministry of Education, Science, and Culture in 1986 to determine the kinds of audiovisual equipment available in Japanese schools, together with the rate of diffusion for the various types of equipment, the amount of teacher participation in training for their use, and the…

  20. The Status of Audiovisual Materials in Networking.

    ERIC Educational Resources Information Center

    Coty, Patricia Ann

    1983-01-01

    The role of networks in correcting inadequate bibliographic control for audiovisual materials is discussed, citing efforts of Project Media Base, National Information Center for Educational Media, Consortium of University Film Centers, National Library of Medicine, National Agricultural Library, National Film Board of Canada, and bibliographic…

  1. Reduced audiovisual recalibration in the elderly.

    PubMed

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using the method of constant stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.

  2. Reduced audiovisual recalibration in the elderly

    PubMed Central

    Chan, Yu Man; Pianta, Michael J.; McKendrick, Allison M.

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22–32 years old) and 15 older (64–74 years old) healthy adults using the method of constant stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age. PMID:25221508
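
    The adaptation effect reported in the two records above is computed as the shift in the mean (the point of subjective simultaneity, PSS) of an individually fitted psychometric function. A minimal sketch of that analysis step, fitting a bell-shaped simultaneity curve with SciPy to invented synchrony-judgment proportions (none of the numbers below are from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def synchrony_curve(soa, pss, sigma, amplitude):
    """Bell-shaped simultaneity curve: proportion of 'synchronous' responses
    as a function of audiovisual SOA (ms); its mean is the PSS."""
    return amplitude * np.exp(-(soa - pss) ** 2 / (2.0 * sigma ** 2))

def fit_pss(soa, p_sync):
    popt, _ = curve_fit(synchrony_curve, soa, p_sync,
                        p0=[0.0, 150.0, 0.9],
                        bounds=([-400, 20, 0.1], [400, 600, 1.0]))
    return popt[0]   # fitted PSS in ms

# Invented proportions of 'synchronous' responses, for illustration only.
soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)  # positive = sound lag
after_sync  = np.array([0.05, 0.20, 0.65, 0.90, 0.70, 0.30, 0.10])
after_async = np.array([0.03, 0.10, 0.45, 0.85, 0.85, 0.55, 0.20])  # adapted to sound lag

shift = fit_pss(soa, after_async) - fit_pss(soa, after_sync)
print(f"adaptation effect (PSS shift): {shift:.1f} ms toward the adapted lag")
```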

  3. Order short-term memory is not impaired in dyslexia and does not affect orthographic learning

    PubMed Central

    Staels, Eva; Van den Broeck, Wim

    2014-01-01

    This article reports two studies that investigate short-term memory (STM) deficits in dyslexic children and explores the relationship between STM and reading acquisition. In the first experiment, 36 dyslexic children and 61 control children performed an item STM task and a serial order STM task. The results of this experiment show that dyslexic children do not suffer from a specific serial order STM deficit. In addition, the results demonstrate that phonological processing skills are equally closely related to item STM and serial order STM. However, non-verbal intelligence was more strongly involved in serial order STM than in item STM. In the second experiment, the same two STM tasks were administered and reading acquisition was assessed by measuring orthographic learning in a group of 188 children. The results of this study show that orthographic learning is exclusively related to item STM and not to order STM. It is concluded that serial order STM is not the right place to look for a causal explanation of reading disability, nor for differences in word reading acquisition. PMID:25294996

  4. Order short-term memory is not impaired in dyslexia and does not affect orthographic learning.

    PubMed

    Staels, Eva; Van den Broeck, Wim

    2014-01-01

    This article reports two studies that investigate short-term memory (STM) deficits in dyslexic children and explores the relationship between STM and reading acquisition. In the first experiment, 36 dyslexic children and 61 control children performed an item STM task and a serial order STM task. The results of this experiment show that dyslexic children do not suffer from a specific serial order STM deficit. In addition, the results demonstrate that phonological processing skills are equally closely related to item STM and serial order STM. However, non-verbal intelligence was more strongly involved in serial order STM than in item STM. In the second experiment, the same two STM tasks were administered and reading acquisition was assessed by measuring orthographic learning in a group of 188 children. The results of this study show that orthographic learning is exclusively related to item STM and not to order STM. It is concluded that serial order STM is not the right place to look for a causal explanation of reading disability, nor for differences in word reading acquisition. PMID:25294996

  5. A Survey of British Research in Audio-Visual Aids, Supplement No. 2, 1974. (Including Cumulative Index 1945-1974).

    ERIC Educational Resources Information Center

    Rodwell, Susie, Comp.

    The second supplement to the new (1972) edition of the Survey of Research in Audiovisual Aids carried out in Great Britain covers the year 1974. Ten separate sections cover the areas of projected media, non-projected media, sound media, radio, moving pictures, television, teaching machines and programed learning, computer-assisted instruction,…

  6. No solid empirical evidence for the SOLID (serial order learning impairment) hypothesis of dyslexia.

    PubMed

    Staels, Eva; Van den Broeck, Wim

    2015-05-01

    This article reports on 2 studies that attempted to replicate the findings of a study by Szmalec, Loncke, Page, and Duyck (2011) on Hebb repetition learning in dyslexic individuals, from which these authors concluded that dyslexics suffer from a deficit in long-term learning of serial order information. In 2 experiments, 1 on adolescents (N = 59) and 1 on children (N = 57), no empirical evidence was obtained for impaired Hebb learning in dyslexics, whether the same data-analytical procedure as Szmalec et al. was used or whether some methodological improvements were applied (e.g., using a more sensitive index of Hebb learning, and equating groups on filler performance with state trace analysis). In an additional state trace analysis, aggregating data over participants, it was shown that performance on the repeated Hebb sequences was almost perfectly predictable from performance on the nonrepeated sequences (fillers). The implications of these findings are outlined for the current discussion on the mechanisms for encoding immediate serial recall and long-term sequence learning and for computational models attempting to simulate these mechanisms. PMID:25314161

  7. Information-Driven Active Audio-Visual Source Localization.

    PubMed

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application. PMID:26327619

  8. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619

  9. Disruption of Broca's Area Alters Higher-order Chunking Processing during Perceptual Sequence Learning.

    PubMed

    Alamia, Andrea; Solopchuk, Oleg; D'Ausilio, Alessandro; Van Bever, Violette; Fadiga, Luciano; Olivier, Etienne; Zénon, Alexandre

    2016-03-01

    Because Broca's area is known to be involved in many cognitive functions, including language, music, and action processing, several attempts have been made to propose a unifying theory of its role that emphasizes a possible contribution to syntactic processing. Recently, we have postulated that Broca's area might be involved in higher-order chunk processing during implicit learning of a motor sequence. Chunking is an information-processing mechanism that consists of grouping consecutive items in a sequence and is likely to be involved in all of the aforementioned cognitive processes. Demonstrating a contribution of Broca's area to chunking during the learning of a nonmotor sequence that does not involve language could shed new light on its function. To address this issue, we used offline MRI-guided TMS in healthy volunteers to disrupt the activity of either the posterior part of Broca's area (left Brodmann's area [BA] 44) or a control site just before participants learned a perceptual sequence structured in distinct hierarchical levels. We found that disruption of the left BA 44 increased the processing time of stimuli representing the boundaries of higher-order chunks and modified the chunking strategy. The current results highlight the possible role of the left BA 44 in building up effector-independent representations of higher-order events in structured sequences. This might clarify the contribution of Broca's area in processing hierarchical structures, a key mechanism in many cognitive functions, such as language and composite actions. PMID:26765778

  10. Simulated and Virtual Science Laboratory Experiments: Improving Critical Thinking and Higher-Order Learning Skills

    NASA Astrophysics Data System (ADS)

    Simon, Nicole A.

    Virtual laboratory experiments using interactive computer simulations are not being employed as viable alternatives to the laboratory science curriculum at sufficiently extensive rates within higher education. Rote traditional lab experiments are currently the norm and do not address inquiry, Critical Thinking, and cognition throughout the laboratory experience, nor do they link the experience with educational technologies (Pyatt & Sims, 2007, 2011; Trundle & Bell, 2010). A causal-comparative quantitative study was conducted with 150 learners enrolled at a two-year community college to determine the effects of simulation laboratory experiments on Higher-Order Learning, Critical Thinking Skills, and Cognitive Load. The treatment population used simulated experiments, while the non-treatment sections performed traditional expository experiments. A comparison was made using the Revised Two-Factor Study Process survey, the Motivated Strategies for Learning Questionnaire, and the Scientific Attitude Inventory survey, with a repeated-measures ANOVA comparing the treatment and non-treatment groups. A main effect of simulated laboratory experiments was found for both Higher-Order Learning [F(1, 148) = 30.32, p = .00, η² = .12] and Critical Thinking Skills [F(1, 148) = 14.64, p = .00, η² = .17], such that simulations showed greater increases than traditional experiments. Post-lab treatment group self-reports indicated increased marginal means (+4.86) in Higher-Order Learning and Critical Thinking Skills, compared to the non-treatment group (+4.71). Simulations also improved scientific skills and mastery of basic scientific subject matter. It is recommended that additional research recognize that learners' Critical Thinking Skills change as a result of the different instructional methodologies that occur throughout a semester.

  11. Neural correlates of audiovisual integration of semantic category information.

    PubMed

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-04-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear to which process this audiovisual interaction is related: the processing of acoustic features or the classification of stimuli. To investigate this question, event-related potentials were recorded during a word-categorization task with stimuli presented in the audiovisual modality. In the experiment, the congruency of the visual and auditory stimuli was manipulated. Results showed that within the window of about 180-210 ms post-stimulus, more positive values were elicited by category-congruent audiovisual stimuli than by category-incongruent audiovisual stimuli. This indicates that the late frontal-central audiovisual interaction is related to audiovisual integration of semantic category information.

  12. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed Central

    Lerner, Itamar; Armstrong, Blair C.; Frost, Ram

    2014-01-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding as a core and universal principle of the reading process. Here we argue that such an approach neither captures nor explains cross-linguistic differences in transposed-letter effects. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution after it had learned to process words in the context of different linguistic environments. The results show that in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order is also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521
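
    One concrete way to see why coding of letter order can be flexible is to compare an open-bigram code, under which transposed-letter neighbours stay similar, with a strict slot-based code, which treats a transposition and a double substitution alike. The sketch below is a generic illustration of that contrast, not the connectionist learning model described in the abstract; the example words are invented.

```python
from itertools import combinations

def open_bigrams(word):
    """Set of ordered letter pairs (open bigrams), ignoring adjacency."""
    return {a + b for a, b in combinations(word, 2)}

def bigram_similarity(w1, w2):
    """Jaccard overlap of the two open-bigram sets."""
    b1, b2 = open_bigrams(w1), open_bigrams(w2)
    return len(b1 & b2) / len(b1 | b2)

def slot_similarity(w1, w2):
    """Strict position-specific match proportion (same-length words)."""
    return sum(a == b for a, b in zip(w1, w2)) / max(len(w1), len(w2))

pairs = [("judge", "jugde"),   # transposed-letter neighbour
         ("judge", "jupte")]   # double-substitution control
for w1, w2 in pairs:
    print(w1, w2,
          "open-bigram:", round(bigram_similarity(w1, w2), 2),
          "slot-based:", round(slot_similarity(w1, w2), 2))
```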

  13. Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-10-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.

  14. Word sense disambiguation via high order of learning in complex networks

    NASA Astrophysics Data System (ADS)

    Silva, Thiago C.; Amancio, Diego R.

    2012-06-01

    Complex networks have been employed to model many real systems and serve as a modeling tool in a myriad of applications. In this paper, we apply the framework of complex networks to the problem of supervised classification in the word disambiguation task, which consists of deriving a function from the supervised (or labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. The human (animal) brain, on the other hand, performs both low- and high-level orders of learning and readily identifies patterns according to the semantic meaning of the input data. Here, we apply a hybrid technique which encompasses both types of learning in the field of word sense disambiguation and show that the high-level order of learning can indeed improve the accuracy of the model. This evidence demonstrates that the internal structures formed by the words present patterns that, in general, cannot be correctly unveiled by traditional techniques alone. Finally, we exhibit the behavior of the model for different weights of the low- and high-level classifiers by plotting decision boundaries. This study helps one to better understand the effectiveness of the model.

  15. Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-08-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.

  16. A Step Into Service Learning Is A Step Into Higher Order Thinking

    NASA Astrophysics Data System (ADS)

    O'Connell, S.

    2010-12-01

    Students, especially beginning college students, often consider science courses to be about remembering and regurgitating rather than creative, and of little social relevance. As scientists we know this isn’t true. How do we counteract this sentiment among students? Incorporating service learning, probably better called project learning, into our classes is one way. As one “non-science” student who was taking two science service-learning courses said, “If it’s a service-learning course you know it’s going to be interesting.” Service learning means that some learning takes place in the community. The community component increases understanding of the material being studied, promotes higher order thinking, and provides a benefit for someone else. Students have confirmed that the experience shows them that their knowledge is needed by the community and, for some, reinforces their commitment to continued civic engagement. I’ll give three examples in which the community activity grows in importance in the course and in the community: a single exercise, a small project, and a focus of the class. All of the activities use reflective writing to increase analysis and synthesis. An example of a single exercise could be participating in an event related to the course, for example a zoning board meeting, or a trip to a wastewater treatment plant. Preparation for the trip should include reading. After the event, students synthesize and analyze the activity through a series of questions emphasizing reflection. A two- to four-class assignment might include expanding the single-day activity, or students familiarizing themselves with a course topic, interviewing a person, preparing a podcast of the interview, and reflecting upon the experience. The most comprehensive approach is one where the class focus is on a community project, e.g., Tim Ku’s geochemistry course (this session). Another class that lends itself easily to a comprehensive service-learning approach is Geographic Information Systems (GIS).

  17. Attributes of Quality in Audiovisual Materials for Health Professionals.

    ERIC Educational Resources Information Center

    Suter, Emanuel; Waddell, Wendy H.

    1981-01-01

    Defines attributes of quality in content, instructional design, technical production, and packaging of audiovisual materials used in the education of health professionals. Seven references are listed. (FM)

  18. Associative learning as higher order cognition: Learning in human and nonhuman animals from the perspective of propositional theories and relational frame theory.

    PubMed

    De Houwer, Jan; Hughes, Sean; Barnes-Holmes, Dermot

    2016-08-01

    We aim to provide a new perspective on the old debate about whether evidence for higher order cognition in nonhuman animals can be reinterpreted in terms of associative learning. Our starting point is the idea that associative learning is best thought of as an effect (i.e., the impact of paired events on behavior) rather than a specific mental process (e.g., the formation of associations). This idea allows us to consider (a) propositional theories, according to which associative learning is mediated by higher order mental processes akin to problem solving, and (b) relational frame theory, which allows one to think of seemingly simple associative learning effects as instances of a complex phenomenon known as arbitrarily applicable relational responding. Based on these two theories, we argue that (a) higher order cognition and associative learning are not necessarily mutually exclusive and (b) a more sophisticated conceptualization of higher order cognition is warranted.

  19. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention. PMID:25341648
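    The race model comparison mentioned above is typically run by checking Miller's inequality, which bounds the audiovisual response-time distribution by the sum of the unimodal distributions. The sketch below computes the violation on synthetic reaction times; the distributions and time grid are assumptions, not the study's data.

```python
# Sketch of a race-model-inequality check (Miller, 1982) on synthetic reaction times:
# violation occurs where the audiovisual CDF exceeds the sum of the unimodal CDFs.
import numpy as np

rng = np.random.default_rng(1)
rt_a  = rng.normal(330, 40, 500)   # hypothetical auditory-only RTs (ms)
rt_v  = rng.normal(360, 45, 500)   # hypothetical visual-only RTs (ms)
rt_av = rng.normal(290, 35, 500)   # hypothetical audiovisual RTs (ms)

def ecdf(samples, t):
    """Empirical cumulative distribution of the samples, evaluated at times t."""
    return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

t = np.linspace(200, 500, 301)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)   # race-model upper bound
violation = np.clip(ecdf(rt_av, t) - bound, 0, None)     # positive part only

print(f"max violation: {violation.max():.3f}")
print(f"area of violation (ms): {np.trapz(violation, t):.2f}")
```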

  20. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. Second-language (L2) learners of American Sign Language (ASL) performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. PMID:26740404

  1. Audio-visual speech perception: a developmental ERP investigation.

    PubMed

    Knowland, Victoria C P; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S C

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  2. Extraction of composite visual objects from audiovisual materials

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal

    1999-08-01

    An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine-grained access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of the Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.

  3. Learning bundles of stimuli renders stimulus order as a cue, not a confound.

    PubMed

    Qian, Ting; Aslin, Richard N

    2014-10-01

    The order in which stimuli are presented in an experiment has long been recognized to influence behavior. Previous accounts have often attributed the effect of stimulus order to the mechanisms with which people process information. We propose that stimulus order influences cognition because it is an important cue for learning the underlying structure of a task environment. In particular, stimulus order can be used to infer a "stimulus bundle"--a sequence of consecutive stimuli that share the same underlying latent cluster. We describe a clustering model that successfully explains the perception of streak shooting in basketball games, along with two other cognitive phenomena, as the outcome of finding the statistically optimal bundle representation. We argue that the perspective of viewing stimulus order as a cue may hold the key to explaining behaviors that seemingly deviate from normative theories of cognition and that in task domains where the assumption of stimulus bundles is intuitively appropriate, it can improve the explanatory power of existing models.
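    A minimal numerical illustration of why bundles make stimulus order informative (this is a toy, not the authors' Bayesian clustering model): when like outcomes arrive in runs, the probability of a hit conditioned on the previous outcome departs from the overall hit rate, so an observer tracking order has a genuine cue to exploit.

```python
# Toy illustration (not the paper's clustering model): when outcomes arrive in
# "bundles" of like trials, the previous outcome becomes an informative cue,
# so P(hit | previous hit) exceeds the overall hit rate.
import numpy as np

rng = np.random.default_rng(7)

def bundled_sequence(n, stay_prob=0.9):
    """Generate hits/misses where each trial repeats the previous outcome with stay_prob."""
    seq = [int(rng.integers(0, 2))]
    for _ in range(n - 1):
        seq.append(seq[-1] if rng.random() < stay_prob else 1 - seq[-1])
    return np.array(seq)

shots = bundled_sequence(10_000)
overall = shots.mean()
after_hit = shots[1:][shots[:-1] == 1].mean()
print(f"P(hit)                = {overall:.3f}")
print(f"P(hit | previous hit) = {after_hit:.3f}")   # clearly higher under bundling
```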

  4. Perceptual and Neuronal Boundary Learned from Higher-Order Stimulus Probabilities

    PubMed Central

    Köver, Hania; Gill, Kirt; Tseng, Yi-Ting L.; Bao, Shaowen

    2013-01-01

    During an early epoch of development, the brain is highly adaptive to the stimulus environment. Exposing young animals to a particular tone, for example, leads to an enlarged representation of that tone in primary auditory cortex. While the neural effects of simple tonal environments are well characterized, the principles that guide plasticity in more complex acoustic environments remain unclear. In addition, very little is known about the perceptual consequences of early experience-induced plasticity. To address these questions, we reared juvenile rats in complex multitone environments that differed in terms of the higher-order conditional probabilities between sounds. We found that the development of primary cortical acoustic representations, as well as frequency discrimination ability in adult animals, were shaped by the higher-order stimulus statistics of the early acoustic environment. Our results suggest that early experience-dependent cortical reorganization may mediate perceptual changes through statistical learning of the sensory input. PMID:23426696
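    The phrase "higher-order conditional probabilities" refers to statistics such as the probability of a tone given the two tones that preceded it. The sketch below tabulates such second-order conditional probabilities from a symbolic tone sequence; the tone labels and the generating rule are invented for illustration.

```python
# Sketch: tabulating second-order conditional probabilities P(tone_t | tone_{t-2}, tone_{t-1})
# from a symbolic tone sequence; the tones and generating rule here are illustrative only.
from collections import Counter, defaultdict
import random

random.seed(0)
tones = ["A", "B", "C", "D"]
# Hypothetical environment: "C" tends to follow the pair (A, B); other transitions are random.
seq = ["A", "B"]
for _ in range(5000):
    if seq[-2:] == ["A", "B"] and random.random() < 0.8:
        seq.append("C")
    else:
        seq.append(random.choice(tones))

triples = Counter(zip(seq, seq[1:], seq[2:]))
totals = Counter()
for (x, y, _), n in triples.items():
    totals[(x, y)] += n

cond = defaultdict(dict)
for (x, y, z), n in triples.items():
    cond[(x, y)][z] = n / totals[(x, y)]

print({z: round(p, 2) for z, p in sorted(cond[("A", "B")].items())})  # P(tone | A, B)
```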

  5. The impact of constructivist teaching strategies on the acquisition of higher order cognition and learning

    NASA Astrophysics Data System (ADS)

    Merrill, Alison Saricks

    The purpose of this quasi-experimental quantitative mixed design study was to compare the effectiveness of brain-based teaching strategies versus a traditional lecture format in the acquisition of higher order cognition as determined by test scores. A second purpose was to elicit student feedback about the two teaching approaches. The design was a 2 x 2 x 2 factorial design study with repeated measures on the last factor. The independent variables were type of student, teaching method, and a within group change over time. Dependent variables were a between group comparison of pre-test, post-test gain scores and a within and between group comparison of course examination scores. A convenience sample of students enrolled in medical-surgical nursing was used. One group (n=36) was made up of traditional students and the other group (n=36) consisted of second-degree students. Four learning units were included in this study. Pre- and post-tests were given on the first two units. Course examination scores from all four units were compared. In one cohort, two of the units were taught via lecture format and two using constructivist activities. These methods were reversed for the other cohort. The conceptual basis for this study derives from neuroscience and cognitive psychology. Learning is defined as the growth of new dendrites. Cognitive psychologists view learning as a constructive activity in which new knowledge is built on an internal foundation of existing knowledge. Constructivist teaching strategies are designed to stimulate the brain's natural learning ability. There was a statistically significant difference based on type of teaching strategy (t = -2.078, df = 270, p = .039, d = .25), with higher mean scores on the examinations covering brain-based learning units. There was no statistical significance based on type of student. Qualitative data collection was conducted in an on-line forum at the end of the semester. Students had overall positive responses about the two teaching approaches.

  6. Seeing the unseen: Second-order correlation learning in 7- to 11-month-olds.

    PubMed

    Yermolayeva, Yevdokiya; Rakison, David H

    2016-07-01

    We present four experiments with the object-examining procedure that investigated 7-, 9-, and 11-month-olds' ability to associate two object features that were never presented simultaneously. In each experiment, infants were familiarized with a number of 3D objects that incorporated different correlations among the features of those objects and the body of the objects (e.g., Part A and Body 1, and Part B and Body 1). Infants were then tested with objects with a novel body that either possessed both of the parts that were independently correlated with one body during familiarization (e.g., Part A and B on Body 3) or that were attached to two different bodies during familiarization. The experiments demonstrate that infants as young as 7 months of age are capable of this kind of second-order correlation learning. Furthermore, by at least 11 months of age infants develop a representation for the object that incorporates both of the features they experienced during training. We suggest that the ability to learn second-order correlations represents a powerful but as yet largely unexplored process for generalization in the first years of life.

  7. Seeing the unseen: Second-order correlation learning in 7- to 11-month-olds.

    PubMed

    Yermolayeva, Yevdokiya; Rakison, David H

    2016-07-01

    We present four experiments with the object-examining procedure that investigated 7-, 9-, and 11-month-olds' ability to associate two object features that were never presented simultaneously. In each experiment, infants were familiarized with a number of 3D objects that incorporated different correlations among the features of those objects and the body of the objects (e.g., Part A and Body 1, and Part B and Body 1). Infants were then tested with objects with a novel body that either possessed both of the parts that were independently correlated with one body during familiarization (e.g., Part A and B on Body 3) or that were attached to two different bodies during familiarization. The experiments demonstrate that infants as young as 7 months of age are capable of this kind of second-order correlation learning. Furthermore, by at least 11 months of age infants develop a representation for the object that incorporates both of the features they experienced during training. We suggest that the ability to learn second-order correlations represents a powerful but as yet largely unexplored process for generalization in the first years of life. PMID:27038738

  8. An Investigation of Four Hypotheses Concerning the Order by Which 4-Year-Old Children Learn the Alphabet Letters

    ERIC Educational Resources Information Center

    Justice, Laura M.; Pence, Khara; Bowles, Ryan B.; Wiggins, Alice

    2006-01-01

    This study tested four complementary hypotheses to characterize intrinsic and extrinsic influences on the order with which preschool children learn the names of individual alphabet letters. The hypotheses included: (a) "own-name advantage," which states that children learn those letters earlier which occur in their own names, (b) the "letter-order…

  9. Infant Perception of Audio-Visual Speech Synchrony

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2010-01-01

    Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…

  10. Audiovisual Integration in High Functioning Adults with Autism

    ERIC Educational Resources Information Center

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  11. Visual anticipatory information modulates multisensory interactions of artificial audiovisual stimuli.

    PubMed

    Vroomen, Jean; Stekelenburg, Jeroen J

    2010-07-01

    The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. In Experiment 1, a pure tone synchronized with a deformation of a rectangle induced a smaller auditory N1 than auditory-only presentations if the temporal occurrence of this audiovisual event was made predictable by two moving disks that touched the rectangle. Local autoregressive average source estimation indicated that this audiovisual interaction may be related to integrative processing in auditory areas. When the moving disks did not precede the audiovisual stimulus--making the onset unpredictable--there was no N1 reduction. In Experiment 2, the predictability of the leading visual signal was manipulated by introducing a temporal asynchrony between the audiovisual event and the collision of moving disks. Audiovisual events occurred either at the moment, before (too "early"), or after (too "late") the disks collided on the rectangle. When asynchronies varied from trial to trial--rendering the moving disks unreliable temporal predictors of the audiovisual event--the N1 reduction was abolished. These results demonstrate that the N1 suppression is induced by visual information that both precedes and reliably predicts audiovisual onset, without a necessary link to human action-related neural mechanisms.

  12. Audiovisual Processing in Children with and without Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Mongillo, Elizabeth A.; Irwin, Julia R.; Whalen, D. H.; Klaiman, Cheryl; Carter, Alice S.; Schultz, Robert T.

    2008-01-01

    Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces…

  13. Knowledge Generated by Audiovisual Narrative Action Research Loops

    ERIC Educational Resources Information Center

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of getting to…

  14. Neural Correlates of Audiovisual Integration of Semantic Category Information

    ERIC Educational Resources Information Center

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period about 150-220 ms post-stimulus. However, it is unclear to which process is this audiovisual interaction related: to processing of acoustical features or to classification of stimuli? To investigate this question, event-related potentials were recorded…

  15. Trigger Videos on the Web: Impact of Audiovisual Design

    ERIC Educational Resources Information Center

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  16. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  17. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    ERIC Educational Resources Information Center

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  18. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    ERIC Educational Resources Information Center

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  19. Directory of Head Start Audiovisual Professional Training Materials.

    ERIC Educational Resources Information Center

    Wilds, Thomas, Comp.

    The directory contains over 265 annotated listings of audiovisual professional training materials related to the education and care of preschool handicapped children. Noted in the introduction are sources of the contents, such as lists of audiovisual materials disseminated by a hearing/speech center, and instructions for use of the directory.…

  20. The Natural Statistics of Audiovisual Speech

    PubMed Central

    Chandrasekaran, Chandramouli; Trubanova, Andrea; Stillittano, Sébastien; Caplier, Alice; Ghazanfar, Asif A.

    2009-01-01

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver. PMID:19609344
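    A sketch of the kind of correspondence analysis described above: correlate a mouth-opening-area signal with an acoustic amplitude envelope and locate the lag of maximal correlation. The signals, sampling rate, and the 150 ms offset below are synthetic assumptions; with real recordings the envelope would typically be extracted first (e.g., via a Hilbert transform of the audio).

```python
# Sketch: correlate a (synthetic) mouth-opening signal with a (synthetic) acoustic
# envelope and find the lag of maximal correlation. All signal parameters are invented.
import numpy as np

fs = 100                                   # samples per second after resampling both signals
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)

mouth = 1 + 0.5 * np.sin(2 * np.pi * 4 * t) + 0.1 * rng.standard_normal(t.size)  # ~4 Hz opening/closing
envelope = np.interp(t, t + 0.15, mouth) + 0.1 * rng.standard_normal(t.size)     # trails mouth by ~150 ms

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); positive lag means y follows x."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

lags = np.arange(-50, 51)                                  # -500 ms .. +500 ms in samples
xcorr = np.array([lagged_corr(mouth, envelope, int(k)) for k in lags])
best = int(lags[int(np.argmax(xcorr))])
print(f"zero-lag r = {lagged_corr(mouth, envelope, 0):.2f}; "
      f"peak r = {xcorr.max():.2f} at lag {best * 10:+d} ms (audio after mouth)")
```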

  1. Audiovisual Media in Japan Today. The Ministry of Education's 1986 Survey on Audiovisual Media. AVE in Japan No. 26.

    ERIC Educational Resources Information Center

    Japan Audio-Visual Education Association, Tokyo.

    Based on the Ministry of Education, Science and Culture's 1986 survey of "Audiovisual Facilities in Schools and Social Education Institutions," this summary of the current status of the diffusion and utilization of audiovisual materials and equipment in Japan pays particular attention to public and private schools. Social education institutions…

  2. Infant perception of audio-visual speech synchrony in familiar and unfamiliar fluent speech.

    PubMed

    Pons, Ferran; Lewkowicz, David J

    2014-06-01

    We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we exposed a group of monolingual Spanish- and Catalan-learning 8-month-old infants to a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video where the audio stream now preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. Results indicated that in both experiments, infants detected a 666 and a 500 ms asynchrony. That is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared to previous results from infant studies with isolated audiovisual syllables, these results show that infants are more sensitive to A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infant response to linguistic input at this age.

  3. Cross-modal matching of audio-visual German and French fluent speech in infancy.

    PubMed

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life.

  4. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Section 3015.200, Agriculture Regulations of the Department of Agriculture (Continued), Miscellaneous: § 3015.200 Acknowledgement of support on publications and audiovisuals. (a) Definitions. Appendix A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b)...

  5. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
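    To give a feel for how a learning effect enters such a lot-sizing problem, the sketch below uses a Wright-style learning curve for unit production time and numerically searches for the lot size that minimizes a simple setup-plus-holding cost. This is a generic illustration under invented parameters, not the article's actual cost functions or solution procedures.

```python
# Illustrative only (not the article's exact model): unit production time follows a
# Wright-style learning curve, and we numerically search for the lot size that
# minimizes a simple setup + holding cost per year. All parameter values are invented.
import numpy as np

demand = 10_000            # units demanded per year
setup_cost = 50.0          # cost per production run
holding_cost = 2.0         # cost per unit held per year
hours_per_year = 2_000     # available production hours per year
t1, learning_rate = 0.5, 0.8          # first-unit time (hours) and learning rate
b = -np.log2(learning_rate)           # Wright's learning-curve exponent

def lot_production_time(q):
    """Total hours to produce a lot of q units when t_n = t1 * n**(-b)."""
    n = np.arange(1, int(q) + 1)
    return float(np.sum(t1 * n ** (-b)))

def annual_cost(q):
    """Setup cost + average-inventory holding cost per year for lot size q (EPQ-style)."""
    prod_rate = q / lot_production_time(q) * hours_per_year   # effective units per year
    avg_inventory = (q / 2) * (1 - demand / prod_rate)        # standard EPQ average inventory
    return (demand / q) * setup_cost + holding_cost * max(avg_inventory, 0.0)

lots = np.arange(200, 3001, 25)
costs = [annual_cost(q) for q in lots]
best = lots[int(np.argmin(costs))]
print(f"approximate optimal lot size: {best} units, annual cost of about {min(costs):.2f}")
```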

  6. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    The low resolution (LR) images obtained in real applications are always noisy, but many existing image magnification algorithms cannot produce good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm takes advantage of both regularization-based and learning-based methods. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image and, at the same time, suppress the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries that are dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm also provides better visual quality on natural LR images.
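    The sketch below illustrates only the first step of such a pipeline, magnifying a noisy low-resolution image and then applying TV regularization, using off-the-shelf scikit-image and SciPy routines on a synthetic example rather than the authors' implementation; the second, dictionary-learning step is omitted.

```python
# Sketch of the first step only (TV-regularized magnification of a noisy LR image),
# using standard skimage/scipy routines rather than the authors' implementation.
import numpy as np
from scipy import ndimage
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
hr = img_as_float(data.camera())                      # stand-in high-resolution reference
lr = ndimage.zoom(hr, 0.25, order=1)                  # simulate a low-resolution capture
lr_noisy = np.clip(lr + 0.05 * rng.standard_normal(lr.shape), 0, 1)

upscaled = ndimage.zoom(lr_noisy, 4, order=3)            # bicubic-style magnification
tv_result = denoise_tv_chambolle(upscaled, weight=0.08)  # TV regularization suppresses amplified noise

def psnr(est, ref):
    """Peak signal-to-noise ratio for images scaled to [0, 1]."""
    mse = np.mean((est[:ref.shape[0], :ref.shape[1]] - ref) ** 2)
    return -10 * np.log10(mse)

print(f"PSNR, plain upscale:    {psnr(upscaled, hr):.2f} dB")
print(f"PSNR, TV-regularized:   {psnr(tv_result, hr):.2f} dB")
```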

  7. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for applications such as content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. The traditional concatenating schema, which links the features of different views into a single long vector, is inappropriate because each view has its own specific statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace the pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective functions of HD-MSL and obtain the view coefficients and classification scores simultaneously. Experiments on two real world datasets demonstrate the effectiveness of HD-MSL in image classification.
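    As a much-simplified illustration of why per-view weights matter when fusing multiview distances (this is a toy, not the HD-MSL algorithm), the sketch below combines two view-specific distance matrices with a single weight and picks the weight that maximizes nearest-neighbour accuracy on a validation split, using synthetic data in which only one view is informative.

```python
# Toy illustration of weighting multiple views (NOT the HD-MSL algorithm itself):
# fuse per-view distance matrices with a weight and pick the weight that gives the
# best 1-nearest-neighbour accuracy on a validation split.
import numpy as np

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)
view1 = labels[:, None] * 2.0 + rng.standard_normal((n, 5))   # informative view
view2 = rng.standard_normal((n, 5))                           # uninformative view

def dist_matrix(x):
    """Pairwise Euclidean distances between the rows of x."""
    return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

d1, d2 = dist_matrix(view1), dist_matrix(view2)
train, val = np.arange(0, 150), np.arange(150, 200)

def one_nn_accuracy(alpha):
    """1-NN accuracy on the validation split when D = alpha*D1 + (1-alpha)*D2."""
    fused = alpha * d1 + (1 - alpha) * d2
    nearest = train[np.argmin(fused[np.ix_(val, train)], axis=1)]
    return float(np.mean(labels[nearest] == labels[val]))

for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"alpha = {alpha:.2f}  validation accuracy = {one_nn_accuracy(alpha):.2f}")
```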

  8. Effects of Higher-Order Cognitive Strategy Training on Gist-Reasoning and Fact-Learning in Adolescents

    PubMed Central

    Gamino, Jacquelyn F.; Chapman, Sandra B.; Hull, Elizabeth L.; Lyon, G. Reid

    2010-01-01

    Improving the reasoning skills of adolescents across the United States has become a major concern for educators and scientists who are dedicated to identifying evidence-based protocols to improve student outcome. This small sample randomized, control pilot study sought to determine the efficacy of higher-order cognitive training on gist-reasoning and fact-learning in an inner-city public middle school. The study compared gist-reasoning and fact-learning performances after training in a smaller sample when tested in Spanish, many of the students’ native language, versus English. The 54 eighth grade students who participated in this pilot study were enrolled in an urban middle school, predominantly from lower socio-economic status families, and were primarily of minority descent. The students were randomized into one of three groups, one that learned cognitive strategies promoting abstraction of meaning, a group that learned rote memory strategies, or a control group to ascertain the impact of each program on gist-reasoning and fact-learning from text-based information. We found that the students who had cognitive strategy instruction that entailed abstraction of meaning significantly improved their gist-reasoning and fact-learning ability. The students who learned rote memory strategies significantly improved their fact-learning scores from a text but not gist-reasoning ability. The control group showed no significant change in either gist-reasoning or fact-learning ability. A trend toward significant improvement in overall reading scores for the group that learned to abstract meaning as well as a significant correlation between gist-reasoning ability and critical thinking on a state-mandated standardized reading test was also found. There were no significant differences between English and Spanish performance of gist-reasoning and fact-learning. Our findings suggest that teaching higher-order cognitive strategies facilitates gist-reasoning ability and student

  9. Effects of higher-order cognitive strategy training on gist-reasoning and fact-learning in adolescents.

    PubMed

    Gamino, Jacquelyn F; Chapman, Sandra B; Hull, Elizabeth L; Lyon, G Reid

    2010-01-01

    Improving the reasoning skills of adolescents across the United States has become a major concern for educators and scientists who are dedicated to identifying evidence-based protocols to improve student outcome. This small sample randomized, control pilot study sought to determine the efficacy of higher-order cognitive training on gist-reasoning and fact-learning in an inner-city public middle school. The study compared gist-reasoning and fact-learning performances after training in a smaller sample when tested in Spanish, many of the students' native language, versus English. The 54 eighth grade students who participated in this pilot study were enrolled in an urban middle school, predominantly from lower socio-economic status families, and were primarily of minority descent. The students were randomized into one of three groups, one that learned cognitive strategies promoting abstraction of meaning, a group that learned rote memory strategies, or a control group to ascertain the impact of each program on gist-reasoning and fact-learning from text-based information. We found that the students who had cognitive strategy instruction that entailed abstraction of meaning significantly improved their gist-reasoning and fact-learning ability. The students who learned rote memory strategies significantly improved their fact-learning scores from a text but not gist-reasoning ability. The control group showed no significant change in either gist-reasoning or fact-learning ability. A trend toward significant improvement in overall reading scores for the group that learned to abstract meaning as well as a significant correlation between gist-reasoning ability and critical thinking on a state-mandated standardized reading test was also found. There were no significant differences between English and Spanish performance of gist-reasoning and fact-learning. Our findings suggest that teaching higher-order cognitive strategies facilitates gist-reasoning ability and student

  10. Dissociating verbal and nonverbal audiovisual object processing.

    PubMed

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  11. Categorization of Natural Dynamic Audiovisual Scenes

    PubMed Central

    Rummukainen, Olli; Radun, Jenni; Virtanen, Toni; Pulkki, Ville

    2014-01-01

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database. PMID:24788808
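    A two-dimensional perceptual map of the kind mentioned above can be derived from pairwise dissimilarities with multidimensional scaling. The sketch below does this for a handful of invented scene labels and an invented dissimilarity matrix; it only illustrates the technique, not the study's stimuli or data.

```python
# Sketch of deriving a two-dimensional perceptual map from pairwise scene
# dissimilarities with multidimensional scaling; the scene names and the
# dissimilarity matrix below are invented for illustration.
import numpy as np
from sklearn.manifold import MDS

scenes = ["busy street", "quiet park", "metro station", "forest path", "market square"]
# Hypothetical dissimilarities (0 = identical, 1 = maximally different), symmetric, zero diagonal.
d = np.array([
    [0.0, 0.9, 0.3, 0.95, 0.4],
    [0.9, 0.0, 0.8, 0.2, 0.7],
    [0.3, 0.8, 0.0, 0.9, 0.5],
    [0.95, 0.2, 0.9, 0.0, 0.8],
    [0.4, 0.7, 0.5, 0.8, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)
for name, (x, y) in zip(scenes, coords):
    print(f"{name:14s} ({x:+.2f}, {y:+.2f})")
```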

  12. Community Organizational Learning: Case Studies Illustrating a Three-Dimensional Model of Levels and Orders of Change

    ERIC Educational Resources Information Center

    Perkins, Douglas D.; Bess, Kimberly D.; Cooper, Daniel G.; Jones, Diana L.; Armstead, Theresa; Speer, Paul W.

    2007-01-01

    We present a three-dimensional cube framework to help community organizational researchers and administrators think about an organization's learning and empowerment-related structures and processes in terms of first-order (incremental or ameliorative) and second-order (transformative) change at the individual, organizational, and community levels.…

  13. GRAPE - GIS Repetition Using Audio-Visual Repetition Units and Its Learning Effectiveness

    NASA Astrophysics Data System (ADS)

    Niederhuber, M.; Brugger, S.

    2011-09-01

    A new audio-visual learning medium has been developed at the Department of Environmental Sciences at ETH Zurich (Switzerland) for use in geographical information sciences (GIS) courses. This new medium, presented in the form of Repetition Units, allows students to review and consolidate the most important learning concepts on an individual basis. The new material consists of: a) a short enhanced podcast (a recorded and spoken slide show) with a maximum duration of 5 minutes, which focuses on only one important aspect of a lecture's theme; b) one or two relevant exercises, covering different cognitive levels of learning, with a maximum duration of 10 minutes; and c) solutions for the exercises. During a pilot phase in 2010, six Repetition Units were produced by the lecturers. Twenty more Repetition Units will be produced by our students during the fall semesters of 2011 and 2012. The project is accompanied by a 5-year study (2009-2013) that investigates learning success with the new material, focussing on the question of whether or not the new material helps to consolidate and refresh basic GIS knowledge. It will be analysed on the basis of longitudinal studies. Initial results indicate that the new medium helps to refresh knowledge, as the test groups scored higher than the control group. These results are encouraging and suggest that the new material, with its combination of short audio-visual podcasts and relevant exercises, helps to consolidate students' knowledge.

  14. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  15. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group. PMID:25324091
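    The fuzzy-logical model of perception referred to here combines the unimodal degrees of support multiplicatively. The sketch below implements that standard combination rule with invented support values, to show how strong visual support can shift an ambiguous auditory percept; the numbers are not taken from the study.

```python
# The fuzzy-logical model of perception combines unimodal support values multiplicatively;
# a sketch of the standard combination rule with invented support values for /ba/ vs /da/.
def flmp(audio_support, visual_support):
    """P(response = /da/) given unimodal degrees of support for /da/ (each in [0, 1])."""
    a, v = audio_support, visual_support
    return (a * v) / (a * v + (1 - a) * (1 - v))

# Hypothetical example: ambiguous audio (0.55) paired with clear visual /da/ lip movements (0.9).
print(f"audio only (neutral visual support): {flmp(0.55, 0.5):.2f}")
print(f"audiovisual (strong visual support): {flmp(0.55, 0.9):.2f}")  # visual input shifts the percept
```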

  16. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan

    PubMed Central

    De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T.

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918

  17. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.

  18. Virtual Laboratories to Achieve Higher-Order Learning in Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Ward, A. S.; Gooseff, M. N.; Toto, R.

    2009-12-01

    Bloom’s higher-order cognitive skills (analysis, evaluation, and synthesis) are recognized as necessary in engineering education, yet these are difficult to achieve in traditional lecture formats. Laboratory components supplement traditional lectures in an effort to emphasize active learning and provide higher-order challenges, but these laboratories are often subject to the constraints of (a) increasing student enrollment, (b) limited funding for operational, maintenance, and instructional expenses and (c) increasing demands on undergraduate student credit requirements. Here, we present results from a pilot project implementing virtual (or online) laboratory experiences as an alternative to a traditional laboratory experience in Fluid Mechanics, a required third year course. Students and faculty were surveyed to identify the topics that were most difficult, and virtual laboratory and design components developed to supplement lecture material. Each laboratory includes a traditional lab component, requiring student analysis and evaluation. The lab concludes with a design exercise, which imposes additional problem constraints and allows students to apply their laboratory observations to a real-world situation.

  19. Audiovisual Interval Size Estimation Is Associated with Early Musical Training

    PubMed Central

    Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134
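
    The partialing-out step described above can be illustrated with a small, hypothetical sketch: correlate two measures after regressing a covariate (standing in for nonverbal IQ) out of both. The variable names and data are invented for the example.

      # Hypothetical sketch of a partial correlation: remove the linear influence
      # of a covariate (standing in for nonverbal IQ) from both measures, then
      # correlate the residuals. Variable names and data are invented.
      import numpy as np

      def partial_correlation(x, y, covariate):
          x, y, covariate = (np.asarray(v, dtype=float) for v in (x, y, covariate))
          def residuals(values):
              slope, intercept = np.polyfit(covariate, values, 1)
              return values - (slope * covariate + intercept)
          return np.corrcoef(residuals(x), residuals(y))[0, 1]

      rng = np.random.default_rng(0)
      nonverbal_iq = rng.normal(100, 15, 57)
      pitch_threshold = 0.5 * nonverbal_iq + rng.normal(0, 5, 57)
      incongruent_audio_score = 0.4 * nonverbal_iq + rng.normal(0, 5, 57)
      print(partial_correlation(pitch_threshold, incongruent_audio_score, nonverbal_iq))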

  20. The Effect of Number and Presentation Order of High-Constraint Sentences on Second Language Word Learning

    PubMed Central

    Ma, Tengfei; Chen, Ran; Dunlap, Susan; Chen, Baoguo

    2016-01-01

    This paper presents the results of an experiment that investigated the effects of number and presentation order of high-constraint sentences on semantic processing of unknown second language (L2) words (pseudowords) through reading. All participants were Chinese native speakers who learned English as a foreign language. In the experiment, sentence constraint and order of different constraint sentences were manipulated in English sentences, as well as L2 proficiency level of participants. We found that the number of high-constraint sentences was supportive for L2 word learning except in the condition in which high-constraint exposure was presented first. Moreover, when the number of high-constraint sentences was the same, learning was significantly better when the first exposure was a high-constraint exposure. And no proficiency level effects were found. Our results provided direct evidence that L2 word learning benefited from high quality language input and first presentations of high quality language input. PMID:27695432

  2. A Comparative Study of the Effects of Two and Three Dimensional Representation in an Audiovisual Instructional Material on Performance of a Psychomotor Task Involving Depth Perception.

    ERIC Educational Resources Information Center

    Berthelot, Serge; Stolovitch, Harold D.

    1980-01-01

    Focuses on two questions: Does the use of three-dimensional representation in audiovisual instructional materials increase learning of a psychomotor task in which depth perception is critical as compared with two-dimensional representations? Is there a relationship between the impact of three-dimensional representation and the level of complexity…

  3. The Role of Visual Learning in Improving Students' High-Order Thinking Skills

    ERIC Educational Resources Information Center

    Raiyn, Jamal

    2016-01-01

    Various concepts have been introduced to improve students' analytical thinking skills based on problem-based learning (PBL). This paper introduces a new concept to increase students' analytical thinking skills based on a visual learning strategy. Such a strategy has three fundamental components: a teacher, a student, and a learning process. The…

  4. Automatic audiovisual integration in speech perception.

    PubMed

    Gentilucci, Maurizio; Cattaneo, Luigi

    2005-11-01

    Two experiments aimed to determine whether features of both the visual and acoustical inputs are always merged into the perceived representation of speech and whether this audiovisual integration is based on either cross-modal binding functions or on imitation. In a McGurk paradigm, observers were required to repeat aloud a string of phonemes uttered by an actor (acoustical presentation of phonemic string) whose mouth, in contrast, mimicked pronunciation of a different string (visual presentation). In a control experiment participants read the same printed strings of letters. This condition aimed to analyze the pattern of voice and the lip kinematics controlling for imitation. In the control experiment and in the congruent audiovisual presentation, i.e. when the articulation mouth gestures were congruent with the emission of the string of phones, the voice spectrum and the lip kinematics varied according to the pronounced strings of phonemes. In the McGurk paradigm the participants were unaware of the incongruence between visual and acoustical stimuli. The acoustical analysis of the participants' spoken responses showed three distinct patterns: the fusion of the two stimuli (the McGurk effect), repetition of the acoustically presented string of phonemes, and, less frequently, of the string of phonemes corresponding to the mouth gestures mimicked by the actor. However, the analysis of the latter two responses showed that the formant 2 of the participants' voice spectra always differed from the value recorded in the congruent audiovisual presentation. It approached the value of the formant 2 of the string of phonemes presented in the other modality, which was apparently ignored. The lip kinematics of the participants repeating the string of phonemes acoustically presented were influenced by the observation of the lip movements mimicked by the actor, but only when pronouncing a labial consonant. The data are discussed in favor of the hypothesis that features of both

  5. Representation-based user interfaces for the audiovisual library of the year 2000

    NASA Astrophysics Data System (ADS)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues that will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of some audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators of access to existing contents, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the documents' contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of the program aiming to develop, for image and sound documents, an experimental counterpart to the digitized text reading workstation of this library.
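
    One classic route to the structural information mentioned above is automatic shot-boundary detection. The sketch below is an illustrative assumption rather than the system described: it flags boundaries wherever the frame-to-frame histogram difference exceeds an arbitrary threshold, with random arrays standing in for decoded frames.

      # Illustrative sketch: flag likely shot boundaries where the normalized
      # luminance-histogram difference between consecutive frames is large.
      # The "frames" are random arrays standing in for decoded video frames.
      import numpy as np

      rng = np.random.default_rng(1)
      frames = [rng.integers(0, 256, size=(120, 160)) for _ in range(10)]

      def normalized_histogram(frame, bins=32):
          counts, _ = np.histogram(frame, bins=bins, range=(0, 256))
          return counts / counts.sum()

      threshold = 0.5
      boundaries = [i for i in range(1, len(frames))
                    if np.abs(normalized_histogram(frames[i]) -
                              normalized_histogram(frames[i - 1])).sum() > threshold]
      print(boundaries)  # indices of frames that would start a new segment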

  6. Audiovisual associations alter the perception of low-level visual motion.

    PubMed

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive nature of vision and is affected by both immediate pattern of sensory inputs and prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception are dependent on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to high-level attention based motion system and early-level visual motion processing has some potential role. PMID:25873869

  8. Teleconferences and Audiovisual Materials in Earth Science Education

    NASA Astrophysics Data System (ADS)

    Cortina, L. M.

    2007-05-01

    Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoacán 04510, Mexico. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. However, in some cases these resources go largely unused, and a number of factors may be cited, such as logistic problems, restricted internet and telecommunication access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as they would in classrooms. Courses delivered by teleconference require student and teacher effort without physical contact, but both have access to multimedia materials that support the presentation. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of natural phenomena integral to the Earth sciences. Cooperation with international partners providing access to new materials, experiences, and field practices will greatly add to our efforts. We will present specific examples of our experiences at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  9. Sight and sound out of synch: fragmentation and renormalisation of audiovisual integration and subjective timing.

    PubMed

    Freeman, Elliot D; Ipser, Alberta; Palmbaha, Austra; Paunoiu, Diana; Brown, Peter; Lambert, Christian; Leff, Alex; Driver, Jon

    2013-01-01

    The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream-Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing
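
    The renormalisation account proposed above can be illustrated with toy numbers (invented, not the case data): if each mechanism's timing is read out relative to the average across mechanisms, two measures can disagree sharply while their average remains near veridical.

      # Toy numerical sketch of timing renormalisation across mechanisms.
      # Bias values are invented (positive = auditory lag needed for synchrony).
      mechanism_bias_ms = {"temporal_order_judgement": 200.0, "mcgurk_integration": -200.0}

      average_bias = sum(mechanism_bias_ms.values()) / len(mechanism_bias_ms)
      relative_timing = {name: bias - average_bias for name, bias in mechanism_bias_ms.items()}

      print(f"average bias across mechanisms: {average_bias:+.0f} ms (near veridical)")
      print(relative_timing)  # the two measures pull in opposite directions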

  11. Audiovisual signal compression: the 64/P codecs

    NASA Astrophysics Data System (ADS)

    Jayant, Nikil S.

    1996-02-01

    Video codecs operating at integral multiples of 64 kbps are well-known in visual communications technology as p * 64 systems (p equals 1 to 24). Originally developed as a class of ITU standards, these codecs have served as core technology for videoconferencing, and they have also influenced the MPEG standards for addressable video. Video compression in the above systems is provided by motion compensation followed by discrete cosine transform -- quantization of the residual signal. Notwithstanding the promise of higher bit rates in emerging generations of networks and storage devices, there is a continuing need for facile audiovisual communications over voice band and wireless modems. Consequently, video compression at bit rates lower than 64 kbps is a widely-sought capability. In particular, video codecs operating at rates in the neighborhood of 64, 32, 16, and 8 kbps seem to have great practical value, being matched respectively to the transmission capacities of basic rate ISDN (64 kbps), and voiceband modems that represent high (32 kbps), medium (16 kbps) and low-end (8 kbps) grades in current modem technology. The purpose of this talk is to describe the state of video technology at these transmission rates, without getting too literal about the specific speeds mentioned above. In other words, we expect codecs designed for non-submultiples of 64 kbps, such as 56 kbps or 19.2 kbps, as well as for sub-multiples of 64 kbps, depending on varying constraints on modem rate and the transmission rate needed for the voice-coding part of the audiovisual communications link. The MPEG-4 video standards process is a natural platform on which to examine current capabilities in sub-ISDN rate video coding, and we shall draw appropriately from this process in describing video codec performance. Inherent in this summary is a reinforcement of motion compensation and DCT as viable building blocks of video compression systems, although there is a need for improving signal quality.
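
    As a generic illustration of the motion-compensation-plus-DCT pipeline mentioned above (a toy example, not any particular p * 64 or MPEG codec), the sketch below transforms an invented residual block, quantizes the coefficients, and reconstructs the block.

      # Toy sketch of DCT-based residual coding: 2-D DCT of an invented residual
      # block, uniform quantization (the lossy step), then inverse transform.
      import numpy as np
      from scipy.fft import dctn, idctn

      residual_block = np.random.default_rng(2).integers(-20, 20, size=(8, 8)).astype(float)

      coefficients = dctn(residual_block, type=2, norm="ortho")
      quantization_step = 8.0
      quantized = np.round(coefficients / quantization_step)
      reconstructed = idctn(quantized * quantization_step, type=2, norm="ortho")

      print(f"nonzero coefficients kept: {int((quantized != 0).sum())} of 64")
      print(f"mean absolute reconstruction error: {np.abs(reconstructed - residual_block).mean():.2f}")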

  12. An audiovisual database of English speech sounds

    NASA Astrophysics Data System (ADS)

    Frisch, Stefan A.; Nikjeh, Dee Adams

    2003-10-01

    A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word-initial, word-medial, and word-final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45 deg angle off of center, and ultrasound video of the tongue in the mid-sagittal plane. The files contained in the database are suitable for examination by the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.

  13. Behavioral Science Design for Audio-Visual Software Development

    ERIC Educational Resources Information Center

    Foster, Dennis L.

    1974-01-01

    A discussion of the basic structure of the behavioral audio-visual production which consists of objectives analysis, approach determination, technical production, fulfillment evaluation, program refinement, implementation, and follow-up. (Author)

  14. A measure for assessing the effects of audiovisual speech integration.

    PubMed

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
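
    A minimal sketch of the benchmark described above, under the stated assumption of parallel, independent, self-terminating processing: the predicted audiovisual response-time distribution follows from the unisensory ones, and observed audiovisual performance is compared against it. All response-time samples below are invented.

      # Hypothetical sketch: compare observed audiovisual RTs against the
      # prediction of parallel, independent, self-terminating processing,
      # F_AV(t) = F_A(t) + F_V(t) - F_A(t) * F_V(t). All RT samples are invented.
      import numpy as np

      rng = np.random.default_rng(3)
      rt_audio = rng.normal(520, 60, 500)        # unisensory auditory RTs (ms)
      rt_visual = rng.normal(560, 70, 500)       # unisensory visual RTs (ms)
      rt_audiovisual = rng.normal(470, 55, 500)  # audiovisual RTs (ms)

      t_grid = np.linspace(300, 800, 101)

      def ecdf(samples, times):
          return np.array([(samples <= t).mean() for t in times])

      f_a, f_v, f_av = (ecdf(s, t_grid) for s in (rt_audio, rt_visual, rt_audiovisual))
      f_predicted = f_a + f_v - f_a * f_v        # independent-parallel benchmark
      print(f"largest exceedance over the benchmark: {(f_av - f_predicted).max():.3f}")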

  15. Audiovisual Materials and Programming for Children: A Long Tradition.

    ERIC Educational Resources Information Center

    Doll, Carol A.

    1992-01-01

    Explores the use of audiovisual materials in children's programing at the Seattle Public Library prior to 1920. Kinds of materials discussed include pictures, reflectoscopes, films, sound recordings, lantern slides, and stereographs. (17 references) (MES)

  16. Proper Use of Audio-Visual Aids: Essential for Educators.

    ERIC Educational Resources Information Center

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  17. Quantifying temporal ventriloquism in audiovisual synchrony perception.

    PubMed

    Kuling, Irene A; Kohlrausch, Armin; Juola, James F

    2013-10-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from rhythm perception. In this method, participants had to align the temporal position of a target in a rhythmic sequence of four markers. In the first experiment, target and markers consisted of a visual flash or an auditory noise burst, and all four combinations of target and marker modalities were tested. In the same-modality conditions, no temporal biases and a high precision of the adjusted temporal position of the target were observed. In the different-modality conditions, we found a systematic temporal bias of 25-30 ms. In the second part of the first and in a second experiment, we tested conditions in which audiovisual markers with different stimulus onset asynchronies (SOAs) between the two components and a visual target were used to quantify temporal ventriloquism. The adjusted target positions varied by up to about 50 ms and depended in a systematic way on the SOA and its proximity to the point of subjective synchrony. These data allowed testing different quantitative models. The most satisfying model, based on work by Maij, Brenner, and Smeets (Journal of Neurophysiology 102, 490-495, 2009), linked temporal ventriloquism and the percept of synchrony and was capable of adequately describing the results from the present study, as well as those of some earlier experiments. PMID:23868564

  18. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  19. Audio-visual assistance in co-creating transition knowledge

    NASA Astrophysics Data System (ADS)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.

    2013-04-01

    Earth system and climate impact research results point to the tremendous ecological, economic, and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes to our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition relies instead on pioneers who define new role models, on change agents who mainstream the concept of sufficiency, and on narratives that make different futures appealing. In order for the research community to provide sustainable transition pathways that are viable, an integration of physical constraints and societal dynamics is needed. Hence the necessary transition knowledge is to be co-created by the social and natural sciences and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodology, particular language, and knowledge level of those involved are not the same, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way, with different levels of detail that provide entry points for users with different requirements. Two examples illustrate the advantages and restrictions of the approach.

  20. Engineering the path to higher-order thinking in elementary education: A problem-based learning approach for STEM integration

    NASA Astrophysics Data System (ADS)

    Rehmat, Abeera Parvaiz

    As we progress into the 21st century, higher-order thinking skills and achievement in science and math are essential to meet the educational requirements of STEM careers. Educators need to think of innovative ways to engage and prepare students for current and future challenges while cultivating an interest among students in STEM disciplines. An instructional pedagogy that can capture students' attention, support interdisciplinary STEM practices, and foster higher-order thinking skills is problem-based learning. Problem-based learning embedded in the social constructivist view of teaching and learning (Savery & Duffy, 1995) promotes self-regulated learning that is enhanced through exploration, cooperative social activity, and discourse (Fosnot, 1996). This quasi-experimental mixed-methods study was conducted with 98 fourth-grade students. The study utilized STEM content assessments, a standardized critical thinking test, a STEM attitude survey, a PBL questionnaire, and field notes from classroom observations to investigate the impact of problem-based learning on students' content knowledge, critical thinking, and their attitude towards STEM. Subsequently, it explored students' experiences of STEM integration in a PBL environment. The quantitative results revealed a significant difference between groups with regard to their content knowledge, critical thinking skills, and STEM attitude. From the qualitative results, three themes emerged: learning approaches, increased interaction, and design and engineering implementation. From the overall data set, students described the PBL environment as highly interactive, which prompted them to employ multiple approaches, including design and engineering, to solve the problem.

  1. Frontostriatal and mediotemporal lobe contributions to implicit higher-order spatial sequence learning declines in aging and Parkinson's disease.

    PubMed

    Schendan, Haline E; Tinaz, Sule; Maher, Stephen M; Stern, Chantal E

    2013-04-01

    Sequence learning depends on the striatal system, but recent findings also implicate the mediotemporal lobe (MTL) system. Schendan, Searl, Melrose, and Stern (2003) found higher-order associative, learning-related activation in the striatum, dorsolateral prefrontal cortex, and the MTL during the early acquisition phase of both implicit and explicit variants of a serial response time task. This functional MRI (fMRI) study capitalized on this task to determine how changes in MTL function observed in aging and compromised frontostriatal function characteristic of patients with Parkinson's disease (PD) impact sequence learning and memory under implicit instructions. Brain activity was compared between "sequence" and "random" conditions in 12 nondemented patients with PD and education- and gender-matched healthy control participants of whom 12 were age-matched (MC) and 14 were younger (YC). Behaviorally, sequence-specific learning of higher-order associations was reduced with aging and changed further with PD and resulted primarily in implicit knowledge in the older participants. fMRI revealed reduced intensity and extent of sequence learning-related activation in older relative to younger people in frontostriatal circuits and the MTL. This was because the signal was greater for the sequence than the random condition in younger people, whereas older people, especially those with PD, showed the opposite pattern. Both older groups also showed increased activation to the task itself relative to baseline fixation. In addition, right MTL showed hypoactivation and left MTL hyperactivation in PD relative to the MC group. The results suggest that changes in frontostriatal and MTL activity occur during aging that affect task-related activity and the initial acquisition phase of implicit higher-order sequence learning. In addition, the results suggest that Parkinson's disease adversely affects processes in the MTL, including sequence learning and memory.

  2. The Impact of Learning Driven Constructs on the Perceived Higher Order Cognitive Skills Improvement: Multimedia vs. Text

    ERIC Educational Resources Information Center

    Bagarukayo, Emily; Weide, Theo; Mbarika, Victor; Kim, Min

    2012-01-01

    The study aims at determining the impact of learning driven constructs on Perceived Higher Order Cognitive Skills (HOCS) improvement when using multimedia and text materials. Perceived HOCS improvement is the attainment of HOCS based on the students' perceptions. The research experiment undertaken using a case study was conducted on 223 students…

  3. Focusing Upon Higher Order Thinking Skills: WebQuests and the Learner-Centered Mathematical Learning Environment.

    ERIC Educational Resources Information Center

    Crawford, Caroline M.; Brown, Evelyn

    Emphasizing higher order thinking skills is of primary concern within mathematics. Learning environments emphasizing the learners' understanding of the subject matter are constantly under review, with one learner-centered emphasis upon instruction leading the way: WebQuests. The design and development of WebQuests for the mathematical classroom…

  4. Website Analysis as a Tool for Task-Based Language Learning and Higher Order Thinking in an EFL Context

    ERIC Educational Resources Information Center

    Roy, Debopriyo

    2014-01-01

    Besides focusing on grammar, writing skills, and web-based language learning, researchers in "CALL" and second language acquisition have also argued for the importance of promoting higher-order thinking skills in ESL (English as Second Language) and EFL (English as Foreign Language) classrooms. There is solid evidence supporting the…

  5. Using Activity Theory to Design Constructivist Online Learning Environments for Higher Order Thinking: A Retrospective Analysis

    ERIC Educational Resources Information Center

    Morrison, Dirk

    2003-01-01

    This paper examined a particular online learning activity, embedded within a computer supported collaborative learning (CSCL) environment incorporated as part of the larger context of participation in a unique national agricultural leadership development program. Process outcomes such as a high level of collaboration and active peer facilitation…

  6. Ordering Subjects: Actor-Networks and Intellectual Technologies in Lifelong Learning.

    ERIC Educational Resources Information Center

    Edwards, Richard

    2003-01-01

    Argues that discourses of lifelong learning act as intellectual technologies that construct individuals as subjects in a learning society. Discusses three discourses using actor-network theory: (1) economics/human capital (individuals as accumulators of skills for competitiveness); (2) humanistic psychology (individuals seeking fulfilment through…

  7. Granularity and the Acquisition of Grammatical Gender: How Order-of-Acquisition Affects What Gets Learned

    ERIC Educational Resources Information Center

    Arnon, Inbal; Ramscar, Michael

    2012-01-01

    Why do adult language learners typically fail to acquire second languages with native proficiency? Does prior linguistic experience influence the size of the "units" adults attend to in learning, and if so, how does this influence what gets learned? Here, we examine these questions in relation to grammatical gender, which adult learners almost…

  8. PBL-GIS in Secondary Geography Education: Does It Result in Higher-Order Learning Outcomes?

    ERIC Educational Resources Information Center

    Liu, Yan; Bui, Elisabeth N.; Chang, Chew-Hung; Lossman, Hans G.

    2010-01-01

    This article presents research on evaluating problem-based learning using GIS technology in a Singapore secondary school. A quasi-experimental research design was carried out to test the PBL pedagogy (PBL-GIS) with an experimental group of students and compare their learning outcomes with a control group who were exposed to PBL but not GIS. The…

  9. Lessons learned from implementation of computerized provider order entry in 5 community hospitals: a qualitative study

    PubMed Central

    2013-01-01

    Background Computerized Provider Order Entry (CPOE) can improve patient safety, quality and efficiency, but hospitals face a host of barriers to adopting CPOE, ranging from resistance among physicians to the cost of the systems. In response to the incentives for meaningful use of health information technology and other market forces, hospitals in the United States are increasingly moving toward the adoption of CPOE. The purpose of this study was to characterize the experiences of hospitals that have successfully implemented CPOE. Methods We used a qualitative approach to observe clinical activities and capture the experiences of physicians, nurses, pharmacists and administrators at five community hospitals in Massachusetts (USA) that adopted CPOE in the past few years. We conducted formal, structured observations of care processes in diverse inpatient settings within each of the hospitals and completed in-depth, semi-structured interviews with clinicians and staff by telephone. After transcribing the audiorecorded interviews, we analyzed the content of the transcripts iteratively, guided by principles of the Immersion and Crystallization analytic approach. Our objective was to identify attitudes, behaviors and experiences that would constitute useful lessons for other hospitals embarking on CPOE implementation. Results Analysis of observations and interviews resulted in findings about the CPOE implementation process in five domains: governance, preparation, support, perceptions and consequences. Successful institutions implemented clear organizational decision-making mechanisms that involved clinicians (governance). They anticipated the need for education and training of a wide range of users (preparation). These hospitals deployed ample human resources for live, in-person training and support during implementation. Successful implementation hinged on the ability of clinical leaders to address and manage perceptions and the fear of change. Implementation proceeded

  10. Effects of the audiovisual conflict on auditory early processes.

    PubMed

    Scannella, Sébastien; Causse, Mickaël; Chauveau, Nicolas; Pastor, Josette; Dehais, Frédéric

    2013-07-01

    Auditory alarm misperception is one of the critical events that lead aircraft pilots to an erroneous flying decision. The rarity of these alarms associated with their possible unreliability may play a role in this misperception. In order to investigate this hypothesis, we manipulated both audiovisual conflict and sound rarity in a simplified landing task. Behavioral data and event related potentials (ERPs) of thirteen healthy participants were analyzed. We found that the presentation of a rare auditory signal (i.e., an alarm), incongruent with visual information, led to a smaller amplitude of the auditory N100 (i.e., less negative) compared to the condition in which both signals were congruent. Moreover, the incongruity between the visual information and the rare sound did not significantly affect reaction times, suggesting that the rare sound was neglected. We propose that the lower N100 amplitude reflects an early visual-to-auditory gating that depends on the rarity of the sound. In complex aircraft environments, this early effect might be partly responsible for auditory alarm insensitivity. Our results provide a new basis for future aeronautic studies and the development of countermeasures.

  11. Promoting Higher Order Thinking Skills via IPTEACES e-Learning Framework in the Learning of Information Systems Units

    ERIC Educational Resources Information Center

    Isaias, Pedro; Issa, Tomayess; Pena, Nuno

    2014-01-01

    When developing and working with various types of devices from a supercomputer to an iPod Mini, it is essential to consider the issues of Human Computer Interaction (HCI) and Usability. Developers and designers must incorporate HCI, Usability and user satisfaction in their design plans to ensure that systems are easy to learn, effective,…

  12. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.

  13. Our nation's wetlands (video). Audio-Visual

    SciTech Connect

    Not Available

    1990-01-01

    The Department of the Interior is custodian of approximately 500 million acres of federally owned land and has an important role to play in the management of wetlands. To contribute to the President's goal of no net loss of America's remaining wetlands, the Department of the Interior has initiated a 3-point program consisting of wetlands protection, restoration, and research: Wetlands Protection--Reduce wetlands losses on federally owned lands and encourage state and private landholders to practice wetlands conservation; Wetlands Restoration--Increase wetlands gains through the restoration and creation of wetlands on both public and private lands; Wetlands Research--Provide a foundation of scientific knowledge to guide future actions and decisions about wetlands. The audiovisual is a slide/tape-to-video transfer illustrating the various ways Interior bureaus are working to preserve our Nation's wetlands. The tape features an introduction by Secretary Manuel Lujan on the importance of wetlands and recognizing the benefit of such programs as the North American Waterfowl Management Program.

  14. The AORN Audiovisual Committee. Thirty-three years of perioperative nursing education.

    PubMed

    Riall, C T

    1993-11-01

    The productions of the AORN Audiovisual Committee are best measured by their popularity. Enthusiastic audiences at Congress premieres and demands for bookings by chapters and individual instructors represent thousands of viewers for each film. A complete list of all films produced by the Committee is available in the archives, located in the AORN library at Headquarters. Readers may obtain a complete list of all perioperative nursing film topics produced by this program in the past 33 years from the author. The out-of-pocket costs for producing these films and videotapes during the past 33 years exceed $33 million. Of even greater significance is the great number of hours devoted by the Committee members. Serving on the Audiovisual Committee is similar to taking an advanced college course. Selected for their specialized knowledge, members need to learn and apply the techniques of the cinema. Far more than just showing a surgical procedure, teaching films require the staging of the ideal method while explaining its superiority over the method it replaces. Films must be forceful enough to overcome the common attitude, "This is the way we've always done it." In the course of writing the scripts, which are critiqued by the Committee, authors must research the subjects and review the state of the art. They are challenged to improve on the past and remain current on changing techniques. Service on the Audiovisual Committee prepares members for further involvement in AORN activities. Twenty members of the Committee have been elected to the Board of Directors, and seven of those have served as AORN Presidents. (ABSTRACT TRUNCATED AT 250 WORDS)

  15. Audio-Visual Detection Benefits in the Rat

    PubMed Central

    Gleiss, Stephanie; Kayser, Christoph

    2012-01-01

    Human psychophysical studies have described multisensory perceptual benefits such as enhanced detection rates and faster reaction times in great detail. However, the neural circuits and mechanism underlying multisensory integration remain difficult to study in the primate brain. While rodents offer the advantage of a range of experimental methodologies to study the neural basis of multisensory processing, rodent studies are still limited due to the small number of available multisensory protocols. We here demonstrate the feasibility of an audio-visual stimulus detection task for rats, in which the animals detect lateralized uni- and multi-sensory stimuli in a two-response forced choice paradigm. We show that animals reliably learn and perform this task. Reaction times were significantly faster and behavioral performance levels higher in multisensory compared to unisensory conditions. This benefit was strongest for dim visual targets, in agreement with classical patterns of multisensory integration, and was specific to task-informative sounds, while uninformative sounds speeded reaction times with little costs for detection performance. Importantly, multisensory benefits for stimulus detection and reaction times appeared at different levels of task proficiency and training experience, suggesting distinct mechanisms inducing these two multisensory benefits. Our results demonstrate behavioral multisensory enhancement in rats in analogy to behavioral patterns known from other species, such as humans. In addition, our paradigm enriches the set of behavioral tasks on which future studies can rely, for example to combine behavioral measurements with imaging or pharmacological studies in the behaving animal or to study changes of integration properties in disease models. PMID:23029179

  16. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  17. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b) Publications... published with grant support and, if feasible, on any publication reporting the results of, or describing, a... under subgrants. (2) Audiovisuals produced as research instruments or for documenting experimentation...

  18. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-17

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Institution of Investigation... importation, and the sale within the United States after importation of certain audiovisual components and... certain audiovisual components and products containing the same that infringe one or more of claims 1,...

  19. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and...

  20. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-21

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint... complaint entitled Certain Audiovisual Components and Products Containing the Same, DN 2884; the Commission... within the United States after importation of certain audiovisual components and products containing...

  1. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-21

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint... complaint entitled Certain Audiovisual Components and Products Containing the Same, DN 2884; the Commission... within the United States after importation of certain audiovisual components and products containing...

  2. Rhythmic synchronization tapping to an audio-visual metronome in budgerigars.

    PubMed

    Hasegawa, Ai; Okanoya, Kazuo; Hasegawa, Toshikazu; Seki, Yoshimasa

    2011-01-01

    In all ages and countries, music and dance have constituted a central part in human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio-visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.
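
    One standard way to quantify entrainment in a tapping task of this kind (an illustrative sketch, not necessarily the authors' analysis; the beat interval and tap times are invented) is to convert each tap to a phase within the beat cycle and compute the mean resultant vector length.

      # Illustrative sketch (not the authors' exact analysis): quantify phase
      # locking of taps to an isochronous metronome via the mean resultant vector
      # (1 = perfect locking, 0 = none). Beat interval and tap times are invented.
      import numpy as np

      beat_interval_ms = 400.0
      tap_times_ms = np.array([395.0, 802.0, 1210.0, 1601.0, 1998.0, 2405.0, 2812.0, 3196.0])

      phases = 2 * np.pi * ((tap_times_ms % beat_interval_ms) / beat_interval_ms)
      resultant = np.mean(np.exp(1j * phases))
      vector_strength = np.abs(resultant)
      mean_asynchrony_ms = np.angle(resultant) / (2 * np.pi) * beat_interval_ms

      print(f"vector strength = {vector_strength:.2f}, mean asynchrony = {mean_asynchrony_ms:+.1f} ms")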

  3. Constructing higher-order miRNA-mRNA interaction networks in prostate cancer via hypergraph-based learning

    PubMed Central

    2013-01-01

    Background Dysregulation of genetic factors such as microRNAs (miRNAs) and mRNAs has been widely shown to be associated with cancer progression and development. In particular, miRNAs and mRNAs cooperate to affect biological processes, including tumorigenesis. The complexity of miRNA-mRNA interactions presents a major barrier to identifying their co-regulatory roles and functional effects. Thus, by computationally modeling these complex relationships, it may be possible to infer the gene interaction networks underlying complicated biological processes. Results We propose a data-driven, hypergraph structural method for constructing higher-order miRNA-mRNA interaction networks from cancer genomic profiles. The proposed model explicitly characterizes higher-order relationships among genetic factors, from which cooperative gene activities in biological processes may be identified. The proposed model is learned by iteration of structure and parameter learning. The structure learning efficiently constructs a hypergraph structure by generating putative hyperedges representing complex miRNA-mRNA modules. It adopts an evolutionary method based on information-theoretic criteria. In the parameter learning phase, the constructed hypergraph is refined by updating the hyperedge weights using the gradient descent method. From the model, we produce biologically relevant higher-order interaction networks showing the properties of primary and metastatic prostate cancer, as candidates of potential miRNA-mRNA regulatory circuits. Conclusions Our approach focuses on potential cancer-specific interactions reflecting higher-order relationships between miRNAs and mRNAs from expression profiles. The constructed miRNA-mRNA interaction networks show oncogenic or tumor suppression characteristics, which are known to be directly associated with prostate cancer progression. Therefore, the hypergraph-based model can assist hypothesis formulation for the molecular pathogenesis of cancer. PMID
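
    To make the two-phase idea above concrete, the toy sketch below treats each hyperedge as a multiplicative feature over several factors and refines the hyperedge weights with plain gradient steps; the data, the candidate modules, and the loss are all invented and do not reproduce the authors' algorithm.

      # Toy sketch of weighted hyperedges: each hyperedge groups several factors
      # into one multiplicative feature, and hyperedge weights are refined by
      # gradient descent on a logistic loss. Data, modules, and phenotype are invented.
      import numpy as np

      rng = np.random.default_rng(4)
      expression = rng.normal(size=(50, 6))                        # samples x factors
      phenotype = (expression[:, [0, 2, 5]].sum(axis=1) > 0).astype(float)

      hyperedges = [(0, 2, 5), (1, 3), (2, 4, 5)]                  # candidate modules
      features = np.column_stack([expression[:, list(edge)].prod(axis=1) for edge in hyperedges])
      weights = np.zeros(len(hyperedges))

      learning_rate = 0.1
      for _ in range(200):                                         # parameter-learning phase
          predictions = 1.0 / (1.0 + np.exp(-features @ weights))
          gradient = features.T @ (predictions - phenotype) / len(phenotype)
          weights -= learning_rate * gradient

      print(np.round(weights, 2))  # refined hyperedge weights (illustration only)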

  4. Neural correlates of audiovisual speech processing in a second language.

    PubMed

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  5. Rhesus Monkeys (Macaca Mulatta) Maintain Learning Set Despite Second-Order Stimulus-Response Spatial Discontiguity

    ERIC Educational Resources Information Center

    Beran, Michael J.; Washburn, David A.; Rumbaugh, Duane M.

    2007-01-01

    In many discrimination-learning tests, spatial separation between stimuli and response loci disrupts performance in rhesus macaques. However, monkeys are unaffected by such stimulus-response spatial discontiguity when responses occur through joystick-based computerized movement of a cursor. To examine this discrepancy, five monkeys were tested on…

  6. Authentic Role-Playing as Situated Learning: Reframing Teacher Education Methodology for Higher-Order Thinking

    ERIC Educational Resources Information Center

    Leaman, Lori Hostetler; Flanagan, Toni Michele

    2013-01-01

    This article draws from situated learning theory, teacher education research, and the authors' collaborative self-study to propose a teacher education pedagogy that may help to bridge the theory-into-practice gap for preservice teachers. First, we review the Interstate Teacher Assessment and Support Consortium standards to confirm the call for…

  7. Complimentary lower-level and higher-order systems underpin imitation learning.

    PubMed

    Andrew, Matthew; Bennett, Simon J; Elliott, Digby; Hayes, Spencer J

    2016-04-01

    We examined whether the temporal representation developed during motor training with reduced-frequency knowledge of results (KR; feedback available on every other trial) was transferred to an imitation learning task. To this end, four groups first practised a three-segment motor sequence task with different KR protocols. Two experimental groups received reduced-frequency KR, one group received high-frequency KR (feedback available on every trial), and one received no-KR. Compared to the no-KR group, the groups that received KR learned the temporal goal of the movement sequence, as evidenced by increased accuracy and consistency across training. Next, all groups learned a single-segment movement that had the same temporal goal as the motor sequence task but required the imitation of biological and nonbiological motion kinematics. Kinematic data showed that whilst all groups imitated biological motion kinematics, the two experimental reduced-frequency KR groups were on average ∼ 800 ms more accurate at imitating movement time than the high-frequency KR and no-KR groups. The interplay between learning biological motion kinematics and the transfer of temporal representation indicates imitation involves distinct, but complementary lower-level sensorimotor and higher-level cognitive processing systems.

  8. The Black Record: A Selective Discography of Afro-Americana on Audio Discs Held by the Audio/Visual Department, John M. Olin Library.

    ERIC Educational Resources Information Center

    Dain, Bernice, Comp.; Nevin, David, Comp.

    The present revised and expanded edition of this document is an inclusive cumulation. A few items have been included which are on order as new to the collection or as replacements. This discography is intended to serve primarily as a local user's guide. The call number preceding each entry is based on the Audio-Visual Department's own, unique…

  9. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute the first step towards the perspective to exploit multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate

  10. Prefrontal Neuronal Responses during Audiovisual Mnemonic Processing

    PubMed Central

    Hwang, Jaewon

    2015-01-01

    During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. For the nonmatch conditions, a change in the auditory component (vocalization), the visual component (face), or both components was detected. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is not only involved in the perceptual processing of faces and vocalizations but also in their mnemonic processing. PMID:25609614

  11. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2001-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language and the appropriate meaning in another language has to be chosen between two alternatives. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasting several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV). Identification accuracy for those words, produced by two talkers, was also assessed. During the pretest, accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words using visual information only. The effect of translation training with AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  12. Assessment of Higher Order Thinking Skills. Current Perspectives on Cognition, Learning and Instruction

    ERIC Educational Resources Information Center

    Schraw, Gregory, Ed.; Robinson, Daniel H., Ed.

    2011-01-01

    This volume examines the assessment of higher order thinking skills from the perspectives of applied cognitive psychology and measurement theory. The volume considers a variety of higher order thinking skills, including problem solving, critical thinking, argumentation, decision making, creativity, metacognition, and self-regulation. Fourteen…

  13. Authentic Instruction for 21st Century Learning: Higher Order Thinking in an Inclusive School

    ERIC Educational Resources Information Center

    Preus, Betty

    2012-01-01

    The author studied a public junior high school identified as successfully implementing authentic instruction. Such instruction emphasizes higher order thinking, deep knowledge, substantive conversation, and value beyond school. To determine in what ways higher order thinking was fostered both for students with and without disabilities, the author…

  14. Higher Order Thinking Skills among Secondary School Students in Science Learning

    ERIC Educational Resources Information Center

    Saido, Gulistan Mohammed; Siraj, Saedah; Bin Nordin, Abu Bakar; Al Amedy, Omed Saadallah

    2015-01-01

    A central goal of science education is to help students to develop their higher order thinking skills to enable them to face the challenges of daily life. Enhancing students' higher order thinking skills is the main goal of the Kurdish Science Curriculum in the Iraqi-Kurdistan region. This study aimed at assessing 7th grade students' higher order…

  15. Educating for Identity & Resistance: Situated Learning among the Old Order Mennonites.

    ERIC Educational Resources Information Center

    Cowles, Spencer L.

    An essential aspect of Old Order Mennonite identity is located in the historical-cultural understanding of who they are as one group of God's people. Schooling is an intentional means of reinforcing this understanding, and it is finely tuned to prepare children for the Old Order way of life. As such, it emphasizes basic academics, acquisition of…

  16. Audiovisual non-verbal dynamic faces elicit converging fMRI and ERP responses.

    PubMed

    Brefczynski-Lewis, Julie; Lowitszch, Svenja; Parsons, Michael; Lemieux, Susan; Puce, Aina

    2009-05-01

    In an everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input-a phenomenon previously well-studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal, human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum in which AV activation was greater than either modality alone, but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed Common-activation in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for auditory N140 and face-sensitive N170, and late AV maximum and common-activation effects. Based on convergence between fMRI and ERP data, we propose a mechanism where a multisensory stimulus may be signaled or facilitated as early as 60 ms and facilitated in sensory-specific regions by increasing processing speed (at N170) and efficiency (decreasing amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.

  18. The Max-Min High-Order Dynamic Bayesian Network for Learning Gene Regulatory Networks with Time-Delayed Regulations.

    PubMed

    Li, Yifeng; Chen, Haifen; Zheng, Jie; Ngom, Alioune

    2016-01-01

    Accurately reconstructing a gene regulatory network (GRN) from gene expression data is a challenging task in systems biology. Although some progress has been made, the performance of GRN reconstruction still has much room for improvement. Because many regulatory events are asynchronous, learning gene interactions with multiple time delays is an effective way to improve the accuracy of GRN reconstruction. Here, we propose a new approach, called the Max-Min high-order dynamic Bayesian network (MMHO-DBN), by extending the Max-Min hill-climbing Bayesian network technique originally devised for learning a Bayesian network's structure from static data. Our MMHO-DBN can explicitly model the time lags between regulators and targets in an efficient manner. It first uses constraint-based ideas to limit the space of potential structures, and then applies search-and-score ideas to search for an optimal HO-DBN structure. The performance of MMHO-DBN for GRN reconstruction was evaluated using both synthetic and real gene expression time-series data. Results show that MMHO-DBN is more accurate than current time-delayed GRN learning methods, and has intermediate computational performance. Furthermore, it is able to learn long time-delayed relationships between genes. We applied sensitivity analysis to our model to study performance variation across different parameter settings. The results provide hints on how to set the parameters of MMHO-DBN.
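
    A minimal sketch of the core idea of scoring time-lagged regulators for a target gene, assuming a toy linear-Gaussian BIC-style score and an exhaustive search over single lagged parents; the constraint-based pruning and hill-climbing of MMHO-DBN are not reproduced here.

```python
# Sketch of scoring time-lagged regulator sets for one target gene, in the
# spirit of a high-order DBN: a parent may act with lag 1..max_lag.
# The BIC-style linear-Gaussian score is an assumption for illustration,
# not the MMHO-DBN implementation from the paper.
import itertools
import numpy as np

def lagged_design(X, parents_with_lags, t0):
    """Design matrix whose columns are parent expressions shifted by their lags."""
    cols = [X[t0 - lag:X.shape[0] - lag, p] for p, lag in parents_with_lags]
    return np.column_stack(cols) if cols else np.empty((X.shape[0] - t0, 0))

def bic_score(X, target, parents_with_lags, max_lag):
    y = X[max_lag:, target]
    D = lagged_design(X, parents_with_lags, max_lag)
    D = np.column_stack([np.ones(len(y)), D])            # intercept
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    rss = np.sum((y - D @ beta) ** 2)
    n, k = len(y), D.shape[1]
    return -n * np.log(rss / n + 1e-12) - k * np.log(n)  # higher is better

rng = np.random.default_rng(1)
T, G, max_lag = 100, 5, 3
X = rng.normal(size=(T, G))
X[2:, 0] += 0.8 * X[:-2, 3]            # gene 3 regulates gene 0 with lag 2

candidates = [(p, lag) for p in range(1, G) for lag in range(1, max_lag + 1)]
best = max(itertools.combinations(candidates, 1),
           key=lambda ps: bic_score(X, target=0, parents_with_lags=list(ps), max_lag=max_lag))
print(best)   # expected to recover parent 3 at lag 2 on this toy series
```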

  19. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

    A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, called the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian-learning-based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
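
    A rough sketch of the reconstruction objective described above, assuming a smoothed lp pseudo-norm of the second-order difference and plain gradient descent in place of the paper's sequential conjugate-gradient solver; the signal, sensing matrix, and parameters are illustrative, and the dictionary-learning stage is omitted.

```python
# Sketch of lp-pseudo-norm-of-second-difference reconstruction from compressive
# measurements: minimize ||y - Phi x||^2 + lam * sum((d_i^2 + eps)^(p/2)),
# where d = second-order difference of x. Plain gradient descent stands in for
# the paper's sequential conjugate-gradient solver; all values are illustrative.
import numpy as np

def second_diff_matrix(n):
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return D

def reconstruct(y, Phi, p=0.5, lam=0.05, eps=1e-6, lr=0.01, n_iter=3000):
    n = Phi.shape[1]
    D = second_diff_matrix(n)
    x = Phi.T @ y                                   # simple initialisation
    for _ in range(n_iter):
        d = D @ x
        grad_fit = 2.0 * Phi.T @ (Phi @ x - y)
        grad_reg = D.T @ (p * d * (d ** 2 + eps) ** (p / 2 - 1.0))
        x -= lr * (grad_fit + lam * grad_reg)
    return x

rng = np.random.default_rng(2)
n, m = 256, 96                                      # signal length, measurements
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)  # smooth stand-in signal
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x_true
x_hat = reconstruct(y, Phi)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```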

  20. Cogging effect minimization in PMSM position servo system using dual high-order periodic adaptive learning compensation.

    PubMed

    Luo, Ying; Chen, Yangquan; Pi, Youguo

    2010-10-01

    The cogging effect, which can be treated as a type of position-dependent periodic disturbance, is a serious disadvantage of the permanent magnet synchronous motor (PMSM). In this paper, based on a simulation system model of PMSM position servo control, the cogging force, viscous friction, and applied load in the real PMSM control system are considered and presented. A dual high-order periodic adaptive learning compensation (DHO-PALC) method is proposed to minimize the cogging effect on the PMSM position and velocity servo system. In this DHO-PALC scheme, stored information from more than one previous period, covering both the composite tracking error and the estimate of the cogging force, is used to update the control law. An asymptotic stability proof for the proposed DHO-PALC scheme is presented. Simulations are performed on the PMSM servo system model to illustrate the proposed method. When a constant speed reference is applied, the DHO-PALC achieves faster learning convergence than the first-order periodic adaptive learning compensation (FO-PALC). Moreover, when the designed reference signal changes periodically, the proposed DHO-PALC obtains not only a faster convergence speed but also a much smaller final error bound than the FO-PALC.
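
    A toy sketch of a high-order periodic adaptive learning update on a first-order velocity loop with a position-periodic, cogging-like disturbance; the plant, gains, and two-period update law are assumptions for illustration, not the DHO-PALC design from the paper.

```python
# Toy high-order periodic adaptive learning compensation: a feedforward table,
# indexed by position bin, is rebuilt at every period boundary from the two
# previously stored compensation tables and the two previously stored error
# tables. All numbers are illustrative assumptions.
import numpy as np

dt, T = 1e-3, 20.0
steps = int(T / dt)
N = 500                                   # samples per disturbance period
a, b = 5.0, 1.0                           # toy plant: v' = -a*v + b*(u - d)
kp = 20.0
alpha = [0.6, 0.4]                        # weights on the two stored compensation tables
kappa = [5.0, 3.0]                        # learning gains on the two stored error tables

v, v_ref = 0.0, 1.0
comp_hist = [np.zeros(N), np.zeros(N)]    # compensation tables of the last two periods
err_hist = [np.zeros(N), np.zeros(N)]     # error tables (current period fills err_hist[0])
comp = np.zeros(N)
err_rms_per_period, err_acc, count = [], 0.0, 0

for k in range(steps):
    i = k % N
    if i == 0 and k > 0:
        # Period boundary: high-order update from the two stored periods.
        new_comp = (alpha[0] * comp_hist[0] + alpha[1] * comp_hist[1]
                    + kappa[0] * err_hist[0] + kappa[1] * err_hist[1])
        comp_hist = [new_comp, comp_hist[0]]
        err_hist = [np.zeros(N), err_hist[0]]
        comp = new_comp
        err_rms_per_period.append(np.sqrt(err_acc / count))
        err_acc, count = 0.0, 0
    d = 0.4 * np.sin(2 * np.pi * i / N)   # cogging-like periodic disturbance
    e = v_ref - v
    err_hist[0][i] = e
    u = kp * e + comp[i]
    v += dt * (-a * v + b * (u - d))
    err_acc += e * e
    count += 1

# RMS tracking error per period; later periods should be much smaller than the first.
print(err_rms_per_period[:3], err_rms_per_period[-3:])
```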

  1. A model-based comparison of three theories of audiovisual temporal recalibration.

    PubMed

    Yarrow, Kielan; Minaei, Shora; Arnold, Derek H

    2015-12-01

    Observers change their audio-visual timing judgements after exposure to asynchronous audiovisual signals. The mechanism underlying this temporal recalibration is currently debated. Three broad explanations have been suggested. According to the first, the time it takes for sensory signals to propagate through the brain has changed. The second explanation suggests that decisional criteria used to interpret signal timing have changed, but not time perception itself. A final possibility is that a population of neurones collectively encode relative times, and that exposure to a repeated timing relationship alters the balance of responses in this population. Here, we simplified each of these explanations to its core features in order to produce three corresponding six-parameter models, which generate contrasting patterns of predictions about how simultaneity judgements should vary across four adaptation conditions: No adaptation, synchronous adaptation, and auditory leading/lagging adaptation. We tested model predictions by fitting data from all four conditions simultaneously, in order to assess which model/explanation best described the complete pattern of results. The latency-shift and criterion-change models were better able to explain results for our sample as a whole. The population-code model did, however, account for improved performance following adaptation to a synchronous adapter, and best described the results of a subset of observers who reported least instances of synchrony. PMID:26545105
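
    A schematic sketch of the simultaneity-judgement observer underlying such models, assuming Gaussian timing noise and two decision criteria; note that in this simplified form a uniform shift of both criteria mimics a latency shift, which is why the abstract describes fitting all four adaptation conditions jointly.

```python
# Schematic simultaneity-judgement (SJ) observer used to contrast two of the
# three accounts: a latency shift moves the effective SOA, a criterion change
# moves the decision boundaries. Parameter values are illustrative only.
import numpy as np
from scipy.stats import norm

def p_synchronous(soa, lo, hi, sigma, latency_shift=0.0):
    """P('synchronous') for an audio-visual SOA in ms."""
    s = soa + latency_shift
    return norm.cdf((hi - s) / sigma) - norm.cdf((lo - s) / sigma)

soas = np.linspace(-300, 300, 13)
baseline = p_synchronous(soas, lo=-80, hi=120, sigma=60)
latency_model = p_synchronous(soas, lo=-80, hi=120, sigma=60, latency_shift=40)  # shifted timing
criterion_model = p_synchronous(soas, lo=-40, hi=160, sigma=60)                  # shifted criteria

for s, base, lat, crit in zip(soas, baseline, latency_model, criterion_model):
    print(f"SOA {s:6.0f} ms  base {base:.2f}  latency-shift {lat:.2f}  criterion-change {crit:.2f}")
```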

  2. Modulation of neural activity during observational learning of actions and their sequential orders.

    PubMed

    Frey, Scott H; Gerry, Valerie E

    2006-12-20

    How does the brain transform perceptual representations of others' actions into motor representations that can be used to guide behavior? Here we used functional magnetic resonance imaging to record human brain activity while subjects watched others construct multipart objects under varied task demands. We find that relative to resting baseline, passive action observation increases activity within inferior frontal and parietal cortices implicated in action encoding (mirror system) and throughout a distributed network of areas involved in motor representation, including dorsal premotor cortex, pre-supplementary motor area, cerebellum, and basal ganglia (experiments 1 and 2). Relative to passive observation, these same areas show increased activity when subjects observe with the intention to subsequently reproduce component actions using the demonstrated sequential procedures (experiment 1). Observing the same actions with the intention of reproducing component actions, but without the requirement to use the demonstrated sequential procedure, increases activity in the same regions, although to a lesser degree (experiment 2). These findings demonstrate that when attempting to learn behaviors through observation, the observers' intentions modulate responses in a widely distributed network of cortical and subcortical regions implicated previously in action encoding and/or motor representation. Among these regions, only activity within the right intraparietal sulcus predicts the accuracy with which observed procedures are subsequently performed. Successful formation of motor representations of sequential procedures through observational learning is dependent on computations implemented within this parietal region. PMID:17182769

  3. Audiovisual Aids and Techniques in Managerial and Supervisory Training.

    ERIC Educational Resources Information Center

    Rigg, Robinson P.

    An attempt is made to show the importance of modern audiovisual (AV) aids and techniques to management training. The first two chapters give the background to the present situation facing the training specialist. Chapter III considers the AV aids themselves in four main groups: graphic materials, display equipment which involves projection, and…

  4. Media Literacy and Audiovisual Languages: A Case Study from Belgium

    ERIC Educational Resources Information Center

    Van Bauwel, Sofie

    2008-01-01

    This article examines the use of media in the construction of a "new" language for children. We studied how children acquire and use media literacy skills through their engagement in an educational art project. This media literacy project is rooted in the realm of audiovisual media, within which children's sound and visual worlds are the focus of…

  5. Neural Development of Networks for Audiovisual Speech Comprehension

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  6. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  7. Preference for Audiovisual Speech Congruency in Superior Temporal Cortex.

    PubMed

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Auditory speech perception can be altered by concurrent visual information. The superior temporal cortex is an important combining site for this integration process. This area was previously found to be sensitive to audiovisual congruency. However, the direction of this congruency effect (i.e., stronger or weaker activity for congruent compared to incongruent stimulation) has been more equivocal. Here, we used fMRI to look at the neural responses of human participants during the McGurk illusion--in which auditory /aba/ and visual /aga/ inputs are fused to perceived /ada/--in a large homogenous sample of participants who consistently experienced this illusion. This enabled us to compare the neuronal responses during congruent audiovisual stimulation with incongruent audiovisual stimulation leading to the McGurk illusion while avoiding the possible confounding factor of sensory surprise that can occur when McGurk stimuli are only occasionally perceived. We found larger activity for congruent audiovisual stimuli than for incongruent (McGurk) stimuli in bilateral superior temporal cortex, extending into the primary auditory cortex. This finding suggests that superior temporal cortex prefers when auditory and visual input support the same representation.

  8. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    ERIC Educational Resources Information Center

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  9. Selected Audio-Visual Materials for Consumer Education. [New Version.

    ERIC Educational Resources Information Center

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  10. Audiovisual Integration in Noise by Children and Adults

    ERIC Educational Resources Information Center

    Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G.; Innes-Brown, Hamish; Shivdasani, Mohit N.; Paolini, Antonio G.

    2010-01-01

    The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-,…

  12. The Audio-Visual Equipment Directory. Seventeenth Edition.

    ERIC Educational Resources Information Center

    Herickes, Sally, Ed.

    The following types of audiovisual equipment are catalogued: 8 mm. and 16 mm. motion picture projectors, filmstrip and sound filmstrip projectors, slide projectors, random access projection equipment, opaque, overhead, and micro-projectors, record players, special purpose projection equipment, audio tape recorders and players, audio tape…

  13. Audio-Visual Equipment Depreciation. RDU-75-07.

    ERIC Educational Resources Information Center

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  14. Selected Bibliography and Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This guide to resource materials on environmental education is in two sections: 1) Selected Bibliography of Printed Materials, compiled in April, 1970; and, 2) Audio-Visual materials, Films and Filmstrips, compiled in February, 1971. 99 book annotations are given with an indicator of elementary, junior or senior high school levels. Other book…

  15. Sur Quatre Methodes Audio-Visuelles (On Four Audiovisual Methods)

    ERIC Educational Resources Information Center

    Porquier, Remy; Vives, Robert

    1974-01-01

    This is a critical examination of four audiovisual methods for the teaching of French as a Foreign Language. The methods have as a common basis the interrelationship of image, dialogue, situation, and give grammar priority over vocabulary. (Text is in French.) (AM)

  16. Audiovisual Fundamentals; Basic Equipment Operation and Simple Materials Production.

    ERIC Educational Resources Information Center

    Bullard, John R.; Mether, Calvin E.

    A guide illustrated with simple sketches explains the functions and step-by-step uses of audiovisual (AV) equipment. Principles of projection, audio, AV equipment, lettering, limited-quantity and quantity duplication, and materials preservation are outlined. Apparatus discussed include overhead, opaque, slide-filmstrip, and multiple-loading slide…

  17. An Audio-Visual Lecture Course in Russian Culture

    ERIC Educational Resources Information Center

    Leighton, Lauren G.

    1977-01-01

    An audio-visual course in Russian culture is given at Northern Illinois University. A collection of 4-5,000 color slides is the basis for the course, with lectures focussed on literature, philosophy, religion, politics, art and crafts. Acquisition, classification, storage and presentation of slides, and organization of lectures are discussed. (CHK)

  18. Searching AVLINE for Curriculum-Related Audiovisual Instructional Materials.

    ERIC Educational Resources Information Center

    Bridgman, Charles F.; Suter, Emanuel

    1979-01-01

    Ways in which the National Library of Medicine's online data file of audiovisual instructional materials (AVLINE) can be searched are described. The search approaches were developed with the assistance of data analysts at NLM trained in reference services. AVLINE design, search strategies, and acquisition of the materials are reported. (LBH)

  19. Guide to Audiovisual Terminology. Product Information Supplement, Number 6.

    ERIC Educational Resources Information Center

    Trzebiatowski, Gregory, Ed.

    1968-01-01

    The terms appearing in this glossary have been specifically selected for use by educators from a larger text, which was prepared by the Commission on Definition and Terminology of the Department of Audiovisual Instruction of the National Education Association. Specialized areas covered in the glossary include audio reproduction, audiovisual…

  20. Context-specific effects of musical expertise on audiovisual integration.

    PubMed

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  2. Context-specific effects of musical expertise on audiovisual integration

    PubMed Central

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  3. Problem-Based Learning and Use of Higher-Order Thinking by Emergency Medical Technicians

    ERIC Educational Resources Information Center

    Rosenberger, Paul

    2013-01-01

    Emergency Medical Technicians (EMTs) often handle chaotic life-and-death situations that require higher-order thinking skills. Improving the pass rate of EMT students depends on many factors, including the use of proven and effective teaching methods. Results from recent research about effective teaching have suggested that the instructional…

  4. Punjabis Learning English: Word Order. TEAL Occasional Papers, Vol. l, 1977.

    ERIC Educational Resources Information Center

    Seesahai, Maureen

    When teaching English as a second language to speakers of Punjabi, it is useful for the teacher to have some knowledge of the students' native language. This paper analyzes the differences in word order between English and Punjabi. The five basic sentence patterns in English are contrasted with the equivalent sentence patterns in Punjabi.…

  5. Audio/Visual Aids: A Study of the Effect of Audio/Visual Aids on the Comprehension Recall of Students.

    ERIC Educational Resources Information Center

    Bavaro, Sandra

    A study investigated whether the use of audio/visual aids had an effect upon comprehension recall. Thirty fourth-grade students from an urban public school were randomly divided into two equal samples of 15. One group was given a story to read (print only), while the other group viewed a filmstrip of the same story, thereby utilizing audio/visual…

  6. Audiovisual physics reports: students' video production as a strategy for the didactic laboratory

    NASA Astrophysics Data System (ADS)

    Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.

    2012-01-01

    Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can work as a motivating aspect to make them active and reflective in their learning, intellectually engaged in a recursive process. This project was implemented in high school level physics laboratory classes resulting in 22 videos which are considered as audiovisual reports and analysed under two components: theoretical and experimental. This kind of project allows the students to spontaneously use features such as music, pictures, dramatization, animations, etc, even when the didactic laboratory may not be the place where aesthetic and cultural dimensions are generally developed. This could be due to the fact that digital media are more legitimately used as cultural tools than as teaching strategies.

  7. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advancement in brain computer interfaces (BCI) technology allows people to actively interact in the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footsteps sound and actual humanoid's walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve motor decisions of the BCI's user and help in the feeling of control over it. Our results shed light on the possibility to increase robot's control through the combination of multisensory feedback to a BCI user. PMID:24987350

  8. The influence of trial order on learning from reward vs. punishment in a probabilistic categorization task: experimental and computational analyses

    PubMed Central

    Moustafa, Ahmed A.; Gluck, Mark A.; Herzallah, Mohammad M.; Myers, Catherine E.

    2015-01-01

    Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials; (2) negative feedback for incorrect responses in punishment trials; and (3) no feedback for incorrect answers in reward trials and correct answers in punishment trials. Hence, trials without feedback are ambiguous, and may represent either successful avoidance of punishment or failure to obtain reward. In Experiment 1, the first group of subjects received an intermixed task in which reward and punishment trials were presented in the same block, as a standard baseline task. In Experiment 2, a second group completed the separated task, in which reward and punishment trials were presented in separate blocks. Additionally, in order to understand the mechanisms underlying performance in the experimental conditions, we fit individual data using a Q-learning model. Results from Experiment 1 show that subjects who completed the intermixed task paradoxically valued the no-feedback outcome as a reinforcer when it occurred on reinforcement-based trials, and as a punisher when it occurred on punishment-based trials. This is supported by patterns of empirical responding, where subjects showed more win-stay behavior following an explicit reward than following an omission of punishment, and more lose-shift behavior following an explicit punisher than following an omission of reward. In Experiment 2, results showed similar performance whether subjects received reward-based or punishment-based trials first. However, when the Q-learning model was applied to these data, there were differences between subjects in the reward

  9. The influence of trial order on learning from reward vs. punishment in a probabilistic categorization task: experimental and computational analyses.

    PubMed

    Moustafa, Ahmed A; Gluck, Mark A; Herzallah, Mohammad M; Myers, Catherine E

    2015-01-01

    Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials; (2) negative feedback for incorrect responses in punishment trials; and (3) no feedback for incorrect answers in reward trials and correct answers in punishment trials. Hence, trials without feedback are ambiguous, and may represent either successful avoidance of punishment or failure to obtain reward. In Experiment 1, the first group of subjects received an intermixed task in which reward and punishment trials were presented in the same block, as a standard baseline task. In Experiment 2, a second group completed the separated task, in which reward and punishment trials were presented in separate blocks. Additionally, in order to understand the mechanisms underlying performance in the experimental conditions, we fit individual data using a Q-learning model. Results from Experiment 1 show that subjects who completed the intermixed task paradoxically valued the no-feedback outcome as a reinforcer when it occurred on reinforcement-based trials, and as a punisher when it occurred on punishment-based trials. This is supported by patterns of empirical responding, where subjects showed more win-stay behavior following an explicit reward than following an omission of punishment, and more lose-shift behavior following an explicit punisher than following an omission of reward. In Experiment 2, results showed similar performance whether subjects received reward-based or punishment-based trials first. However, when the Q-learning model was applied to these data, there were differences between subjects in the reward
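
    A minimal Q-learning sketch in the spirit of the model fitting described above, with a single assumed free parameter for the subjective value of the ambiguous no-feedback outcome; the task structure and parameter values are simplified illustrations, not the study's fitted model.

```python
# Minimal Q-learning sketch for a reward/punishment category task: one free
# parameter gives the subjective value of the ambiguous no-feedback outcome.
# Task structure and parameter values are simplified assumptions.
import numpy as np

rng = np.random.default_rng(3)

def run_agent(n_trials=200, alpha=0.2, beta=3.0, no_feedback_value=0.0):
    """Softmax Q-learner; trials alternate between reward-based and punishment-based."""
    Q = np.zeros((2, 2))                 # Q[trial_type, action]; type 0 = reward, 1 = punishment
    correct = 0
    for t in range(n_trials):
        trial_type = t % 2
        p = np.exp(beta * Q[trial_type]) / np.exp(beta * Q[trial_type]).sum()
        a = rng.choice(2, p=p)
        is_correct = (a == 0)            # action 0 is arbitrarily the correct one
        if trial_type == 0:              # reward trial: +1 if correct, otherwise no feedback
            outcome = 1.0 if is_correct else no_feedback_value
        else:                            # punishment trial: -1 if wrong, otherwise no feedback
            outcome = -1.0 if not is_correct else no_feedback_value
        Q[trial_type, a] += alpha * (outcome - Q[trial_type, a])
        correct += is_correct
    return correct / n_trials

# Changing the subjective value assigned to 'no feedback' changes learning behaviour.
print(run_agent(no_feedback_value=0.0), run_agent(no_feedback_value=-0.5))
```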

  11. Simulation of Parkinsonian gait by fusing trunk learned patterns and a lower limb first order model

    NASA Astrophysics Data System (ADS)

    Cárdenas, Luisa; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Parkinson's disease is a neurodegenerative disorder that progressively affects movement. Gait analysis is therefore crucial for determining the degree of the disease as well as for orienting the diagnosis. However, gait examination is largely subjective and therefore prone to errors or misinterpretations, even with great expertise. In addition, the conventional evaluation follows general gait variables, which amounts to ignoring subtle changes that can modify the course of treatment. This work presents a functional gait model that simulates the center of gravity (CoG) trajectory for different Parkinson disease stages. The model mimics the gait trajectory by coupling two models: a double pendulum (single stance phase) and a spring-mass model (double stance). Realistic simulations for different Parkinson disease stages are then obtained by integrating into the model a set of trunk bending patterns learned from real patients. The proposed model was compared with the CoG of real parkinsonian gaits in stages 2, 3, and 4, achieving correlation coefficients of 0.88, 0.92, and 0.86, respectively.
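
    A toy stand-in for the spring-mass (double-stance) component of such a CoG model, integrating a vertical point mass on a linear leg spring with explicit Euler; the double-pendulum single-stance phase and the learned trunk patterns are omitted, and all constants are illustrative rather than taken from the paper.

```python
# Toy vertical spring-mass stance: the CoG is a point mass on a linear leg
# spring, integrated with explicit Euler. Mass, stiffness, and initial state
# are illustrative assumptions, not parameters from the paper.
import numpy as np

m, k, g, L0 = 70.0, 2.0e4, 9.81, 0.95   # mass (kg), leg stiffness (N/m), rest length (m)
dt, T = 1e-4, 0.6
y, vy = 0.94, -0.1                       # start slightly compressed, moving down
traj = []
for _ in range(int(T / dt)):
    spring = k * max(L0 - y, 0.0)        # leg force only while the spring is compressed
    ay = spring / m - g
    vy += ay * dt
    y += vy * dt
    traj.append(y)
print(f"vertical CoG range: {min(traj):.3f}-{max(traj):.3f} m")
```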

  12. Psychometric testing of the Pecka Grading Rubric for evaluating higher-order thinking in distance learning.

    PubMed

    Pecka, Shannon; Schmid, Kendra; Pozehl, Bunny

    2014-12-01

    This article describes development of the Pecka Grading Rubric (PGR) as a strategy to facilitate and evaluate students' higher-order thinking in discussion boards. The purpose of this study was to describe psychometric properties of the PGR. Rubric reliability was pilot tested on a discussion board assignment used by 15 senior student registered nurse anesthetists enrolled in an Advanced Principles of Anesthesia course. Interrater and intrarater reliabilities were tested using an intraclass correlation coefficient (ICC) to evaluate absolute agreement of scoring. Raters gave each category a score, scores of the categories were summed, and a total score was calculated for the entire rubric. Interrater (ICC = 0.939, P < .001) and intrarater (ICC = 0.902 to 0.994, P < .001) reliabilities were excellent for total point scores. A content validity index was used to evaluate content validity. Raters evaluated content validity of each cell of the PGR. The content validity index (0.8-1.0) was acceptable. Known-group validity was evaluated by comparing graduate student registered nurse anesthetists (N = 7) with undergraduate senior nursing students (N = 13). Beginning evidence indicates a valid and reliable instrument that measures higher-order thinking in the student registered nurse anesthetist.
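
    For reference, a short sketch of the absolute-agreement intraclass correlation, ICC(2,1), on which this kind of interrater reliability analysis rests; the score matrix below is made-up illustration data, not the study's ratings.

```python
# Sketch of the two-way random-effects, absolute-agreement ICC(2,1)
# computed from a subjects-by-raters score matrix. The ratings are
# made-up illustration data.
import numpy as np

def icc_2_1(scores):
    """scores: (n_subjects, k_raters) matrix of rubric totals."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                      # between-subjects mean square
    msc = ss_cols / (k - 1)                      # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))           # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = np.array([[18, 17, 18],
                    [12, 13, 12],
                    [20, 19, 20],
                    [ 9, 10,  9],
                    [15, 15, 16]], dtype=float)  # 5 assignments x 3 raters
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```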

  14. Your Most Essential Audiovisual Aid--Yourself!

    ERIC Educational Resources Information Center

    Hamp-Lyons, Elizabeth

    2012-01-01

    Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…

  15. Audio-visual interactions in product sound design

    NASA Astrophysics Data System (ADS)

    Özcan, Elif; van Egmond, René

    2010-02-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral part of the main product concept. Because visual aspects of a product are considered to dominate the communication of the desired product concept, sound is usually expected to fit the visual character of a product. We argue that this can be accomplished successfully only on basis of a thorough understanding of the impact of audio-visual interactions on product sounds. Two experimental studies are reviewed to show audio-visual interactions on both perceptual and cognitive levels influencing the way people encode, recall, and attribute meaning to product sounds. Implications for sound design are discussed defying the natural tendency of product designers to analyze the "sound problem" in isolation from the other product properties.

  16. Audio-visual communication and its use in palliative care.

    PubMed

    Coyle, Nessa; Khojainova, Natalia; Francavilla, John M; Gonzales, Gilbert R

    2002-02-01

    Telemedicine technology has been used for more than 20 years across different areas of medicine, providing medical care for geographically isolated patients and connecting geographically isolated clinicians. Today, audio-visual technology may be useful in palliative care for patients who lack access to medical services because of their medical condition rather than geographic isolation. We report the results of a three-month trial of audio-visual communication as a complementary tool in the care of a complex palliative care patient. Benefits of this system to the patient included 1) a daily limited physical examination, 2) screening for the need for a clinical visit or admission, 3) lip reading by the deaf patient, and 4) satisfaction of the patient and the caregivers with this form of communication as a complement to telephone communication. A brief overview of the historical perspective on telemedicine and a listing of applied telemedicine programs are provided.

  17. Computationally Efficient Clustering of Audio-Visual Meeting Data

    NASA Astrophysics Data System (ADS)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
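
    The chapter's own methods work in the compressed (MPEG-4) domain and use full speaker diarization; the toy sketch below only illustrates the final association step in the simplest possible terms, pairing each audio channel with the visual activity stream it correlates with best. The function names and the synthetic per-frame features are assumptions for illustration, not the chapter's algorithms.

```python
import numpy as np

def associate_audio_visual(speech_activity, visual_activity):
    """Pair each audio channel with the visual stream whose per-frame activity
    it correlates with most strongly (a crude stand-in for the association
    step described in the chapter)."""
    n_audio, n_video = len(speech_activity), len(visual_activity)
    corr = np.zeros((n_audio, n_video))
    for a in range(n_audio):
        for v in range(n_video):
            corr[a, v] = np.corrcoef(speech_activity[a], visual_activity[v])[0, 1]
    return corr.argmax(axis=1)  # index of the best-matching video stream per audio channel

# Hypothetical per-frame features for a 3-person meeting (10 fps, 60 s):
# binary "is speaking" traces and continuous visual-motion magnitudes.
rng = np.random.default_rng(0)
speaking = rng.random((3, 600)) > 0.6
motion = speaking.astype(float) + 0.4 * rng.random((3, 600))
print(associate_audio_visual(speaking.astype(float), motion))  # expected: [0 1 2]
```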

  18. Neural development of networks for audiovisual speech comprehension

    PubMed Central

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2009-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the neurobiological substrate in the child compares to the adult is unknown. In particular, developmental differences in the network for audiovisual speech comprehension could manifest through the incorporation of additional brain regions, or through different patterns of effective connectivity. In the present study, we used functional magnetic resonance imaging and structural equation modeling (SEM) to characterize the developmental changes in network interactions for audiovisual speech comprehension. The brain response was recorded while children 8 to 11 years old and adults passively listened to stories under audiovisual (AV) and auditory-only (A) conditions. Results showed that in children and adults, AV comprehension activated the same fronto-temporo-parietal network of regions known for their contribution to speech production and perception. However, the SEM network analysis revealed age-related differences in the functional interactions among these regions. In particular, the influence of the posterior inferior frontal gyrus/ventral premotor cortex on supramarginal gyrus differed across age groups during AV, but not A speech. This functional pathway might be important for relating motor and sensory information used by the listener to identify speech sounds. Further, its development might reflect changes in the mechanisms that relate visual speech information to articulatory speech representations through experience producing and perceiving speech. PMID:19781755

  19. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    NASA Astrophysics Data System (ADS)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of an audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is recalibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of audiovisual synchrony perception) on speech signals after observation of speech stimuli that had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., the proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) depended at least partly on the exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of audiovisual synchrony perception), using stimuli identical to those of Experiment 1, to exclude the possibility that this modulation of synchrony perception was solely attributable to response strategy. The characteristics of the McGurk effect reported by participants depended on the exposure lag. Thus, audiovisual synchrony perception for speech can be modulated following exposure to a constant lag in both direct and indirect measurements. Our results suggest that temporal recalibration occurs not only for non-speech signals but also for monosyllabic speech at the perceptual level.

  20. The role of emotion in dynamic audiovisual integration of faces and voices.

    PubMed

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration.

  1. Audiovisual integration of speech in a patient with Broca's Aphasia

    PubMed Central

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  2. Multi-pose lipreading and audio-visual speech recognition

    NASA Astrophysics Data System (ADS)

    Estellers, Virginia; Thiran, Jean-Philippe

    2012-12-01

    In this article, we study the adaptation of visual and audio-visual speech recognition systems to non-ideal visual conditions. We focus on overcoming the effects of a changing pose of the speaker, a problem encountered in natural situations where the speaker moves freely and does not keep a frontal pose in relation to the camera. To handle these situations, we introduce a pose normalization block into a standard system and generate virtual frontal views from non-frontal images. The proposed method is inspired by pose-invariant face recognition and relies on linear regression to find an approximate mapping between images from different poses. We integrate the proposed pose normalization block at different stages of the speech recognition system and quantify the loss of performance related to pose changes and pose normalization techniques. In audio-visual experiments, we also analyze the integration of the audio and visual streams. We show that an audio-visual system should account for non-frontal poses and normalization techniques when setting the weight assigned to the visual stream in the classifier.
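
    The abstract states that virtual frontal views are generated by linear regression between poses but does not give the exact formulation. The following is a minimal sketch under that assumption: a ridge-regularized least-squares map from vectorized non-frontal mouth-region images to frontal ones, with synthetic arrays standing in for real paired recordings.

```python
import numpy as np

def fit_pose_mapping(non_frontal, frontal, ridge=1e-3):
    """Ridge least-squares mapping W such that frontal ~= non_frontal @ W.
    Rows are vectorized images from paired recordings of the two poses."""
    X, Y = np.asarray(non_frontal, float), np.asarray(frontal, float)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ Y)

def virtual_frontal_view(image_vecs, W):
    """Generate approximate frontal views from non-frontal image vectors."""
    return np.asarray(image_vecs, float) @ W

# Hypothetical training pairs: 300 vectorized 16x16 mouth regions per pose.
rng = np.random.default_rng(1)
profile = rng.random((300, 256))
true_map = 0.1 * rng.random((256, 256))   # synthetic "true" pose relation
frontal = profile @ true_map
W = fit_pose_mapping(profile, frontal)
err = np.abs(virtual_frontal_view(profile, W) - frontal).max()
print(f"max training reconstruction error: {err:.4f}")
```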

  3. Audiovisual integration of speech in a patient with Broca's Aphasia.

    PubMed

    Andersen, Tobias S; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  4. Infants' preference for native audiovisual speech dissociated from congruency preference.

    PubMed

    Shaw, Kathleen; Baart, Martijn; Depowski, Nicole; Bortfeld, Heather

    2015-01-01

    Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native versus non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  5. Audiovisual Delay as a Novel Cue to Visual Distance

    PubMed Central

    Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R.; Tadin, Duje

    2015-01-01

    For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance. PMID:26509795

  6. The development of the perception of audiovisual simultaneity.

    PubMed

    Chen, Yi-Chuan; Shore, David I; Lewis, Terri L; Maurer, Daphne

    2016-06-01

    We measured the typical developmental trajectory of the window of audiovisual simultaneity by testing four age groups of children (5, 7, 9, and 11 years) and adults. We presented a visual flash and an auditory noise burst at various stimulus onset asynchronies (SOAs) and asked participants to report whether the two stimuli were presented at the same time. Compared with adults, children aged 5 and 7 years made more simultaneous responses when the SOAs were beyond ± 200 ms but made fewer simultaneous responses at the 0 ms SOA. The point of subjective simultaneity was located at the visual-leading side, as in adults, by 5 years of age, the youngest age tested. However, the window of audiovisual simultaneity became narrower and response errors decreased with age, reaching adult levels by 9 years of age. Experiment 2 ruled out the possibility that the adult-like performance of 9-year-old children was caused by the testing of a wide range of SOAs. Together, the results demonstrate that the adult-like precision of perceiving audiovisual simultaneity is developed by 9 years of age, the youngest age that has been reported to date.
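
    The abstract describes a window of audiovisual simultaneity and a point of subjective simultaneity (PSS) on the visual-leading side, but not the fitting procedure. A common way to summarize simultaneity-judgment data, sketched below with made-up group proportions, is to fit a Gaussian to the proportion of "same time" responses across SOAs; the peak location estimates the PSS and the spread indexes window width. The sign convention (positive SOA = visual leading) and all numbers here are assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def simultaneity_curve(soa, amp, pss, sigma):
    """Proportion of 'simultaneous' responses as a Gaussian function of SOA (ms).
    pss = point of subjective simultaneity; sigma indexes window width."""
    return amp * np.exp(-((soa - pss) ** 2) / (2.0 * sigma ** 2))

# Hypothetical group data; positive SOA = visual stimulus leads (assumed convention).
soas = np.array([-500, -300, -200, -100, 0, 100, 200, 300, 500], dtype=float)
p_simultaneous = np.array([0.10, 0.30, 0.55, 0.80, 0.90, 0.88, 0.62, 0.35, 0.12])

(amp, pss, sigma), _ = curve_fit(simultaneity_curve, soas, p_simultaneous,
                                 p0=[1.0, 0.0, 200.0])
print(f"PSS = {pss:.0f} ms (visual-leading if > 0), window sigma = {sigma:.0f} ms")
```

    Comparing the fitted sigma across age groups would quantify the narrowing of the simultaneity window with age described above.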

  7. Audiovisual integration of speech in a patient with Broca's Aphasia.

    PubMed

    Andersen, Tobias S; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia.

  8. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    PubMed Central

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  9. Audiovisual integration for speech during mid-childhood: electrophysiological evidence.

    PubMed

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-12-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception.
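
    Attenuation effects like the N1 and P2 reductions described above are typically quantified as the difference in mean component amplitude between auditory-only and audiovisual grand averages within fixed latency windows. The sketch below illustrates that bookkeeping on synthetic waveforms; the latency windows, sampling rate, and data are assumptions rather than the authors' parameters.

```python
import numpy as np

def mean_amplitude(erp, times, window):
    """Mean amplitude of an averaged ERP (microvolts) within a latency window (s)."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return erp[mask].mean()

# Hypothetical grand-average waveforms sampled at 500 Hz, -0.05 to 0.5 s.
fs = 500
times = np.arange(-0.05, 0.5, 1.0 / fs)
rng = np.random.default_rng(2)
n1 = -4.0 * np.exp(-(((times - 0.10) / 0.02) ** 2))
p2 = 3.0 * np.exp(-(((times - 0.20) / 0.03) ** 2))
auditory = n1 + p2 + 0.2 * rng.standard_normal(times.size)
audiovisual = 0.7 * auditory  # toy attenuation of both components

# Assumed latency windows (actual windows differ across studies).
for name, window in (("N1", (0.08, 0.12)), ("P2", (0.18, 0.25))):
    a = mean_amplitude(auditory, times, window)
    av = mean_amplitude(audiovisual, times, window)
    print(f"{name}: A = {a:+.2f} uV, AV = {av:+.2f} uV, AV-minus-A = {av - a:+.2f} uV")
```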

  10. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy.

    PubMed

    Fava, Eswen; Hull, Rachel; Bortfeld, Heather

    2014-01-01

    Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14-months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity. PMID:25116572

  11. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    PubMed Central

    Fava, Eswen; Hull, Rachel; Bortfeld, Heather

    2014-01-01

    Initially, infants are capable of discriminating phonetic contrasts across the world’s languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14-months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity. PMID:25116572

  12. Employing Transformative Learning Theory in the Design and Implementation of a Curriculum for Court-Ordered Participants in a Parent Education Class

    ERIC Educational Resources Information Center

    Taylor, Mariann B.; Hill, Lilian H.

    2016-01-01

    This study sought to analyze the experiences of participants in court-ordered parent education with the ultimate goal to identify a framework, which promotes learning that is transformative. Participants included 11 parents court ordered to attend parent education classes through the Department of Human Services. A basic qualitative design, which…

  13. Active Drumming Experience Increases Infants’ Sensitivity to Audiovisual Synchrony during Observed Drumming Actions

    PubMed Central

    Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience in sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights into the embodied roots of (early) music perception and cognition. PMID:26111226

  14. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    PubMed

    Gerson, Sarah A; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience in sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights into the embodied roots of (early) music perception and cognition. PMID:26111226

  15. Heart House: Where Doctors Learn

    ERIC Educational Resources Information Center

    American School and University, 1978

    1978-01-01

    The new learning center and administrative headquarters of the American College of Cardiology in Bethesda, Maryland, contain a unique classroom equipped with the highly sophisticated audiovisual aids developed to teach the latest techniques in the diagnosis and treatment of heart disease. (Author/MLF)

  16. Networked Learning in 70001 Programs.

    ERIC Educational Resources Information Center

    Fine, Marija Futchs

    The 70001 Training and Employment Institute offers young people self-paced instruction through the use of computers and audiovisual materials to improve their opportunities for success in the work force. In 1988, four sites were equipped with Apple stand-alone software in an integrated learning system that included courses in reading and math, test…

  17. Les Moyens Audio-Visuels et la Strategie Pedagogique (Audiovisual Methods and Pedagogical Strategy). Melanges Pedagogiques, 1971.

    ERIC Educational Resources Information Center

    Holec, H.

    This article discusses the relationship between audiovisual methods and pedagogical strategies, or between technology and instruction, in second language teaching. Currently, the relationship between audiovisual methods and language instruction is one in which the audiovisual component is subservient, and plays a supplementary rather than a…

  18. Training Methodology, Part IV: Audiovisual Theory, Aids and Equipment. An Annotated Bibliography. Public Health Service Publication No. 1862, Part IV.

    ERIC Educational Resources Information Center

    Health Services and Mental Health Administration (DHEW), Bethesda, MD.

    A total of 332 annotated references pertaining to media aspects of training are organized under the following headings: (1) Audiovisual Theory and Research, (2) Audiovisual Methods (General), (3) Audiovisual Equipment (General), (4) Computers in Instruction, (5) Television Instruction, (6) Videotape Recordings, (7) Television Facilities, (8) Radio…

  19. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    36 CFR 1256.100 (Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United States): copying policy for USIA audiovisual records that either have copyright protection or contain copyrighted material.

  20. A Citation Comparison of Sourcebooks for Audiovisuals to AVLINE Records: Access and the Chief Source of Information.

    ERIC Educational Resources Information Center

    Weimer, Katherine Hart

    1994-01-01

    Discusses cataloging audiovisual materials and the concept of chief source of information and describes a study that compared citations from fully cataloged audiovisual records with their corresponding citations from bibliographic sourcebooks, based on records in AVLINE (National Library of Medicine's Audiovisual On-Line Catalog). Examples of…

  1. Mobile Guide System Using Problem-Solving Strategy for Museum Learning: A Sequential Learning Behavioural Pattern Analysis

    ERIC Educational Resources Information Center

    Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.

    2010-01-01

    Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…

  2. Audiovisual Equipment in Educational Facilities Today. AVE in Japan No. 29.

    ERIC Educational Resources Information Center

    Japan Audiovisual Information Center for International Service, Tokyo.

    This report summarizes a 1989 update of a 1986 survey on the diffusion and utilization of audiovisual media and equipment in Japan. A comparison of the two reveals the advancements in types of audiovisual equipment available to schools and social education facilities in Japan which have developed in only 3 years. An outline of the equipment…

  3. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... audiovisual productions (e.g., short and long versions or foreign-language versions) are prepared, keep an..., including captions and published and unpublished catalogs, inventories, indexes, and production files and similar documentation created in the course of audiovisual production. Establish and communicate...

  4. An Analysis of Audiovisual Machines for Individual Program Presentation. Research Memorandum Number Two.

    ERIC Educational Resources Information Center

    Finn, James D.; Weintraub, Royd

    The Medical Information Project's (MIP) purpose, to select the right type of audiovisual equipment for communicating new medical information to general practitioners of medicine, was hampered by numerous difficulties. There is a lack of uniformity and standardization in audiovisual equipment that amounts to chaos. There is no evaluative literature on…

  5. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  6. Age-related audiovisual interactions in the superior colliculus of the rat.

    PubMed

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces reaction times toward simple audiovisual targets in space. However, when a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about how multisensory integration is processed in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs of aging, we sought to gain some insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in the spatial processing of moving audiovisual objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions.
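
    The superadditive/additive/suppressive labels used above are conventionally assigned by comparing the multisensory response with the sum of the unimodal responses. The sketch below shows one common version of that classification applied to hypothetical firing rates; the tolerance, the exact criteria, and the example values are assumptions, not the authors' analysis.

```python
import numpy as np

def classify_interaction(av, a, v, tol=0.05):
    """Classify a multisensory response against the additive prediction.

    av, a, v: mean firing rates (spikes/s) for audiovisual, auditory-only,
    and visual-only presentations.  One common convention (assumed here):
      superadditive : AV exceeds A + V
      additive      : AV ~ A + V (within tolerance)
      suppressive   : AV falls below the best unimodal response
    """
    predicted = a + v
    if av > predicted * (1 + tol):
        return "superadditive"
    if av < max(a, v):
        return "suppressive"
    if abs(av - predicted) <= predicted * tol:
        return "additive"
    return "subadditive"

# Hypothetical collicular units (AV, A, V firing rates in spikes/s).
for unit in [(30.0, 10.0, 12.0), (21.5, 10.0, 12.0), (8.0, 10.0, 12.0)]:
    print(unit, "->", classify_interaction(*unit))
```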

  7. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    16 CFR 307.8 (Commercial Practices; Federal Trade Commission regulations under the … Act of 1986; Advertising Disclosures): requirements for disclosure in audiovisual and audio advertising.

  8. Twice upon a time: multiple concurrent temporal recalibrations of audiovisual speech.

    PubMed

    Roseboom, Warrick; Arnold, Derek H

    2011-07-01

    Audiovisual timing perception can recalibrate following prolonged exposure to asynchronous auditory and visual inputs. It has been suggested that this might contribute to achieving perceptual synchrony for auditory and visual signals despite differences in physical and neural signal times for sight and sound. However, given that people can be concurrently exposed to multiple audiovisual stimuli with variable neural signal times, a mechanism that recalibrates all audiovisual timing percepts to a single timing relationship could be dysfunctional. In the experiments reported here, we showed that audiovisual temporal recalibration can be specific for particular audiovisual pairings. Participants were shown alternating movies of male and female actors containing positive and negative temporal asynchronies between the auditory and visual streams. We found that audiovisual synchrony estimates for each actor were shifted toward the preceding audiovisual timing relationship for that actor and that such temporal recalibrations occurred in positive and negative directions concurrently. Our results show that humans can form multiple concurrent estimates of appropriate timing for audiovisual synchrony.

  9. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  10. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    PubMed

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  11. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    ERIC Educational Resources Information Center

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  12. A Management Review and Analysis of Purdue University Libraries and Audio-Visual Center.

    ERIC Educational Resources Information Center

    Baaske, Jan; And Others

    A management review and analysis was conducted by the staff of the libraries and audio-visual center of Purdue University. Not only were the study team and the eight task forces drawn from all levels of the libraries and audio-visual center staff, but a systematic effort was sustained through inquiries, draft reports and open meetings to involve…

  13. Audio-Visual Techniques for Industry. Development and Transfer of Technology Series No. 6.

    ERIC Educational Resources Information Center

    Halas, John; Martin-Harris, Roy

    Intended for use by persons in developing countries responsible for initiating or expanding the use of audiovisual facilities and techniques in industry, this manual is designed for those who have limited background in audiovisuals but need detailed information about how certain techniques may be employed in an economical, efficient way. Part one,…

  14. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    36 CFR 1237.18 (Parks, Forests, and Public Property; Records Management): environmental standards for audiovisual records storage.

  15. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    PubMed Central

    Tse, Regina; Martin, Darren; McLean, Lisa; Cho, Gwi; Hill, Robin; Pickard, Sheila; Aston, Paul; Huang, Chen‐Yu; Makhija, Kuldeep; O'Brien, Ricky; Keall, Paul

    2015-01-01

    Summary This case report details a clinical trial's first recruited liver cancer patient who underwent a course of stereotactic body radiation therapy treatment utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed. PMID:26247520

  16. A Team Approach to Developing an Audiovisual Single-Concept Instructional Unit.

    ERIC Educational Resources Information Center

    Brooke, Martha L.; And Others

    1974-01-01

    In 1973, the National Medical Audiovisual Center undertook the production of several audiovisual teaching units, each addressing a single-concept, using a team approach. The production team on the unit "Left Ventricle Catheterization" were a physiologist acting as content specialist, an artist and film producer as production specialist, and an…

  17. Audiovisual Materials in Archives--A General Picture of Their Role and Function.

    ERIC Educational Resources Information Center

    Booms, Hans

    Delivered on behalf of the International Council of Archives (ICA), this paper briefly discusses the challenge inherent in the processing and preservation of audiovisual materials, the types of media included in the term audiovisual, the concerns of professional archivists, the development and services of archival institutions, the utilization of…

  18. Planning Schools for Use of Audio-Visual Materials. No. 1--Classrooms, 3rd Edition.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

    Intended to inform school board administrators and teachers of the current (1958) thinking on audio-visual instruction for use in planning new buildings, purchasing equipment, and planning instruction. Attention is given the problem of overcoming obstacles to the incorporation of audio-visual materials into the curriculum. Discussion includes--(1)…

  19. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    ERIC Educational Resources Information Center

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  20. The level of audiovisual print-speech integration deficits in dyslexia.

    PubMed

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e., different brain responses to congruent and incongruent stimuli, were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli would be superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed the effects detected by the two techniques to be compared at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  1. Musical expertise is related to altered functional connectivity during audiovisual integration.

    PubMed

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D; Pantev, Christo

    2015-10-01

    The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources' activity, and the corresponding networks were statistically compared. Nonmusicians' results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians' results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual clues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity.
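
    The abstract states that connectivity was computed from the estimated mutual information of the sources' activity but does not give the estimator. A minimal, generic sketch is shown below: a histogram-based mutual-information estimate applied to every pair of reconstructed source time courses to build a symmetric connectivity matrix. The bin count and the synthetic data are assumptions, not the study's pipeline.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (bits) between two source time courses."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def connectivity_matrix(sources, bins=16):
    """Pairwise MI between all reconstructed sources, shape (n_sources, n_samples)."""
    n = sources.shape[0]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_information(sources[i], sources[j], bins)
    return mi

# Hypothetical source activity: 4 cortical sources, 5000 samples.
rng = np.random.default_rng(3)
s = rng.standard_normal((4, 5000))
s[1] = 0.7 * s[0] + 0.3 * s[1]   # induce dependence between sources 0 and 1
print(np.round(connectivity_matrix(s), 3))
```

    A group comparison would then test, edge by edge, whether such MI values differ between musicians and nonmusicians.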

  2. Musical expertise is related to altered functional connectivity during audiovisual integration

    PubMed Central

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo

    2015-01-01

    The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual clues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305

  3. Seeing and hearing rotated faces: influences of facial orientation on visual and audiovisual speech recognition.

    PubMed

    Jordan, T R; Bevan, K

    1997-04-01

    It is well known that facial orientation affects the processing of static facial information, but similar effects on the processing of visual speech have yet to be explored fully. Three experiments are reported in which the effects of facial orientation on visual speech processing were examined using a talking face presented at 8 orientations through 360 degrees. Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /ma/, /mi/, /ta/, and /ti/ were used to produce the following speech stimulus types: auditory, visual, congruent audiovisual, and incongruent audiovisual. Facial orientation did not affect identification of visual speech per se or the near-perfect accuracy of auditory speech report with congruent audiovisual speech stimuli. However, facial orientation did affect the accuracy of auditory speech report with incongruent audiovisual speech stimuli. Moreover, the nature of this effect depended on the type of incongruent visual speech used. Implications for the processing of visual and audiovisual speech are discussed. PMID:9104001
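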

  4. Musical expertise is related to altered functional connectivity during audiovisual integration.

    PubMed

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D; Pantev, Christo

    2015-10-01

    The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources' activity, and the corresponding networks were statistically compared. Nonmusicians' results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians' results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual clues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305

  5. Neural substrate for higher-order learning in an insect: Mushroom bodies are necessary for configural discriminations.

    PubMed

    Devaud, Jean-Marc; Papouin, Thomas; Carcaud, Julie; Sandoz, Jean-Christophe; Grünewald, Bernd; Giurfa, Martin

    2015-10-27

    Learning theories distinguish elemental from configural learning based on their different complexity. Although the former relies on simple and unambiguous links between the learned events, the latter deals with ambiguous discriminations in which conjunctive representations of events are learned as being different from their elements. In mammals, configural learning is mediated by brain areas that are either dispensable or partially involved in elemental learning. We studied whether the insect brain follows the same principles and addressed this question in the honey bee, the only insect in which configural learning has been demonstrated. We used a combination of conditioning protocols, disruption of neural activity, and optophysiological recording of olfactory circuits in the bee brain to determine whether mushroom bodies (MBs), brain structures that are essential for memory storage and retrieval, are equally necessary for configural and elemental olfactory learning. We show that bees with anesthetized MBs distinguish odors and learn elemental olfactory discriminations but not configural ones, such as positive and negative patterning. Inhibition of GABAergic signaling in the MB calyces, but not in the lobes, impairs patterning discrimination, thus suggesting a requirement of GABAergic feedback neurons from the lobes to the calyces for nonelemental learning. These results uncover a previously unidentified role for MBs besides memory storage and retrieval: namely, their implication in the acquisition of ambiguous discrimination problems. Thus, in insects as in mammals, specific brain regions are recruited when the ambiguity of learning tasks increases, a fact that reveals similarities in the neural processes underlying the elucidation of ambiguous tasks across species.
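
    Negative patterning, mentioned above, is the classic case where elemental learning fails: each element predicts reward on its own (A+, B+) but the compound does not (AB-). The toy least-squares example below shows why a purely elemental (linear) representation cannot fit these contingencies, while adding a single configural unit for the compound solves the problem. It illustrates the learning-theory distinction only, not the bee experiments themselves.

```python
import numpy as np

# Negative patterning contingencies: A+, B+, AB-.
X_elem = np.array([[1, 0], [0, 1], [1, 1]], dtype=float)       # rows: A, B, AB
X_conf = np.column_stack([X_elem, X_elem.prod(axis=1)])        # add configural AB unit
y = np.array([1.0, 1.0, 0.0])                                  # reward schedule

for name, X in (("elemental", X_elem), ("configural", X_conf)):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(name, "predictions:", np.round(X @ w, 2))
# elemental  -> roughly [0.33 0.33 0.67]: cannot separate elements from compound
# configural -> [1. 1. 0.]: solves the discrimination exactly
```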

  6. Visual Mislocalization of Moving Objects in an Audiovisual Event

    PubMed Central

    Kawachi, Yousuke

    2016-01-01

    The present study investigated the influence of an auditory tone on the localization of visual objects in the stream/bounce display (SBD). In this display, two identical visual objects move toward each other, overlap, and then return to their original positions. These objects can be perceived as either streaming through or bouncing off each other. In this study, the closest distance between object centers on opposing trajectories and tone presentation timing (none, 0 ms, ± 90 ms, and ± 390 ms relative to the instant for the closest distance) were manipulated. Observers were asked to judge whether the two objects overlapped with each other and whether the objects appeared to stream through, bounce off each other, or reverse their direction of motion. A tone presented at or around the instant of the objects’ closest distance biased judgments toward “non-overlapping,” and observers overestimated the physical distance between objects. A similar bias toward direction change judgments (bounce and reverse, not stream judgments) was also observed, which was always stronger than the non-overlapping bias. Thus, these two types of judgments were not always identical. Moreover, another experiment showed that it was unlikely that this observed mislocalization could be explained by other previously known mislocalization phenomena (i.e., representational momentum, the Fröhlich effect, and a turn-point shift). These findings indicate a new example of crossmodal mislocalization, which can be obtained without temporal offsets between audiovisual stimuli. The mislocalization effect is also specific to a more complex stimulus configuration of objects on opposing trajectories, with a tone that is presented simultaneously. The present study promotes an understanding of relatively complex audiovisual interactions beyond simple one-to-one audiovisual stimuli used in previous studies. PMID:27111759

  7. Utilization of audio-visual aids by family welfare workers.

    PubMed

    Naik, V R; Jain, P K; Sharma, B B

    1977-01-01

    Communication efforts have been an important component of the Indian Family Planning Welfare Program since its inception. However, its chief interests in its early years were clinical, until the adoption of the extension approach in 1963. Educational materials were developed, especially in the period 1965-68, to fit mass, group-meeting, and home-visit approaches. Audiovisual aids were developed for use by extension workers, who had previously relied entirely on verbal approaches. This paper examines their use. A questionnaire was designed for workers in motivational programs at 3 levels: Village Level (Family Planning Health Assistant, Auxiliary Nurse-Midwife, Dias), Block Level (Public Health Nurse, Lady Health Visitor, Block Extension Educator), and District Level (District Extension Educator, District Mass Education and Information Officer). Three districts were selected from each State on the basis of overall family planning performance during 1970-72 (good, average, or poor). Units of other agencies were also included on the same basis. Findings: 1) Workers in all 3 categories preferred individual contacts over group meetings or the mass approach. 2) 56-64% said they used audiovisual aids "sometimes" (when available), 25% said they used them "many times," and only 15.9% said "rarely." 3) More than half of the workers in each category said they were not properly oriented toward the use of audiovisual aids. Nonavailability of the aids in the market was also cited. About 1/3 of village-level and 1/2 of other workers said that the materials were heavy and liable to be damaged. Complexity, inaccuracy, and confusion in use were not widely cited (less than 30%).

  8. Development of sensitivity to audiovisual temporal asynchrony during midchildhood.

    PubMed

    Kaganovich, Natalya

    2016-02-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7- to 8-year-olds, 10- to 11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether nonverbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2-kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs): 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition), and in the other half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of reaction time (RT) at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10- to 11-year-olds outperforming 7- to 8-year-olds at the 300- to 500-ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function (such as autism, specific language impairment, and dyslexia) may be compared. PMID:26569563
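
    As a hedged illustration of how synchrony responses across SOAs are often summarized (the paper's own analysis is not reproduced here), the sketch below fits a Gaussian-shaped curve to invented proportions of "synchronous" responses; its width parameter serves as a rough index of the temporal binding window, and its center as the point of subjective simultaneity.

```python
# Illustrative sketch (not the paper's analysis): fit a Gaussian-shaped curve to
# the proportion of "synchronous" responses across signed SOAs (negative = auditory
# first, positive = visual first). The response proportions below are invented.
import numpy as np
from scipy.optimize import curve_fit

def synchrony_curve(soa_ms, amplitude, center, width):
    """Probability of a 'synchronous' response as a function of SOA in ms."""
    return amplitude * np.exp(-((soa_ms - center) ** 2) / (2.0 * width ** 2))

soas = np.array([-500, -400, -300, -200, -100, 0, 100, 200, 300, 400, 500], float)
p_sync = np.array([0.15, 0.25, 0.45, 0.70, 0.90, 0.95, 0.92, 0.75, 0.55, 0.35, 0.20])

(amplitude, center, width), _ = curve_fit(synchrony_curve, soas, p_sync,
                                          p0=[1.0, 0.0, 200.0])
print(f"point of subjective simultaneity ~ {center:.0f} ms, "
      f"window width (SD) ~ {width:.0f} ms")
```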

  9. "Singing in the Tube"--audiovisual assay of plant oil repellent activity against mosquitoes (Culex pipiens).

    PubMed

    Adams, Temitope F; Wongchai, Chatchawal; Chaidee, Anchalee; Pfeiffer, Wolfgang

    2016-01-01

    Plant essential oils have been suggested as a promising alternative to the established mosquito repellent DEET (N,N-diethyl-meta-toluamide). Searching for an assay with generally available equipment, we designed a new audiovisual assay of repellent activity against mosquitoes "Singing in the Tube," testing single mosquitoes in Drosophila cultivation tubes. Statistics with regression analysis should compensate for limitations of simple hardware. The assay was established with female Culex pipiens mosquitoes in 60 experiments, 120-h audio recording, and 2580 estimations of the distance between mosquito sitting position and the chemical. Correlations between parameters of sitting position, flight activity pattern, and flight tone spectrum were analyzed. Regression analysis of psycho-acoustic data of audio files (dB[A]) used a squared and modified sinus function determining wing beat frequency WBF ± SD (357 ± 47 Hz). Application of logistic regression defined the repelling velocity constant. The repelling velocity constant showed a decreasing order of efficiency of plant essential oils: rosemary (Rosmarinus officinalis), eucalyptus (Eucalyptus globulus), lavender (Lavandula angustifolia), citronella (Cymbopogon nardus), tea tree (Melaleuca alternifolia), clove (Syzygium aromaticum), lemon (Citrus limon), patchouli (Pogostemon cablin), DEET, cedar wood (Cedrus atlantica). In conclusion, we suggest (1) disease vector control (e.g., impregnation of bed nets) by eight plant essential oils with repelling velocity superior to DEET, (2) simple mosquito repellency testing in Drosophila cultivation tubes, (3) automated approaches and room surveillance by generally available audio equipment (dB[A]: ISO standard 226), and (4) quantification of repellent activity by parameters of the audiovisual assay defined by correlation and regression analyses.
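
    The abstract names two quantitative ingredients: recovering the wing beat frequency from the recorded flight tone and fitting a model whose rate constant quantifies how quickly mosquitoes are repelled. The sketch below is a loose, simplified stand-in for both steps using invented data; the paper's exact functional forms (the squared and modified sinus model, the definition of the repelling velocity constant) are not reproduced here.

```python
# Loose stand-ins for the two analysis steps (illustrative only; functional forms
# and data are assumptions, not the published model).
import numpy as np
from scipy.optimize import curve_fit

# (1) Wing beat frequency from a toy flight-tone trace: 1 s of a 357 Hz tone
# sampled at 8 kHz plus noise, estimated from the peak of the amplitude spectrum.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 357.0 * t) + 0.2 * np.random.randn(t.size)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak_hz = freqs[np.abs(np.fft.rfft(tone)).argmax()]
print(f"estimated wing beat frequency ~ {peak_hz:.0f} Hz")

# (2) Logistic increase of mosquito-to-repellent distance over time; the fitted
# rate plays the role of a repelling velocity constant in this toy version.
def logistic_distance(minutes, d_max, rate, t_half):
    return d_max / (1.0 + np.exp(-rate * (minutes - t_half)))

minutes = np.array([0, 1, 2, 4, 8, 16, 32], dtype=float)
distance_cm = np.array([2, 3, 5, 9, 14, 17, 18], dtype=float)
(d_max, rate, t_half), _ = curve_fit(logistic_distance, minutes, distance_cm,
                                     p0=[18.0, 0.5, 4.0])
print(f"repelling rate constant ~ {rate:.2f} per minute")
```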

  10. "Singing in the Tube"--audiovisual assay of plant oil repellent activity against mosquitoes (Culex pipiens).

    PubMed

    Adams, Temitope F; Wongchai, Chatchawal; Chaidee, Anchalee; Pfeiffer, Wolfgang

    2016-01-01

    Plant essential oils have been suggested as a promising alternative to the established mosquito repellent DEET (N,N-diethyl-meta-toluamide). Searching for an assay with generally available equipment, we designed a new audiovisual assay of repellent activity against mosquitoes "Singing in the Tube," testing single mosquitoes in Drosophila cultivation tubes. Statistics with regression analysis should compensate for limitations of simple hardware. The assay was established with female Culex pipiens mosquitoes in 60 experiments, 120-h audio recording, and 2580 estimations of the distance between mosquito sitting position and the chemical. Correlations between parameters of sitting position, flight activity pattern, and flight tone spectrum were analyzed. Regression analysis of psycho-acoustic data of audio files (dB[A]) used a squared and modified sinus function determining wing beat frequency WBF ± SD (357 ± 47 Hz). Application of logistic regression defined the repelling velocity constant. The repelling velocity constant showed a decreasing order of efficiency of plant essential oils: rosemary (Rosmarinus officinalis), eucalyptus (Eucalyptus globulus), lavender (Lavandula angustifolia), citronella (Cymbopogon nardus), tea tree (Melaleuca alternifolia), clove (Syzygium aromaticum), lemon (Citrus limon), patchouli (Pogostemon cablin), DEET, cedar wood (Cedrus atlantica). In conclusion, we suggest (1) disease vector control (e.g., impregnation of bed nets) by eight plant essential oils with repelling velocity superior to DEET, (2) simple mosquito repellency testing in Drosophila cultivation tubes, (3) automated approaches and room surveillance by generally available audio equipment (dB[A]: ISO standard 226), and (4) quantification of repellent activity by parameters of the audiovisual assay defined by correlation and regression analyses. PMID:26412058

  11. Sources of Confusion in Infant Audiovisual Speech Perception Research.

    PubMed

    Shaw, Kathleen E; Bortfeld, Heather

    2015-01-01

    Speech is a multimodal stimulus, with information provided in both the auditory and visual modalities. The resulting audiovisual signal provides relatively stable, tightly correlated cues that support speech perception and processing in a range of contexts. Despite the clear relationship between spoken language and the moving mouth that produces it, there remains considerable disagreement over how sensitive early language learners (infants) are to whether and how sight and sound co-occur. Here we examine sources of this disagreement, with a focus on how comparisons of data obtained using different paradigms and different stimuli may serve to exacerbate misunderstanding.

  12. Sources of Confusion in Infant Audiovisual Speech Perception Research

    PubMed Central

    Shaw, Kathleen E.; Bortfeld, Heather

    2015-01-01

    Speech is a multimodal stimulus, with information provided in both the auditory and visual modalities. The resulting audiovisual signal provides relatively stable, tightly correlated cues that support speech perception and processing in a range of contexts. Despite the clear relationship between spoken language and the moving mouth that produces it, there remains considerable disagreement over how sensitive early language learners—infants—are to whether and how sight and sound co-occur. Here we examine sources of this disagreement, with a focus on how comparisons of data obtained using different paradigms and different stimuli may serve to exacerbate misunderstanding. PMID:26696919

  13. Lexical and context effects in children's audiovisual speech recognition

    NASA Astrophysics Data System (ADS)

    Holt, Rachael; Kirk, Karen; Pisoni, David; Burckhartzmeyer, Lisa; Lin, Anna

    2005-09-01

    The Audiovisual Lexical Neighborhood Sentence Test (AVLNST), a new, recorded speech recognition test for children with sensory aids, was administered in multiple presentation modalities to children with normal hearing and vision. Each sentence consists of three key words whose lexical difficulty is controlled according to the Neighborhood Activation Model (NAM) of spoken word recognition. According to NAM, the recognition of spoken words is influenced by two lexical factors: the frequency of occurrence of individual words in a language, and how phonemically similar the target word is to other words in the listener's lexicon. These predictions are based on auditory similarity only, and thus do not take into account how visual information can influence the perception of speech. Data from the AVLNST, together with those from recorded audiovisual versions of isolated word recognition measures, the Lexical Neighborhood and Multisyllabic Lexical Neighborhood Tests, were used to examine the influence of visual information on speech perception in children. Further, the influence of top-down processing on speech recognition was examined by evaluating performance on the recognition of words in isolation versus words in sentences. [Work supported by the American Speech-Language-Hearing Foundation, the American Hearing Research Foundation, and the NIDCD, T32 DC00012 to Indiana University.]
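
    As a rough illustration of the two lexical factors the Neighborhood Activation Model emphasizes (word frequency and phonological similarity to other words), the sketch below computes a simplified frequency-weighted identification score. The full model also weights candidates by stimulus-to-phoneme confusion probabilities; the toy lexicon and the one-segment-substitution neighbor rule are assumptions made for illustration.

```python
# Simplified NAM-style score: a target competes with phonologically similar words,
# weighted by frequency of occurrence. Lexicon and neighbor rule are toy choices.
def is_neighbor(word_a: str, word_b: str) -> bool:
    """Treat words of equal length differing in exactly one segment as neighbors."""
    return (len(word_a) == len(word_b)
            and sum(a != b for a, b in zip(word_a, word_b)) == 1)

def nam_score(target: str, lexicon: dict) -> float:
    """Frequency-weighted probability of identifying `target` among its neighbors."""
    neighbor_freq = sum(freq for word, freq in lexicon.items()
                        if word != target and is_neighbor(word, target))
    return lexicon[target] / (lexicon[target] + neighbor_freq)

toy_lexicon = {"cat": 120.0, "bat": 40.0, "cap": 25.0, "cot": 10.0, "dog": 200.0}
print(nam_score("cat", toy_lexicon))   # frequent word in a dense neighborhood
print(nam_score("dog", toy_lexicon))   # no neighbors in this toy lexicon -> 1.0
```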

  14. Talker variability in audio-visual speech perception.

    PubMed

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  15. Depth Cues and Perceived Audiovisual Synchrony of Biological Motion

    PubMed Central

    Silva, Carlos César; Mendonça, Catarina; Mouta, Sandra; Silva, Rosa; Campos, José Creissac; Santos, Jorge

    2013-01-01

    Background Due to their different propagation times, visual and auditory signals from external events arrive at the human sensory receptors with a disparate delay. This delay consistently varies with distance, but, despite such variability, most events are perceived as synchronic. There is, however, contradictory data and claims regarding the existence of compensatory mechanisms for distance in simultaneity judgments. Principal Findings In this paper we have used familiar audiovisual events – a visual walker and footstep sounds – and manipulated the number of depth cues. In a simultaneity judgment task we presented a large range of stimulus onset asynchronies corresponding to distances of up to 35 meters. We found an effect of distance over the simultaneity estimates, with greater distances requiring larger stimulus onset asynchronies, and vision always leading. This effect was stronger when both visual and auditory cues were present but was interestingly not found when depth cues were impoverished. Significance These findings reveal that there should be an internal mechanism to compensate for audiovisual delays, which critically depends on the depth information available. PMID:24244617
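
    For orientation, the physical audio lag implied by the distances used can be computed directly from the speed of sound (roughly 343 m/s at room temperature), since light travel time is negligible by comparison; the snippet below is a back-of-envelope check, not part of the study.

```python
# Back-of-envelope check of the physical audio lag at simulated viewing distances.
SPEED_OF_SOUND_M_S = 343.0

for distance_m in (5, 15, 35):
    lag_ms = distance_m / SPEED_OF_SOUND_M_S * 1000.0
    print(f"{distance_m:>2} m -> sound arrives ~{lag_ms:.0f} ms after light")
# At 35 m the lag is roughly 100 ms, on the order of the asynchronies under study.
```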

  16. Head Tracking of Auditory, Visual, and Audio-Visual Targets

    PubMed Central

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2016-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual “bisensory” stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets. PMID:26778952
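
    As an illustration only (the study's exact metric definitions may differ), the sketch below computes two of the named tracking metrics, RMS position error and velocity gain, from assumed time-aligned head and target azimuth traces; onset latency is omitted for brevity.

```python
# Illustration of tracking metrics; inputs are assumed time-aligned azimuth traces
# in degrees, and the metric definitions are simplified stand-ins.
import numpy as np

def tracking_metrics(t, target_deg, head_deg):
    """Return RMS position error (deg) and velocity gain (slope of head velocity
    regressed on target velocity)."""
    rms_error = np.sqrt(np.mean((head_deg - target_deg) ** 2))
    gain = np.polyfit(np.gradient(target_deg, t), np.gradient(head_deg, t), 1)[0]
    return rms_error, gain

# Toy traces: the target sweeps +/-50 deg sinusoidally; the head undershoots by
# 10% and lags by 100 ms.
t = np.linspace(0.0, 4.0, 401)
target = 50.0 * np.sin(2 * np.pi * 0.25 * t)
head = 45.0 * np.sin(2 * np.pi * 0.25 * (t - 0.1))
rms, gain = tracking_metrics(t, target, head)
print(f"RMS error ~ {rms:.1f} deg, gain ~ {gain:.2f}")
```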

  17. Implications of Self-Deception for Self-Reported Intrinsic and Extrinsic Motivational Dispositions and Actual Learning Performance: A Higher Order Structural Model

    ERIC Educational Resources Information Center

    Hirschfeld, Robert R.; Thomas, Christopher H.; McNatt, D. Brian

    2008-01-01

    The authors explored implications of individuals' self-deception (a trait) for their self-reported intrinsic and extrinsic motivational dispositions and their actual learning performance. In doing so, a higher order structural model was developed and tested in which intrinsic and extrinsic motivational dispositions were underlying factors that…

  18. Implicit Sequence Learning in Dyslexia: A Within-Sequence Comparison of First- and Higher-Order Information

    ERIC Educational Resources Information Center

    Du, Wenchong; Kelly, Steve W.

    2013-01-01

    The present study examines implicit sequence learning in adult dyslexics with a focus on comparing sequence transitions with different statistical complexities. Learning of a 12-item deterministic sequence was assessed in 12 dyslexic and 12 non-dyslexic university students. Both groups showed equivalent standard reaction time increments when the…

  19. How I Came to Understand that My Students Would Need Training Wings in Order to Learn to Fly

    ERIC Educational Resources Information Center

    Corrigan, Paul T.

    2011-01-01

    The author began his first year teaching at an open-enrollment university with the belief that "most students can learn to do intellectual work, if they are only given the opportunity." This belief is inspired by the research on teaching and learning and is rooted in the characteristic idealism of teachers. He had seen the principle borne out in…

  20. A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word-Order Universal

    ERIC Educational Resources Information Center

    Culbertson, Jennifer; Smolensky, Paul

    2012-01-01

    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized…
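
    The article's full model is hierarchical and is not reproduced here; as a loose illustration of the core idea (a prior bias over word-order patterns combined with exposure to mixed input), the sketch below performs a one-level conjugate Beta-Binomial update with invented prior strength and input counts.

```python
# Loose illustration only: a one-level conjugate Beta-Binomial update standing in
# for the much richer hierarchical model described in the article.
def posterior_mean(prior_a: float, prior_b: float, k_pattern: int, n_total: int) -> float:
    """Posterior mean of a Beta(prior_a, prior_b) belief about the probability of
    the biased word-order pattern, after observing k_pattern of n_total utterances."""
    return (prior_a + k_pattern) / (prior_a + prior_b + n_total)

# Prior biased toward one pattern (mean 0.75); the input is a 60/40 mixture.
# The posterior is nudged above the input proportion toward the prior bias.
print(posterior_mean(prior_a=7.5, prior_b=2.5, k_pattern=60, n_total=100))  # ~0.61
```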

  1. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    PubMed

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude relative to the unisensory signals at the lower auditory S/N ratios (higher capacity/efficiency) than at the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
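
    The capacity measure cited (Townsend and Nozawa, 1995) compares the integrated hazard of audiovisual response times with the sum of the unisensory integrated hazards, C(t) = H_AV(t) / (H_A(t) + H_V(t)) with H(t) = -log S(t). The sketch below estimates it from invented RT samples; real analyses use many more trials and smoothing.

```python
# Capacity coefficient sketch: C(t) = H_AV(t) / (H_A(t) + H_V(t)), where
# H(t) = -log S(t) is the integrated hazard and S(t) the empirical survivor
# function of the RTs. RT samples below are invented.
import numpy as np

def integrated_hazard(rts: np.ndarray, t: np.ndarray) -> np.ndarray:
    """H(t) = -log S(t) from the empirical survivor function of an RT sample."""
    survivor = np.array([(rts > ti).mean() for ti in t])
    return -np.log(np.clip(survivor, 1e-6, 1.0))   # clip avoids log(0) in the tail

def capacity(rt_av, rt_a, rt_v, t):
    """C(t) > 1 suggests efficient integration; C(t) < 1 suggests limited capacity."""
    return integrated_hazard(rt_av, t) / (
        integrated_hazard(rt_a, t) + integrated_hazard(rt_v, t))

rng = np.random.default_rng(0)
rt_a = rng.normal(520, 60, 200)    # auditory-only RTs (ms)
rt_v = rng.normal(540, 60, 200)    # visual-only RTs (ms)
rt_av = rng.normal(470, 55, 200)   # audiovisual RTs (ms), faster on average
t = np.arange(450, 626, 25)
print(np.round(capacity(rt_av, rt_a, rt_v, t), 2))
```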

  2. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    PubMed

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing even for non-musical stimuli. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  3. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    PubMed

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  4. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    PubMed

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking to articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift. (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life.

  5. The role of the posterior superior temporal sulcus in audiovisual processing.

    PubMed

    Hocking, Julia; Price, Cathy J

    2008-10-01

    In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

  6. Temporal Processing of Audiovisual Stimuli Is Enhanced in Musicians: Evidence from Magnetoencephalography (MEG)

    PubMed Central

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing even for non-musical stimuli. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events. PMID:24595014

  7. Linguistic experience and audio-visual perception of non-native fricatives.

    PubMed

    Wang, Yue; Behne, Dawn M; Jiang, Haisheng

    2008-09-01

    This study examined the effects of linguistic experience on audio-visual (AV) perception of non-native (L2) speech. Canadian English natives and Mandarin Chinese natives differing in degree of English exposure [long and short length of residence (LOR) in Canada] were presented with English fricatives of three visually distinct places of articulation: interdentals nonexistent in Mandarin and labiodentals and alveolars common in both languages. Stimuli were presented in quiet and in a cafe-noise background in four ways: audio only (A), visual only (V), congruent AV (AVc), and incongruent AV (AVi). Identification results showed that overall performance was better in the AVc than in the A or V condition and better in quiet than in cafe noise. While the Mandarin long LOR group approximated the native English patterns, the short LOR group showed poorer interdental identification, more reliance on visual information, and greater AV-fusion with the AVi materials, indicating the failure of L2 visual speech category formation with the short LOR non-natives and the positive effects of linguistic experience with the long LOR non-natives. These results point to an integrated network in AV speech processing as a function of linguistic background and provide evidence to extend auditory-based L2 speech learning theories to the visual domain.

  8. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  9. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation.

    PubMed

    Lusk, Laina G; Mitchel, Aaron D

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.

  10. Audiovisual speech perception and eye gaze behavior of adults with asperger syndrome.

    PubMed

    Saalasti, Satu; Kätsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-08-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS) by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age-, sex-, and IQ-matched controls. When a voice saying /p/ was presented with a face articulating /k/, the controls predominantly heard /k/. Instead, the AS group heard /k/ and /t/ with almost equal frequency, but with large differences between individuals. There were no differences in gaze direction or unisensory perception between the AS and control participants that could have contributed to the audiovisual differences. We suggest an explanation in terms of weak support from the motor system for audiovisual speech perception in AS.

  11. The development of sensorimotor influences in the audiovisual speech domain: some critical questions.

    PubMed

    Guellaï, Bahia; Streri, Arlette; Yeung, H Henny

    2014-01-01

    Speech researchers have long been interested in how auditory and visual speech signals are integrated, and the recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.

  12. Effects of audio-visual stimulation on the incidence of restraint ulcers on the Wistar rat

    NASA Technical Reports Server (NTRS)

    Martin, M. S.; Martin, F.; Lambert, R.

    1979-01-01

    The role of sensory stimulation in restrained rats was investigated. Both mixed audio-visual and pure sound stimuli, ineffective in themselves, were found to cause a significant increase in the incidence of restraint ulcers in the Wistar rat.

  13. The Evolution of Audio-Visual Education in the USA since 1945.

    ERIC Educational Resources Information Center

    Hitchens, Howard

    1979-01-01

    Explores the development of audiovisual instruction in the United States as an outgrowth of the industrial revolution and the development of more sophisticated communications technology in the mid twentieth century. (RAO)

  14. I can see, hear, and smell your fear: comparing olfactory and audiovisual media in fear communication.

    PubMed

    de Groot, Jasper H B; Semin, Gün R; Smeets, Monique A M

    2014-04-01

    Recent evidence suggests that humans can become fearful after exposure to olfactory fear signals, yet these studies have reported the effects of fear chemosignals without examining emotion-relevant input from traditional communication modalities (i.e., vision, audition). The question that we pursued here was therefore: How significant is an olfactory fear signal in the broader context of audiovisual input that either confirms or contradicts olfactory information? To test this, we manipulated olfactory (fear, no fear) and audiovisual (fear, no fear) information and demonstrated that olfactory fear signals were as potent as audiovisual fear signals in eliciting a fearful facial expression. Irrespective of confirmatory or contradictory audiovisual information, olfactory fear signals produced by senders induced fear in receivers outside of conscious access. These findings run counter to traditional views that emotions are communicated exclusively via visual and linguistic channels.

  15. Audiovisual emotional processing and neurocognitive functioning in patients with depression

    PubMed Central

    Doose-Grünefeld, Sophie; Eickhoff, Simon B.; Müller, Veronika I.

    2015-01-01

    Alterations in the processing of emotional stimuli (e.g., facial expressions, prosody, music) have repeatedly been reported in patients with major depression. Such impairments may result from the likewise prevalent executive deficits in these patients. However, studies investigating this relationship are rare. Moreover, most studies to date have only assessed impairments in unimodal emotional processing, whereas in real life, emotions are primarily conveyed through more than just one sensory channel. The current study therefore aimed to investigate multi-modal emotional processing in patients with depression and to assess the relationship between emotional and neurocognitive impairments. Forty-one patients suffering from major depression and 41 never-depressed healthy controls participated in an audiovisual (faces-sounds) emotional integration paradigm as well as a neurocognitive test battery. Our results showed that depressed patients were specifically impaired in the processing of positive auditory stimuli as they rated faces significantly more fearful when presented with happy than with neutral sounds. Such an effect was absent in controls. Findings in emotional processing in patients did not correlate with Beck Depression Inventory score. Furthermore, neurocognitive findings revealed significant group differences for two of the tests. The effects found in audiovisual emotional processing, however, did not correlate with performance in the neurocognitive tests. In summary, our results underline the diversity of impairments going along with depression and indicate that deficits found for unimodal emotional processing cannot trivially be generalized to deficits in a multi-modal setting. The mechanisms of impairments therefore might be far more complex than previously thought. Our findings furthermore contradict the assumption that emotional processing deficits in major depression are associated with impaired attention or inhibitory functioning. PMID

  16. The audio-visual revolution: do we really need it?

    PubMed

    Townsend, I

    1979-03-01

    In the United Kingdom, the audio-visual revolution has steadily gained converts in the nursing profession. Nurse tutor courses now contain information on the techniques of educational technology and schools of nursing increasingly own (or wish to own) many of the sophisticated electronic aids to teaching that abound. This is taking place at a time of unprecedented crisis and change. Funds have been or are being made available to buy audio-visual equipment. But its purchase and use rely on satisfying personal whim, prejudice or educational fashion, not on considerations of educational efficiency. In the rush of enthusiasm, the overwhelmed teacher (everywhere; the phenomenon is not confined to nursing) forgets to ask the searching, critical questions: 'Why should we use this aid?', 'How effective is it?', 'And, at what?'. Influential writers in this profession have repeatedly called for a more responsible attitude towards published research work of other fields. In an attempt to discover what is known about the answers to this group of questions, an eclectic look at media research is taken and the widespread dissatisfaction existing amongst international educational technologists is noted. The paper isolates out of the literature several causative factors responsible for the present state of affairs. Findings from the field of educational television are cited as representative of an aid which has had a considerable amount of time and research directed at it. The concluding part of the paper shows the decisions to be taken in using or not using educational media as being more complicated than might at first appear.

  17. Higher-order conditioning of taste-odor learning in rats: Evidence for the association between emotional aspects of gustatory information and olfactory information.

    PubMed

    Onuma, Takuya; Sakai, Nobuyuki

    2016-10-01

    Previous studies have shown that rats prefer an odor paired with saccharin solution to an odor paired with quinine solution (taste-odor learning). However, it remains unclear whether the odors are associated with the emotional (i.e., positive and/or negative hedonics) or qualitative (i.e., sweetness and/or bitterness) aspects of gustatory information. This study aimed to examine this question using higher-order conditioning paradigms: second-order conditioning (SOC) and sensory preconditioning (SPC). Adult Wistar rats were divided into SOC and SPC groups. Food flavors, purchased from a Japanese market, such as melon (0.05%), lemon (0.1%), vanilla (0.1%), and almond (0.1%), were randomly used as odors A, B, C, and D for each rat. The SOC group was exposed to 0.005 M saccharin solutions with odor A and 0.02 M quinine solutions with odor C in the first 5 days of learning. Additionally, they were exposed to water with a mixture of odors A and B, and water with a mixture of odors C and D in the next 5 days of learning. The order of these two learning sessions was reversed in the SPC group. We hypothesized that if odors were associated with the emotional aspects of gustatory information, the SOC group would prefer odor B to odor D, whereas if they were associated with the qualitative aspects, the SPC group would show this preference. Our results showed that the SOC group preferred odor B to odor D, whereas the SPC group did not show any such preference. This suggests that odors may be primarily associated with emotion evoked by gustation in taste-odor learning. PMID:27342429

  18. Social Studies: K-9 Supplementary Learning Resources.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg. Curriculum Development Branch.

    This annotated bibliography contains approximately 350 citations of learning resources for the series of K-9 guides designed for the social studies curriculum in Manitoba, Canada (SO 014 225-231). Intended for teachers and students, the bibliography includes listings of guides, manuals, books, booklets, filmstrips, audiovisual kits, cassettes,…

  19. Computers as a Language Learning Tool.

    ERIC Educational Resources Information Center

    Ruschoff, Bernd

    1984-01-01

    Describes a computer-assisted language learning project at the University of Wuppertal (West Germany). It is hoped that teachers can overcome two handicaps of the past, lack of teacher awareness of current audio-visual technical aids and unsophisticated computer hardware, by getting the opportunity to familiarize…

  20. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    PubMed

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has been recently demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations by exposing observers to two utterances with opposing temporal relationships spoken by one single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimuli pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structures (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs physically differ, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.

  1. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in audiovisual integration.

  2. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    PubMed

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
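
    As a toy comparison of two of the metric families named here (phase-based coherence versus amplitude/power correlation), the sketch below computes beta-band envelope correlation and magnitude-squared coherence for a pair of simulated sensor signals; the signals, their coupling, and the band edges are invented, and the study's MEG pipeline is not reproduced.

```python
# Toy comparison of beta-band amplitude-envelope correlation vs. spectral coherence
# for two simulated, partially coupled sensor signals (illustrative only).
import numpy as np
from scipy.signal import butter, coherence, filtfilt, hilbert

fs = 600.0                                       # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

shared = rng.standard_normal(t.size)             # shared drive creates coupling
x = shared + 0.8 * rng.standard_normal(t.size)   # "visual" sensor
y = shared + 0.8 * rng.standard_normal(t.size)   # "auditory" sensor

# Beta-band (13-30 Hz) amplitude envelopes via band-pass filtering + Hilbert transform.
b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
env_x = np.abs(hilbert(filtfilt(b, a, x)))
env_y = np.abs(hilbert(filtfilt(b, a, y)))
power_corr = np.corrcoef(env_x, env_y)[0, 1]

# Magnitude-squared coherence averaged across the same band.
freqs, coh = coherence(x, y, fs=fs, nperseg=1024)
beta = (freqs >= 13) & (freqs <= 30)
print(f"beta power correlation ~ {power_corr:.2f}, beta coherence ~ {coh[beta].mean():.2f}")
```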

  3. Children with a history of SLI show reduced sensitivity to audiovisual temporal asynchrony: An ERP Study

    PubMed Central

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose We examined whether school-age children with a history of SLI (H-SLI), their typically developing (TD) peers, and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method 15 H-SLI children, 15 TD children, and 15 adults judged whether a flashed explosion-shaped figure and a 2 kHz pure tone occurred simultaneously. The stimuli were presented at 0, 100, 200, 300, 400, and 500 ms temporal offsets. This task was combined with EEG recordings. Results H-SLI children were profoundly less sensitive to temporal separations between auditory and visual modalities compared to their TD peers. Those H-SLI children who performed better at simultaneity judgment also had higher language aptitude. TD children were less accurate than adults, revealing a remarkably prolonged developmental course of the audiovisual temporal discrimination. Analysis of early ERP components suggested that poor sensory encoding was not a key factor in H-SLI children’s reduced sensitivity to audiovisual asynchrony. Conclusions Audiovisual temporal discrimination is impaired in H-SLI children and is still immature during mid-childhood in TD children. The present findings highlight the need for further evaluation of the role of atypical audiovisual processing in the development of SLI. PMID:24686922

  4. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics

    PubMed Central

    Carver, Frederick W.; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-01-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships. PMID:25599264

  5. The Use of System Thinking Concepts in Order to Assure Continuous Improvement of Project Based Learning Courses

    ERIC Educational Resources Information Center

    Arantes do Amaral, Joao Alberto; Gonçalves, Paulo

    2015-01-01

    This case study describes a continuous improvement experience, conducted from 2002 to 2014 in Sao Paulo, Brazil, within 47 Project-Based Learning MBA courses, involving approximately 1,400 students. The experience report will focus on four themes: (1) understanding the main dynamics present in MBA courses; (2) planning a systemic intervention in…

  6. Transfer of Short-Term Motor Learning across the Lower Limbs as a Function of Task Conception and Practice Order

    ERIC Educational Resources Information Center

    Stockel, Tino; Wang, Jinsung

    2011-01-01

    Interlimb transfer of motor learning, indicating an improvement in performance with one limb following training with the other, often occurs asymmetrically (i.e., from non-dominant to dominant limb or vice versa, but not both). In the present study, we examined whether interlimb transfer of the same motor task could occur asymmetrically and in…

  7. Learning.

    ERIC Educational Resources Information Center

    Glaser, Robert

    A report on learning psychology and its relationship to the study of school learning emphasizes the increasing interaction between theorists and educational practitioners, particularly in attempting to learn which variables influence the instructional process and to find an appropriate methodology to measure and evaluate learning. "Learning…

  8. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    PubMed

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200 ms over a wide central area, the second at 280-320 ms over the fronto-central area, and a third at 380-440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing.
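
    As a minimal illustration of extracting mean amplitude in the reported time windows (140-200 ms, 280-320 ms, 380-440 ms), the sketch below averages an assumed epochs array over each window; the array shape, sampling rate, and baseline interval are hypothetical and not the paper's pipeline.

```python
# Mean ERP amplitude per time window from an assumed trials x channels x samples array.
import numpy as np

def window_mean(epochs: np.ndarray, fs: float, t0_ms: float,
                start_ms: float, stop_ms: float) -> np.ndarray:
    """Mean amplitude per trial/channel in [start_ms, stop_ms], where t0_ms is the
    time of the first sample relative to stimulus onset (e.g., -100 for a 100 ms baseline)."""
    start = int(round((start_ms - t0_ms) * fs / 1000.0))
    stop = int(round((stop_ms - t0_ms) * fs / 1000.0))
    return epochs[..., start:stop].mean(axis=-1)

# Toy epochs: 40 trials x 32 channels x 600 samples at 500 Hz, starting at -100 ms.
epochs = np.random.randn(40, 32, 600)
for start, stop in [(140, 200), (280, 320), (380, 440)]:
    amp = window_mean(epochs, fs=500.0, t0_ms=-100.0, start_ms=start, stop_ms=stop)
    print((start, stop), round(float(amp.mean()), 3))
```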

  9. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    PubMed

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200 ms over a wide central area, the second at 280-320 ms over the fronto-central area, and a third at 380-440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing. PMID:27392755

  10. Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension.

    PubMed

    Lee, HweeLing; Noppeney, Uta

    2011-08-01

    Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the role of bottom-up physical (spectrotemporal structure) input and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals is currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior-anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): Bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this "ventral" processing stream, a "dorsal" circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and spectrotemporal structure in a dorsal fronto-temporal circuitry.

  11. Can personality traits predict pathological responses to audiovisual stimulation?

    PubMed

    Yambe, Tomoyuki; Yoshizawa, Makoto; Fukudo, Shin; Fukuda, Hiroshi; Kawashima, Ryuta; Shizuka, Kazuhiko; Nanka, Shunsuke; Tanaka, Akira; Abe, Ken-ichi; Shouji, Tomonori; Hongo, Michio; Tabayashi, Kouichi; Nitta, Shin-ichi

    2003-10-01

    pathophysiological reactions to audiovisual stimulation. As for photosensitive epilepsy, it was reported to account for only 5-10% of all patients. Therefore, in 90% or more of the patients who showed a morbid response, the cause could not be determined. The results of this study suggest that autonomic function was connected to the mental tendencies of the subjects. By examining such tendencies, it is expected that subjects who show a morbid reaction to audiovisual stimulation can be screened beforehand. PMID:14572681

  12. Can personality traits predict pathological responses to audiovisual stimulation?

    PubMed

    Yambe, Tomoyuki; Yoshizawa, Makoto; Fukudo, Shin; Fukuda, Hiroshi; Kawashima, Ryuta; Shizuka, Kazuhiko; Nanka, Shunsuke; Tanaka, Akira; Abe, Ken-ichi; Shouji, Tomonori; Hongo, Michio; Tabayashi, Kouichi; Nitta, Shin-ichi

    2003-10-01

    pathophysiological reactions to audiovisual stimulation. As for photosensitive epilepsy, it was reported to account for only 5-10% of all patients. Therefore, in 90% or more of the patients who showed a morbid response, the cause could not be determined. The results of this study suggest that autonomic function was connected to the mental tendencies of the subjects. By examining such tendencies, it is expected that subjects who show a morbid reaction to audiovisual stimulation can be screened beforehand.

  13. The Audio-Visual Services in Fifteen African Countries. Comparative Study on the Administration of Audio-Visual Services in Advanced and Developing Countries. Part Four. First Edition.

    ERIC Educational Resources Information Center

    Jongbloed, Harry J. L.

    As the fourth part of a comparative study on the administration of audiovisual services in advanced and developing countries, this UNESCO-funded study reports on the African countries of Cameroun, Republic of Central Africa, Dahomey, Gabon, Ghana, Kenya, Libya, Mali, Nigeria, Rwanda, Senegal, Swaziland, Tunisia, Upper Volta and Zambia. Information…

  14. Discrete neural substrates underlie complementary audiovisual speech integration processes.

    PubMed

    Stevenson, Ryan A; VanDerKlok, Ross M; Pisoni, David B; James, Thomas W

    2011-04-01

    The ability to combine information from multiple sensory modalities into a single, unified percept is a key element in an organism's ability to interact with the external world. This process of perceptual fusion, the binding of multiple sensory inputs into a perceptual gestalt, is highly dependent on the temporal synchrony of the sensory inputs. Using fMRI, we identified two anatomically distinct brain regions in the superior temporal cortex, one involved in processing temporal synchrony and one in processing perceptual fusion of audiovisual speech. This dissociation suggests that the superior temporal cortex should be considered a "neuronal hub" composed of multiple discrete subregions that underlie an array of complementary low- and high-level multisensory integration processes. In this role, abnormalities in the structure and function of superior temporal cortex provide a possible common etiology for temporal-processing and perceptual-fusion deficits seen in a number of clinical populations, including individuals with autism spectrum disorder, dyslexia, and schizophrenia. PMID:21195198

  15. Audiovisual speech perception development at varying levels of perceptual processing.

    PubMed

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  16. Difference between diotic and dichotic presentations on audiovisual synchronization

    NASA Astrophysics Data System (ADS)

    Nakamura, Rie; Aikawa, Kiyoaki

    2005-04-01

    Significant differences were found between diotic and dichotic presentations of auditory stimuli on audiovisual synchronization. Two types of moving sphere were used as visual stimuli: (a) a sphere that moved straight from left to right at a constant velocity, and (b) a sphere that moved up and to the right and then turned down and to the right at the midpoint. Auditory stimuli included (1) a continuous pure tone (1000 Hz), (2) a sequence of two short tones of 30 ms (1000 Hz), (3) a sweep tone (500 Hz --> 1500 Hz), and (4) a frequency-modulated tone (500 Hz --> 1500 Hz --> 500 Hz). The duration of each stimulus was 240 ms. Visual stimuli were presented on a liquid-crystal display. Auditory stimuli were presented through headphones. The time difference between auditory and visual stimuli was varied between -360 ms and +360 ms. The sound source moved linearly from left to right in the case of dichotic presentation. Subjective tests, using a forced choice of synchronized or not, revealed significant differences between diotic and dichotic presentations when auditory stimulus (1) or (4) was presented with visual stimulus (b). No significance was obtained for other combinations of auditory and visual stimuli. [Work supported by NTT.]

  17. Audio-visual perception system for a humanoid robotic head.

    PubMed

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may encounter difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, there has been little evaluation of the benefits of audio-visual attention mechanisms, compared with audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared while taking into account the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
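
    The Bayesian fusion step described above can be illustrated with a minimal sketch: assuming hypothetical unimodal likelihoods over a discretized set of azimuth angles (the array names and values below are invented for illustration, not the paper's implementation), audio and visual evidence are combined multiplicatively with a prior and normalized to a posterior.

```python
import numpy as np

# Hypothetical azimuth grid (degrees) and unimodal likelihoods; in a real
# system these would come from sound-source localization and face detection.
azimuths = np.linspace(-90, 90, 181)
p_audio  = np.exp(-0.5 * ((azimuths - 20) / 15.0) ** 2)   # broad audio estimate
p_visual = np.exp(-0.5 * ((azimuths - 25) / 5.0) ** 2)    # sharper visual estimate
prior    = np.ones_like(azimuths)                          # flat prior over azimuth

# Bayesian fusion: multiply prior and likelihoods, then normalize.
posterior = prior * p_audio * p_visual
posterior /= posterior.sum()

speaker_azimuth = azimuths[np.argmax(posterior)]
print(f"Fused speaker estimate: {speaker_azimuth:.1f} deg")
```

    The fused posterior is sharper than either unimodal likelihood alone, which is the practical benefit such a system aims to exploit.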

  18. Impact of language on functional connectivity for audiovisual speech integration.

    PubMed

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-Aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and the Heschl's gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  19. Audiovisual speech perception development at varying levels of perceptual processing.

    PubMed

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  20. Impact of language on functional connectivity for audiovisual speech integration

    PubMed Central

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and the Heschl’s gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  1. The influence of task on gaze during audiovisual speech perception

    NASA Astrophysics Data System (ADS)

    Buchan, Julie; Paré, Martin; Yurick, Micheal; Munhall, Kevin

    2001-05-01

    In natural conversation, visual and auditory information about speech not only provide linguistic information but also provide information about the identity and the emotional state of the speaker. Thus, listeners must process a wide range of information in parallel to understand the full meaning in a message. In this series of studies, we examined how different types of visual information conveyed by a speaker's face are processed by measuring the gaze patterns exhibited by subjects watching audiovisual recordings of spoken sentences. In three experiments, subjects were asked to judge the emotion and the identity of the speaker, and to report the words that they heard under different auditory conditions. As in previous studies, eye and mouth regions dominated the distribution of the gaze fixations. It was hypothesized that the eyes would attract more fixations for more social judgment tasks, rather than tasks which rely more on verbal comprehension. Our results support this hypothesis. In addition, the location of gaze on the face did not influence the accuracy of the perception of speech in noise.

  2. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  3. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection.

    PubMed

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-06-30

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC.
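
    The discriminability between target and nontarget brain responses that this kind of ERP-based BCI exploits can be illustrated with a generic classification sketch; the simulated epochs, the "P300-like" deflection, and the LDA classifier below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated single-trial epochs: 200 trials x 32 channels x 100 time samples.
# Trials attending the target stimulus get a small additive "P300-like" deflection.
n_trials, n_channels, n_samples = 200, 32, 100
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)      # 1 = attended (target) stimulus
epochs[labels == 1, :, 60:80] += 0.3            # hypothetical evoked response

# Flatten each epoch into a feature vector and estimate target/nontarget accuracy.
X = epochs.reshape(n_trials, -1)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"Cross-validated target/nontarget accuracy: {acc:.2f}")
```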

  4. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    PubMed

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  5. Neurofunctional underpinnings of audiovisual emotion processing in teens with autism spectrum disorders.

    PubMed

    Doyle-Thomas, Krissy A R; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B C

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system.

  6. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection.

    PubMed

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  7. Bibliographic control of audiovisuals: analysis of a cataloging project using OCLC.

    PubMed

    Curtis, J A; Davison, F M

    1985-04-01

    The staff of the Quillen-Dishner College of Medicine Library cataloged 702 audiovisual titles between July 1, 1982, and June 30, 1983, using the OCLC database. This paper discusses the library's audiovisual collection and describes the method and scope of a study conducted during this project, the cataloging standards and conventions adopted, the assignment and use of NLM classification, the provision of summaries for programs, and the amount of staff time expended in cataloging typical items. An analysis of the use of OCLC for this project resulted in the following findings: the rate of successful searches for audiovisual copy was 82.4%; the error rate for records used was 41.9%; modifications were required in every record used; the Library of Congress and seven member institutions provided 62.8% of the records used. It was concluded that the effort to establish bibliographic control of audiovisuals is not widespread and that expanded and improved audiovisual cataloging by the Library of Congress and the National Library of Medicine would substantially contribute to that goal. PMID:2581645

  8. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression belongs to the well-known approaches for increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than conventional coding, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have an associated audio track, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder, producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the amount of gain in compression efficiency is analyzed.
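
    A minimal sketch of the core idea described here: correlate the temporal dynamics of the audio signal with the motion energy of candidate image regions and pick the region that tracks the audio best as the audiovisual focus of attention. The block grid, frame-difference motion measure, and toy inputs are assumptions for illustration rather than the paper's algorithm.

```python
import numpy as np

def audiovisual_foa(frames, audio_energy, grid=(4, 4)):
    """Pick the image block whose motion energy correlates best with audio energy.

    frames: array (T, H, W) of grayscale frames; audio_energy: array (T,) of
    per-frame audio envelope values. Returns (row, col) of the selected block.
    """
    T, H, W = frames.shape
    gh, gw = H // grid[0], W // grid[1]
    motion = np.abs(np.diff(frames, axis=0))          # per-pixel frame difference
    best, best_corr = (0, 0), -np.inf
    for r in range(grid[0]):
        for c in range(grid[1]):
            block = motion[:, r*gh:(r+1)*gh, c*gw:(c+1)*gw].mean(axis=(1, 2))
            corr = np.corrcoef(block, audio_energy[1:])[0, 1]
            if corr > best_corr:
                best, best_corr = (r, c), corr
    return best

# Toy usage with random data standing in for decoded frames and an audio envelope.
frames = np.random.rand(100, 64, 64)
audio = np.random.rand(100)
print(audiovisual_foa(frames, audio))
```

    In a foveated coding setup, the selected block would then be assigned a lower quantization parameter than the remaining regions.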

  9. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection

    PubMed Central

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  10. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    PubMed

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

    In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in, among other brain regions, bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.

  11. Learning about the Unfairgrounds: A 4th-Grade Teacher Introduces Her Students to Executive Order 9066

    ERIC Educational Resources Information Center

    Baydo-Reed, Katie

    2010-01-01

    Following the bombing of Pearl Harbor on Dec. 7, 1941, U.S. officials issued a series of proclamations that violated the civil and human rights of the vast majority of Japanese Americans in the United States--ostensibly to protect the nation from further Japanese aggression. The proclamations culminated in Executive Order 9066, which gave the…

  12. 36 CFR 1256.96 - What provisions apply to the transfer of USIA audiovisual records to the National Archives of the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...? The provisions of 44 U.S.C. 2107 and 36 CFR part 1228 apply to the transfer of USIA audiovisual... transfer of USIA audiovisual records to the National Archives of the United States? 1256.96 Section 1256.96... Information Agency Audiovisual Materials in the National Archives of the United States § 1256.96...

  13. 36 CFR 1256.98 - Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... obtain copies of USIA audiovisual records transferred to the National Archives of the United States? 1256... United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.98 Can I get access to and obtain copies of USIA audiovisual records transferred to the...

  14. Tones and numbers: a combined EEG-MEG study on the effects of musical expertise in magnitude comparisons of audiovisual stimuli.

    PubMed

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Foroglou, Nikolaos; Bamidis, Panagiotis; Pantev, Christo

    2014-11-01

    This study investigated the cortical responses underlying magnitude comparisons of multisensory stimuli and examined the effect that musical expertise has on this process. The comparative judgments were based on a newly learned rule binding the auditory and visual stimuli within the context of magnitude comparisons: "the higher the pitch of the tone, the larger the number presented." The cortical responses were measured by simultaneous MEG/EEG recordings, and a combined source analysis with individualized realistic head models was performed. Musical expertise effects were investigated by comparing musicians to non-musicians. Congruent audiovisual stimuli, corresponding to the newly learned rule, elicited activity in frontotemporal and occipital areas. In contrast, incongruent stimuli activated temporal and parietal regions. Musicians, when compared with non-musicians, showed increased differences between congruent and incongruent stimuli in a prefrontal region, thereby indicating that musical expertise may affect multisensory comparative judgments within a generalized representation of analog magnitude.

  15. Tones and numbers: a combined EEG-MEG study on the effects of musical expertise in magnitude comparisons of audiovisual stimuli.

    PubMed

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Foroglou, Nikolaos; Bamidis, Panagiotis; Pantev, Christo

    2014-11-01

    This study investigated the cortical responses underlying magnitude comparisons of multisensory stimuli and examined the effect that musical expertise has on this process. The comparative judgments were based on a newly learned rule binding the auditory and visual stimuli within the context of magnitude comparisons: "the higher the pitch of the tone, the larger the number presented." The cortical responses were measured by simultaneous MEG/EEG recordings, and a combined source analysis with individualized realistic head models was performed. Musical expertise effects were investigated by comparing musicians to non-musicians. Congruent audiovisual stimuli, corresponding to the newly learned rule, elicited activity in frontotemporal and occipital areas. In contrast, incongruent stimuli activated temporal and parietal regions. Musicians, when compared with non-musicians, showed increased differences between congruent and incongruent stimuli in a prefrontal region, thereby indicating that musical expertise may affect multisensory comparative judgments within a generalized representation of analog magnitude. PMID:24916460

  16. A Comparison of the Development of Audiovisual Integration in Children with Autism Spectrum Disorders and Typically Developing Children

    ERIC Educational Resources Information Center

    Taylor, Natalie; Isaac, Claire; Milne, Elizabeth

    2010-01-01

    This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…

  17. Educational Practices in an Age of Information Innovation: Audiovisual Education Prize Winning Papers in 1998. AVE in Japan No. 38.

    ERIC Educational Resources Information Center

    Japan Audiovisual Information Center for International Service, Tokyo.

    This booklet contains two papers that won the 1998 Audiovisual Education Prize of the Japan Audio-Visual Education Association. "Nurturing the Ability To Live in an Advanced Information Network Society: Making the Most Effective Use of Networking" reports on a study of 7th grade students in Okazaki City (Japan). The focus was on an 8-month…

  18. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2005-03-01

    Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding and gives access to audiovisual resources in streaming mode. PMID:15694622
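
    A rough sketch of the kind of lightweight Dublin Core-style record the paper argues for, enriched with MeSH descriptors for conceptual navigation; the field values and the helper function are invented examples, not the authors' actual schema.

```python
# Hypothetical Dublin Core-style record for one audiovisual resource,
# with MeSH terms used as subject descriptors for conceptual navigation.
record = {
    "dc:title": "Laparoscopic cholecystectomy - teaching video",
    "dc:creator": "Example University Hospital",
    "dc:date": "2004-11-15",
    "dc:format": "video/mp4",
    "dc:language": "fr",
    "dc:subject": ["Cholecystectomy, Laparoscopic", "Education, Medical"],  # MeSH
    "dc:identifier": "https://example.org/videos/42",
    "dc:description": "Recorded procedure with narrated key steps.",
}

# A minimal subject search across a collection of such records.
def find_by_mesh(records, term):
    return [r for r in records if term in r["dc:subject"]]

print([r["dc:title"] for r in find_by_mesh([record], "Cholecystectomy, Laparoscopic")])
```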

  19. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2003-01-01

    Digitalization of audio-visual resources, combined with network capabilities, offers many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast and efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding of and access to audio-visual resources in streaming mode. PMID:14664072

  20. The development of audio-visual materials to prepare patients for medical procedures: an oncology application.

    PubMed

    Carey, M; Schofield, P; Jefford, M; Krishnasamy, M; Aranda, S

    2007-09-01

    This paper describes a systematic process for the development of educational audio-visual materials that are designed to prepare patients for potentially threatening procedures. Literature relating to the preparation of patients for potentially threatening medical procedures, psychological theory, theory of diffusion of innovations and patient information was examined. Four key principles were identified as being important: (1) stakeholder consultation, (2) provision of information to prepare patients for the medical procedure, (3) evidence-based content, and (4) promotion of patient confidence. These principles are described along with an example of the development of an audio-visual resource to prepare patients for chemotherapy treatment. Using this example, practical strategies for the application of each of the principles are described. The principles and strategies described may provide a practical, evidence-based guide to the development of other types of patient audio-visual materials.

  1. Early and late beta-band power reflect audiovisual perception in the McGurk illusion

    PubMed Central

    Senkowski, Daniel; Keil, Julian

    2015-01-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13–30 Hz) at short (0–500 ms) and long (500–800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in the McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. PMID:25568160
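
    Beta-band power of the sort analyzed here is commonly computed by band-pass filtering the EEG and taking the squared analytic-signal envelope; the sketch below shows that generic computation with placeholder parameters (sampling rate, window lengths), not the study's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_band_power(eeg, fs=500.0, band=(13.0, 30.0)):
    """Return the instantaneous beta-band power envelope of a 1-D EEG trace."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, eeg)
    return np.abs(hilbert(filtered)) ** 2

# Toy usage: relative power change (e.g., post-stimulus suppression) vs. a baseline.
fs = 500.0
signal = np.random.randn(int(2.0 * fs))          # 2 s of simulated EEG
power = beta_band_power(signal, fs)
baseline = power[: int(0.5 * fs)].mean()
post = power[int(0.5 * fs):].mean()
print(f"Relative beta change: {(post - baseline) / baseline:.2%}")
```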

  2. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    PubMed

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in the McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept.

  3. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    PubMed

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, even though the auditory stimuli were task-irrelevant. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with a fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
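
    Audiovisual interactions in ERPs are conventionally assessed by comparing the bimodal response against the sum of the unimodal responses (AV versus A + V); the sketch below illustrates that standard additive criterion in general form, with simulated ERP arrays, and is not necessarily the exact analysis used in this study.

```python
import numpy as np

# Simulated trial-averaged ERPs (channels x time samples) for each condition.
rng = np.random.default_rng(1)
erp_av = rng.normal(size=(64, 300))      # audiovisual condition
erp_a  = rng.normal(size=(64, 300))      # auditory-only condition
erp_v  = rng.normal(size=(64, 300))      # visual-only condition

# Additive criterion: a nonzero AV - (A + V) difference indicates interaction.
interaction = erp_av - (erp_a + erp_v)

# Inspect the difference in a latency window of interest, e.g. 140-200 ms
# assuming a 1 kHz sampling rate (indices are illustrative, not the study's).
window = interaction[:, 140:200]
print("Mean interaction amplitude in window:", window.mean())
```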

  4. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
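
    A minimal sketch of the two measures described, the within-class reproducibility index (mean pairwise correlation among patterns of one category) and between-class decoding accuracy (here with a simple correlation-based nearest-centroid classifier); the pattern matrices are simulated placeholders and the classifier choice is an assumption, not the study's decoder.

```python
import numpy as np

rng = np.random.default_rng(2)

def reproducibility_index(patterns):
    """Mean pairwise correlation among patterns (rows) of one category."""
    corr = np.corrcoef(patterns)
    return corr[np.triu_indices_from(corr, k=1)].mean()

def nearest_centroid_decode(train_a, train_b, test, labels):
    """Classify test patterns by correlation with each category's mean pattern."""
    centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)
    predicted = [int(np.corrcoef(t, centroid_b)[0, 1] >
                     np.corrcoef(t, centroid_a)[0, 1]) for t in test]
    return np.mean(np.array(predicted) == labels)

# 20 simulated patterns (e.g., voxel vectors) per category.
cat_a = rng.normal(size=(20, 500)) + 0.2
cat_b = rng.normal(size=(20, 500)) - 0.2
print("Reproducibility (category A):", reproducibility_index(cat_a))

test = np.vstack([cat_a[10:], cat_b[10:]])
labels = np.array([0] * 10 + [1] * 10)
print("Decoding accuracy:", nearest_centroid_decode(cat_a[:10], cat_b[:10], test, labels))
```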

  5. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    PubMed

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, even though the auditory stimuli were task-irrelevant. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with a fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256

  6. Disentangling unisensory and multisensory components in audiovisual integration using a novel multifrequency fMRI spectral analysis.

    PubMed

    Hertz, Uri; Amedi, Amir

    2010-08-15

    Events in the world are mediated through multiple sensory inputs which are processed separately in specific unisensory areas according to the division-of-labor principle, and then need to be integrated to create a unified percept. How this is done is still not clear. For instance, recent evidence showed crossmodal activation in primary areas. We developed a novel approach to study multisensory integration using multifrequency spectral analysis to investigate the processing of audio and visual streams in a multisensory context. Auditory and visual stimuli were delivered in the same experimental condition, each in different presentation frequencies, and thus could be detected by applying Fourier spectral analysis in their different presentation frequencies. The cochleotopic and retinotopic organization of primary auditory and visual areas were found to remain intact in spite of the multisensory context. Auditory responses were also found in the Precuneus, suggesting that it might be a new auditory area responsive to pure tone stimuli, and serving as one end of a novel sensory preference gradient stretching across POG to the calcarine sulcus. Additional audiovisual areal convergence was detected both in areas in the middle of sensory preference gradients, and in primary auditory areas. Interestingly, the in/out synchronization rate of the auditory and visual streams yielded a third interaction frequency, which could be analyzed independently to reveal higher-order audiovisual interaction responses. These results were detected in one compact and natural multisensory experimental condition, which has several advantages over previous approaches. The method can be further implemented to study any type of interaction, within and across sensory modalities.
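
    The multifrequency logic can be illustrated with a generic frequency-tagging sketch: when the auditory and visual streams are presented at different rates, the Fourier amplitude of a voxel time course at each tagging frequency, and at an interaction frequency, indexes the corresponding response. The sampling rate, presentation frequencies, and time series below are placeholders, not the study's parameters.

```python
import numpy as np

fs = 0.5                        # sampling rate in Hz (TR = 2 s), placeholder value
t = np.arange(0, 600, 1 / fs)   # 10-minute simulated voxel time course

f_aud, f_vis = 0.05, 0.08       # hypothetical auditory/visual presentation rates
f_int = abs(f_aud - f_vis)      # e.g., an interaction frequency from in/out synchrony

# Simulated voxel responding to both streams plus noise.
ts = (0.8 * np.sin(2 * np.pi * f_aud * t) +
      0.5 * np.sin(2 * np.pi * f_vis * t) +
      np.random.randn(t.size) * 0.3)

spectrum = np.abs(np.fft.rfft(ts)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for name, f in [("auditory", f_aud), ("visual", f_vis), ("interaction", f_int)]:
    idx = np.argmin(np.abs(freqs - f))
    print(f"{name} amplitude at {freqs[idx]:.3f} Hz: {spectrum[idx]:.3f}")
```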

  7. The Development of Audio-Visual Integration for Temporal Judgements.

    PubMed

    Adams, Wendy J

    2016-04-01

    Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age. Younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4-11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps and vice versa. Furthermore, for all ages, the relative strength of these interactions was predicted by the relative reliabilities of the two modalities, in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality-they fully combined vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions. In contrast, switching between modalities reduces reliability. Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate, but followed a sub-optimal switching strategy, responding according to either visual or auditory
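
    The optimal (maximum-likelihood) integration scheme this task is designed to test can be stated directly: each cue is weighted by its relative reliability (inverse variance), and the combined estimate has lower variance than either cue alone. A minimal numerical sketch with invented variances follows.

```python
# Maximum-likelihood (reliability-weighted) combination of two cues.
def combine(est_a, var_a, est_v, var_v):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # auditory weight
    w_v = 1 - w_a                                  # visual weight
    combined = w_a * est_a + w_v * est_v
    combined_var = 1 / (1 / var_a + 1 / var_v)     # always <= min(var_a, var_v)
    return combined, combined_var

# Example: a noisier auditory count (variance 4) and a sharper visual count (variance 1).
est, var = combine(est_a=5.0, var_a=4.0, est_v=4.0, var_v=1.0)
print(f"Combined estimate: {est:.2f}, combined variance: {var:.2f}")
```

    Switching between modalities, by contrast, inherits the variance of whichever single cue is used on a given trial, which is why model comparison can separate the two strategies.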

  8. Retinotopic effects during spatial audio-visual integration.

    PubMed

    Meienbrock, A; Naumer, M J; Doehrmann, O; Singer, W; Muckli, L

    2007-02-01

    The successful integration of visual and auditory stimuli requires information about whether visual and auditory signals originate from corresponding places in the external world. Here we report crossmodal effects of spatially congruent and incongruent audio-visual (AV) stimulation. Visual and auditory stimuli were presented from one of four horizontal locations in external space. Seven healthy human subjects had to assess the spatial fit of a visual stimulus (i.e. a gray-scaled picture of a cartoon dog) and a simultaneously presented auditory stimulus (i.e. a barking sound). Functional magnetic resonance imaging (fMRI) revealed two distinct networks of cortical regions that processed preferentially either spatially congruent or spatially incongruent AV stimuli. Whereas earlier visual areas responded preferentially to incongruent AV stimulation, higher visual areas of the temporal and parietal cortex (left inferior temporal gyrus [ITG], right posterior superior temporal gyrus/sulcus [pSTG/STS], left intra-parietal sulcus [IPS]) and frontal regions (left pre-central gyrus [PreCG], left dorsolateral pre-frontal cortex [DLPFC]) responded preferentially to congruent AV stimulation. A position-resolved analysis revealed three robust cortical representations for each of the four visual stimulus locations in retinotopic visual regions corresponding to the representation of the horizontal meridian in area V1 and at the dorsal and ventral borders between areas V2 and V3. While these regions of interest (ROIs) did not show any significant effect of spatial congruency, we found subregions within ROIs in the right hemisphere that showed an incongruency effect (i.e. an increased fMRI signal during spatially incongruent compared to congruent AV stimulation). We interpret this finding as a correlate of spatially distributed recurrent feedback during mismatch processing: whenever a spatial mismatch is detected in multisensory regions (such as the IPS), processing resources are re

  9. Audiovisual correspondence between musical timbre and visual shapes

    PubMed Central

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, features such as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, most studies have utilized simple stimuli, e.g., simple tones. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale), and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes, as well as revisiting some of the previous findings with more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 self-reported professional musicians, 47 self-reported amateur musicians, and 36 self-reported non-musicians. Thirty-one subjects also reported having synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes, and timbres having elements of both softness and harshness with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows the design of substitution systems which might help the blind to perceive shapes through timbre. PMID:24910604

  10. The Development of Audio-Visual Integration for Temporal Judgements.

    PubMed

    Adams, Wendy J

    2016-04-01

    Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age. Younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4-11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps and vice versa. Furthermore, for all ages, the relative strength of these interactions was predicted by the relative reliabilities of the two modalities, in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality-they fully combined vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions. In contrast, switching between modalities reduces reliability. Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate, but followed a sub-optimal switching strategy, responding according to either visual or auditory

  11. Audiovisual Resources on the Teaching Process in Surgical Technique

    PubMed Central

    PUPULIM, Guilherme Luiz Lenzi; IORIS, Rafael Augusto; GAMA, Ricardo Ribeiro; RIBAS, Carmen Australia Paredes Marcondes; MALAFAIA, Osvaldo; GAMA, Mirnaluci

    2015-01-01

    Background: The development of didactic means that permit complete and repeated viewing of surgical procedures is of great importance nowadays, given the increasing difficulty of in vivo training. Audiovisual resources thus help maximize the living resources used in education and minimize the problems that arise from purely verbal instruction. Aim: To evaluate the use of digital video as a pedagogical strategy in teaching surgical technique in medical education. Methods: Cross-sectional study with 48 third-year medical students enrolled in the surgical technique discipline. They were divided into two groups of 12 pairs each, both taught by the conventional method, with one group also exposed to an alternative method (video) showing the technical details. All students performed phlebotomy in the experimental laboratory, with evaluation and assistance from the teacher/monitor during execution. Finally, they answered a self-administered questionnaire on the teaching method after performing the operation. Results: Most of those who did not watch the video took longer to execute the procedure, asked more questions, and needed more faculty assistance. All of those exposed to the video followed the chronological sequence of steps and approved of the new method; 95.83% felt able to repeat the procedure by themselves, whereas 62.5% of the students who had only the conventional method reported an average capacity to assimilate the technique. Both groups reported moderate difficulty, but those who had not seen the video had more difficulty in performing the technique. Conclusion: The traditional method of teaching combined with the video improved understanding and conveyed confidence, particularly because the activity requires technical skill. The technique with video visualization motivated students and aroused interest, and it facilitated the understanding and memorization of the steps for procedure implementation, benefiting the

  12. The Development of Audio-Visual Integration for Temporal Judgements

    PubMed Central

    Adams, Wendy J.

    2016-01-01

    Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age. Younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4–11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps and vice versa. Furthermore, for all ages, the relative strength of these interactions was predicted by the relative reliabilities of the two modalities, in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality—they fully combined vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions. In contrast, switching between modalities reduces reliability. Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate, but followed a sub-optimal switching strategy, responding according to either visual or

  13. Audiovisual correspondence between musical timbre and visual shapes.

    PubMed

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and visual shapes. Previous studies of audio-visual correspondences have mostly used features such as pitch, loudness, light intensity, visual size, and color, and most have employed simple stimuli (e.g., simple tones). In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale), and vertical position. This design permitted studying the associations between normalized timbre and visual shapes, as well as re-examining some previous findings with more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 self-described professional musicians, 47 self-described amateur musicians, and 36 self-described non-musicians. Thirty-one subjects also reported synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green, or light gray rounded shapes; harsh timbres with red, yellow, or dark gray sharp angular shapes; and timbres combining soft and harsh elements with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale, or color. The significant correspondence between timbre and shape revealed by the present work allows the design of substitution systems which might help the blind perceive shapes through timbre. PMID:24910604

  14. Homebound Learning Opportunities: Reaching Out to Older Shut-ins and Their Caregivers.

    ERIC Educational Resources Information Center

    Penning, Margaret; Wasyliw, Douglas

    1992-01-01

    Describes Homebound Learning Opportunities, an innovative health promotion and educational outreach service for homebound older adults and their caregivers. Notes that the program provides over 125 topics for individualized learning programs delivered to participants in their homes, an audiovisual lending library, educational television programming, and peer…

  15. Educational Technology: A Confidence Builder and Problem Solver in Learning English at Standard I

    ERIC Educational Resources Information Center

    Singaravelu, G.

    2007-01-01

    The present study investigated the learning impediments of learners of English at Standard I and examined the effectiveness of the AVRM approach (Audio-Visual, Realia, and Multimedia) in eliminating those impediments. Twenty-five subjects were taken for the study, and an oral test was administered. The AVRM approach was found to be…

  16. A Methodological Approach to Support Collaborative Media Creation in an E-Learning Higher Education Context

    ERIC Educational Resources Information Center

    Ornellas, Adriana; Muñoz Carril, Pablo César

    2014-01-01

    This article outlines a methodological approach to the creation, production and dissemination of online collaborative audio-visual projects, using new social learning technologies and open-source video tools, which can be applied to any e-learning environment in higher education. The methodology was developed and used to design a course in the…

  17. Student-Centred Learning: Toolkit for Students, Staff and Higher Education Institutions

    ERIC Educational Resources Information Center

    Attard, Angele; Di Iorio, Emma; Geven, Koen; Santa, Robert

    2010-01-01

    This Toolkit forms part of the project entitled "Time for a New Paradigm in Education: Student-Centred Learning" (T4SCL), jointly led by the European Students' Union (ESU) and Education International (EI). This is an EU-funded project under the Lifelong Learning Programme (LLP) administered by the Education, Audiovisual and Culture Executive…

  18. Individually-Paced Learning in Civil Engineering Technology: An Approach to Mastery.

    ERIC Educational Resources Information Center

    Sharples, D. Kent; And Others

    An individually-paced, open-entry/open-ended mastery learning approach for a state-wide civil engineering technology curriculum was developed, field-tested, and evaluated. Learning modules relying heavily on audiovisuals and hands-on experience, and based on 163 identified competencies, were developed for 11 courses in the curriculum. Written…

  19. Educational needs and preferred methods of learning among Florida practitioners who order genetic testing for hereditary breast and ovarian cancer.

    PubMed

    Cragun, Deborah; Besharat, Andrea Doty; Lewis, Courtney; Vadaparampil, Susan T; Pal, Tuya

    2013-12-01

    With the expansion of genetic testing options due to tremendous advances in sequencing technologies, testing will increasingly be offered by a variety of healthcare providers in diverse settings, as has been observed with BRCA1 and BRCA2 (BRCA) gene testing over the last decade. In an effort to assess the educational needs and preferences of healthcare providers primarily in a community-based setting, we mailed a survey to healthcare providers across Florida who order BRCA testing. Within the packet, a supplemental card was included to give participants the opportunity to request free clinical educational resources from the investigative team. Of 81 eligible providers who completed the survey, most were physicians or nurse practitioners, and over 90% worked in a community or private practice setting. Respondents provided BRCA testing services for a median of 5 years, but the majority (56%) reported no formal training in clinical cancer genetics. Most respondents (95%) expressed interest in formal training opportunities, with a 3-day in-person weekend training representing the most highly preferred format. The most widely selected facilitators of participation were a minimal requirement to take time off work and continuing education credits. Overall, 64% of respondents requested free clinical educational resources. Preferences for informal education included written materials and in-person presentations, whereas accessing a DVD or website was less popular. Findings from our study highlight both the need for and interest in ongoing educational opportunities and resources among community providers who order BRCA testing. These results can be used to enhance participation of community-based providers in educational training programs by targeting educational resources to the most preferred format.

  20. Project Report ECLIPSE: European Citizenship Learning Program for Secondary Education

    ERIC Educational Resources Information Center

    Bombardelli, Olga

    2014-01-01

    This paper reports on a European project, the Comenius ECLIPSE project (European Citizenship Learning in a Programme for Secondary Education) developed by six European partners coordinated by the University of Trento in the years 2011-2014. ECLIPSE (co-financed by the EACEA--Education, Audiovisual and Culture Executive Agency) aims at developing,…

  1. "Managed" Learning. An Institute of Higher Education Occasional Paper.

    ERIC Educational Resources Information Center

    Phillips, Herbert E.

    A form of individualized instruction called "managed" learning is proposed. Three personnel types would be required in such a system: (1) coordinators responsible for providing some education to assigned students and "managing" the rest of the student's education; (2) general service personnel such as guidance, library, audio-visual, instructional…

  2. Teaching and Learning with Hypervideo in Vocational Education and Training

    ERIC Educational Resources Information Center

    Cattaneo, Alberto A. P.; Nguyen, Anh Thu; Aprea, Carmela

    2016-01-01

    Audiovisuals offer increasing opportunities as teaching-and-learning materials while also confronting educators with significant challenges. Hypervideo provides one means of overcoming these challenges, offering new possibilities for interaction and support for reflective processes. However, few studies have investigated the instructional…

  3. When sounds become actions: higher-order representation of newly learned action sounds in the human motor system.

    PubMed

    Ticini, Luca F; Schütz-Bosbach, Simone; Weiss, Carmen; Casile, Antonino; Waszak, Florian

    2012-02-01

    In the absence of visual information, our brain is able to recognize the actions of others by representing their sounds as a motor event. Previous studies have provided evidence for a somatotopic activation of the listener's motor cortex during perception of the sound of highly familiar motor acts. The present experiments studied (a) how the motor system is activated by action-related sounds that are newly acquired and (b) whether these sounds are represented with reference to extrinsic features related to action goals rather than with respect to lower-level intrinsic parameters related to the specific movements. TMS was used to measure the correspondence between auditory and motor codes in the listener's motor system. We compared the corticomotor excitability in response to the presentation of auditory stimuli void of previous motor meaning before and after a short training period in which these stimuli were associated with voluntary actions. Novel cross-modal representations became manifest very rapidly. By disentangling the representation of the muscle from that of the action's goal, we further showed that passive listening to newly learnt action-related sounds activated a precise motor representation that depended on the variable contexts to which the individual was exposed during testing. Our results suggest that the human brain embodies a higher-order audio-visuo-motor representation of perceived actions, which is muscle-independent and corresponds to the goals of the action.

  4. An analysis of factors influencing complex water maze learning in rats: effects of task complexity, path order and escape assistance on performance following prenatal exposure to phenytoin.

    PubMed

    Vorhees, C V; Weisenburger, W P; Acuff-Smith, K D; Minck, D R

    1991-01-01

    Three hypotheses on factors determining performance in a complex water maze were tested in rats prenatally exposed to phenytoin. The hypotheses were: 1) that increasing maze complexity would better differentiate experimental effects; in particular, that an expanded version of a maze originally described by Biel would better differentiate groups than Biel's original design; 2) that path order is an important factor determining performance; specifically, that path sequence AB would better differentiate experimental groups from controls than the opposite order (sequence BA); and 3) that repeated trial failures interfere with learning, a problem putatively prevented by employing assisted (i.e., guided) escape. The specific prediction was that rats tested with assisted escape would learn faster and produce better group differentiation than rats tested with unassisted escape. Pregnant female Sprague-Dawley CD rats were gavaged on days 7-18 of gestation with propylene glycol alone (Control) or containing 100 or 200 mg/kg of phenytoin. Straight channel swimming trials followed by maze trials were begun on separate male/female offspring pairs from each litter on postnatal days 50, 70, or 90. The results confirmed hypothesis 1, i.e., the more complex maze better differentiated phenytoin-related group differences. This was true regardless of whether the phenytoin rats exhibiting circling were included in the analyses or not. The results disconfirmed hypothesis 2, i.e., that path order AB would better differentiate the groups than path order BA. Rather, the data supported the alternate hypothesis, that path order was not a significant determinant of prenatal drug-related maze deficits. This was unchanged regardless of whether phenytoin offspring exhibiting circling were or were not included in the analyses. The implication is that path B alone was sufficient to detect phenytoin's effects on maze performance. Finally, the overall results disconfirmed hypothesis 3, i.e., assisted escape

  5. Studies Related to the Design of Audiovisual Teaching Materials.

    ERIC Educational Resources Information Center

    Travers, Robert M. W.

    An information transmission model that advocates learning via only one sense modality (e.g. visual) is the basis for several series of experiments, each subjected to rigorous statistical analysis. Conclusions are--learning is not facilitated by redundant information presented simultaneously through the auditory and visual sense modalities, it is…

  6. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    PubMed

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials. PMID:25914939

  7. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    PubMed

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  8. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music.

    PubMed

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech. PMID:25147539
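
    As a rough illustration of how a temporal integration window can be quantified from synchrony judgments of this kind, one common approach is to fit a Gaussian to the proportion of "synchronous" responses across the tested SOAs and take the fitted width as the window estimate. The sketch below is only illustrative: the study's actual analysis may differ, and the response proportions are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        soas = np.array([-360, -300, -240, -180, -120, -60, 0,
                         60, 120, 180, 240, 300, 360], dtype=float)   # SOAs in ms
        p_sync = np.array([0.05, 0.10, 0.20, 0.40, 0.70, 0.90, 0.95,  # hypothetical
                           0.90, 0.75, 0.45, 0.25, 0.10, 0.05])       # proportions "synchronous"

        def gaussian(soa, amp, mu, sigma):
            # Proportion of "synchronous" responses as a function of SOA.
            return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

        (amp, mu, sigma), _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 120.0])
        print(f"peak={amp:.2f}, centre={mu:.1f} ms, window width (SD)={sigma:.1f} ms")
        # A narrower fitted width would correspond to the narrower integration
        # windows reported here for musicians.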

  9. Survey of Audio-Visual Education in Hawaii--Its Status and Needs.

    ERIC Educational Resources Information Center

    Schuller, Charles F.; And Others

    The purposes of the survey were (1) to make an objective analysis of the audiovisual instruction needs of the public educational systems of the State of Hawaii, and (2) to make specific recommendations and suggestions for short and long range improvements where needed. Top priority recommendations are recorded, including suggested allocations of…

  10. Audiovisual Material as Educational Innovation Strategy to Reduce Anxiety Response in Students of Human Anatomy

    ERIC Educational Resources Information Center

    Casado, Maria Isabel; Castano, Gloria; Arraez-Aybar, Luis Alfonso

    2012-01-01

    This study presents the design, effect and utility of using audiovisual material containing real images of dissected human cadavers as an innovative educational strategy (IES) in the teaching of Human Anatomy. The goal is to familiarize students with the practice of dissection and to transmit the importance and necessity of this discipline, while…

  11. Non-Commercial Audiovisual Instructional Materials in Japan. AVE in Japan No. 24.

    ERIC Educational Resources Information Center

    Takakuwa, Yasuo

    This report outlines the history of non-commercial and local production of audiovisual instructional materials in Japan since World War II, discusses current trends in instructional materials usage, and presents four case studies of materials production at the prefectural level. Topics addressed include: (1) materials production prior to the…

  12. Training Methodology. Part 4: Audiovisual Theory, Aids and Equipment; an Annotated Bibliography.

    ERIC Educational Resources Information Center

    National Inst. of Mental Health (DHEW), Bethesda, MD.

    A collection of 332 abstracts, resumes, and annotations of selected documents on audiovisual theory and methods, aids, facilities, and equipment, this publication is the fourth part of a bibliography on training methodology within a larger series on mental health inservice training and training methodology. Television instruction and equipment,…

  13. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music

    PubMed Central

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech. PMID:25147539

  14. Audiovisual Education in Primary Schools: A Curriculum Project in the Netherlands.

    ERIC Educational Resources Information Center

    Ketzer, Jan W.

    Audiovisual, or mass media, education can play a significant role in children's social, emotional, cognitive, sensory, motor, and creative development. The field includes all school activities which teach children to interact with and visualize ideas. Students can be involved in…

  15. Selective Review of the Results of Research on the Use of Audiovisual Media to Teach Adults.

    ERIC Educational Resources Information Center

    Campeau, Peggie L.

    The purpose of this literature review was to summarize results of experimental studies on the instructional effectiveness of audiovisual media in post-secondary education. Studies which met seven major screening criteria were used. A study was generally accepted if it compared performance of experimental and control groups on objective measures of…

  16. Audiovisual Records in the National Archives Relating to Black History. Preliminary Draft.

    ERIC Educational Resources Information Center

    Waffen, Leslie; And Others

    A representative selection of the National Archives and Records Services' audiovisual collection relating to black history is presented. The intention is not to provide an exhaustive survey, but rather to indicate the breadth and scope of materials available for study and to suggest areas for concentrated research. The materials include sound…

  17. A Reference List of Audiovisual Materials Produced by the United States Government, 1978.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.

    This reference guide describes over 6,000 audiovisual materials selected from more than 10,000 programs produced by 175 federal agencies covering a wide range of subjects, including medicine, dentistry, allied health, education, science, social studies, industrial technology/training, safety, and the environmental sciences. Materials are cited by…

  18. Talker and Lexical Effects on Audiovisual Word Recognition by Adults with Cochlear Implants.

    ERIC Educational Resources Information Center

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2003-01-01

    A study examined how 20 adults with postlingual deafness with cochlear implants combined visual information from lip reading with auditory cues in an open-set word recognition task. Word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation, and for single-talker…

  19. Storage, Handling and Preservation of Audiovisual Materials. AV in Action 3.

    ERIC Educational Resources Information Center

    Thompson, Anthony Hugh

    Designed to provide the librarian with suggestions and guidelines for storing and preserving audiovisual materials, this pamphlet is divided into four major chapters: (1) Normal Use Storage Conditions; (2) Natural Lifetime, Working Lifetime and Long-Term Storage; (3) Handling; and (4) Shelving of Normal Use Materials. Topics addressed include:…

  20. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    ERIC Educational Resources Information Center

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  1. Sciences: A Select List of U.S. Government Produced Audiovisual Materials - 1978.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.

    This publication is a catalog that contains the National Audiovisual Center's materials on Science. There are twelve areas in this catalog: Aerospace Technology, Astronomy, Biology, Chemistry, Electronics and Electricity, Energy, Environmental Studies, Geology, Mathematics and Computer Science, Oceanography, Physics, and Weather/Meteorology. Each…

  2. Teacher's Guide to Aviation Education Resources. Including: Career Information, Audiovisuals, Publications, Periodicals.

    ERIC Educational Resources Information Center

    Federal Aviation Administration (DOT), Washington, DC. Office of Public Affairs.

    Currently available aviation education resource materials are listed alphabetically by title under four headings: (1) career information; (2) audiovisual materials; (3) publications; and (4) periodicals. Each entry includes: title; format (16mm film, slides, slide/tape presentation, VHS/Beta videotape, book, booklet, newsletter, pamphlet, poster,…

  3. Guide to Recommended Curriculum and Audio-Visual Materials for Teaching about Japan.

    ERIC Educational Resources Information Center

    Wojtan, Linda S.

    Designed to assist teachers in planning units on Japan, this guide identifies widely available instructional materials that have been found to be effective. The guide is organized in six sections. Sections 1-4 list recommended curriculum and audiovisual materials for elementary and secondary classrooms with purchase and/or rental information…

  4. Comparisons of Audio and Audiovisual Measures of Stuttering Frequency and Severity in Preschool-Age Children

    ERIC Educational Resources Information Center

    Rousseau, Isabelle; Onslow, Mark; Packman, Ann; Jones, Mark

    2008-01-01

    Purpose: To determine whether measures of stuttering frequency and measures of overall stuttering severity in preschoolers differ when made from audio-only recordings compared with audiovisual recordings. Method: Four blinded speech-language pathologists who had extensive experience with preschoolers who stutter measured stuttering frequency and…

  5. Audiovisual Media in the Public Schools, 1961-64--A Profile of Change.

    ERIC Educational Resources Information Center

    Godfrey, Eleanor P.; And Others

    A followup survey was made of 238 school districts of various sizes across the United States to survey changes in the use of audiovisual resources over an intervening 3-year period. The survey examined not only the extent and duration of change, but also the impact of various school district characteristics on change. From questionnaires addressed…

  6. An Exploration of Why Preschoolers Perform Differently than Do Adults in Audiovisual Speech Perception Tasks.

    ERIC Educational Resources Information Center

    Desjardins, Renee N.; And Others

    1997-01-01

    Examined whether experience correctly producing consonants plays a role in developing the underlying representation which mediates perception of visible speech. Tested preschoolers (grouped by whether they made substitution errors) and adults in auditory-only, visual-only, and audiovisual conditions. Found children overall showed less visual influence and…

  7. Audio-Visual Resources Related to Family Interaction with a Handicapped Member. An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Collins, Eleanor M.; Watts, Janine A.

    This annotated bibliography lists 78 selected audiovisual resources (film, filmstrip, or video formats) in the general topic area of family interaction with a handicapped member. Sources older than 1970 are not included. Each listing provides information on length, format, audience, method of acquisition (rent or purchase, and price), source, and…

  8. Quantitative Standards for Audiovisual Personnel, Equipment and Materials (in Elementary, Secondary, and Higher Education).

    ERIC Educational Resources Information Center

    Cobun, Ted; And Others

    This document is a stage in a study to formulate quantitative guidelines for the audio-visual communications field, being conducted by Doctors Gene Faris and Mendel Sherman under a National Defense Education Act contract. The standards listed here have been officially approved and adopted by several agencies, including the Department of…

  9. Audiovisuals for Nutrition Education; Selected Evaluative Reviews from the Journal of Nutrition Education.

    ERIC Educational Resources Information Center

    Rowe, Sue Ellen, Comp.

    Audiovisual materials suitable for the teaching of nutrition are listed. Materials include coloring books, flannelboard stories, games, kits, audiotapes, records, charts, posters, study prints, films, videotapes, filmstrips, slides, and transparencies. Each entry contains bibliographic data, educational level, price and evaluation. Material is…

  10. Audiovisuals for Nutrition Education. Nutrition Education Resource Series No. 9. Revised Edition.

    ERIC Educational Resources Information Center

    National Nutrition Education Clearing House, Berkeley, CA.

    This bibliography contains reviews of more than 250 audiovisual materials in eight subject areas related to nutrition: (1) general nutrition; (2) life cycle; (3) diet/health and disease; (4) health and athletics; (5) food - general; (6) food preparation and service; (7) food habits and preferences; and (8) food economics and concerns. Materials…

  11. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or... records (e.g., for digital files, use file naming conventions), that clarify connections between related... audio recordings), and that associate records with the relevant creating, sponsoring, or...

  12. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or... records (e.g., for digital files, use file naming conventions), that clarify connections between related... audio recordings), and that associate records with the relevant creating, sponsoring, or...

  13. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    PubMed

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  14. Effects of Audio-Visual Information on the Intelligibility of Alaryngeal Speech

    ERIC Educational Resources Information Center

    Evitts, Paul M.; Portugal, Lindsay; Van Dine, Ami; Holler, Aline

    2010-01-01

    Background: There is minimal research on the contribution of visual information on speech intelligibility for individuals with a laryngectomy (IWL). Aims: The purpose of this project was to determine the effects of mode of presentation (audio-only, audio-visual) on alaryngeal speech intelligibility. Method: Twenty-three naive listeners were…

  15. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions

    ERIC Educational Resources Information Center

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo

    2013-01-01

    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…

  16. Psychophysics of the McGurk and Other Audiovisual Speech Integration Effects

    ERIC Educational Resources Information Center

    Jiang, Jintao; Bernstein, Lynne E.

    2011-01-01

    When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses: auditory correct, visual correct, fusion (the so-called "McGurk effect"), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for…

  17. Audiovisual Translation and Assistive Technology: Towards a Universal Design Approach for Online Education

    ERIC Educational Resources Information Center

    Patiniotaki, Emmanouela

    2016-01-01

    Audiovisual Translation (AVT) and Assistive Technology (AST) are two fields that share common grounds within accessibility-related research, yet they are rarely studied in combination. The reason most often lies in the fact that they have emerged from different disciplines, i.e. Translation Studies and Computer Science, making a possible combined…

  18. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  19. Comparing Infants' Preference for Correlated Audiovisual Speech with Signal-Level Computational Models

    ERIC Educational Resources Information Center

    Hollich, George; Prince, Christopher G.

    2009-01-01

    How much of infant behaviour can be accounted for by signal-level analyses of stimuli? The current paper directly compares the moment-by-moment behaviour of 8-month-old infants in an audiovisual preferential looking task with that of several computational models that use the same video stimuli as presented to the infants. One type of model…

  20. Hearing impairment and audiovisual speech integration ability: a case study report.

    PubMed

    Altieri, Nicholas; Hudock, Daniel

    2014-01-01

    Research in audiovisual speech perception has demonstrated that sensory factors such as auditory and visual acuity are associated with a listener's ability to extract and combine auditory and visual speech cues. This case study report examined audiovisual integration using a newly developed measure of capacity in a sample of hearing-impaired listeners. Capacity assessments are unique because they examine the contribution of reaction time (RT) as well as accuracy to determine the extent to which a listener efficiently combines auditory and visual speech cues relative to independent race model predictions. Multisensory speech integration ability was examined in two experiments: an open-set sentence recognition study and a closed-set speeded-word recognition study that measured capacity. Most germane to our approach, capacity illustrated speed-accuracy tradeoffs that may be predicted by audiometric configuration. Results revealed that some listeners benefit from increased accuracy, but fail to benefit in terms of speed on audiovisual relative to unisensory trials. Conversely, other listeners may not benefit in the accuracy domain but instead show an audiovisual processing time benefit.
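
    The race-model comparison behind such capacity measures can be sketched as follows, using the standard capacity-coefficient formulation C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)], where the S(.) are survivor functions of the response-time distributions and values above 1 indicate more efficient audiovisual processing than an independent race between the two channels. This is a minimal illustration with hypothetical response times, not the authors' implementation.

        import numpy as np

        def survivor(rts, t):
            """Empirical survivor function P(RT > t)."""
            rts = np.asarray(rts, dtype=float)
            return np.mean(rts > t)

        def capacity(rt_av, rt_a, rt_v, t):
            s_av, s_a, s_v = survivor(rt_av, t), survivor(rt_a, t), survivor(rt_v, t)
            # Capacity is undefined where any survivor function hits 0 or 1 (log(0) or log(1)).
            if min(s_av, s_a, s_v) <= 0 or max(s_av, s_a, s_v) >= 1:
                return np.nan
            return np.log(s_av) / (np.log(s_a) + np.log(s_v))

        rt_av = [420, 450, 470, 500, 530]   # hypothetical audiovisual trials (ms)
        rt_a  = [480, 510, 540, 570, 600]   # hypothetical auditory-only trials (ms)
        rt_v  = [500, 530, 560, 590, 620]   # hypothetical visual-only trials (ms)
        print(capacity(rt_av, rt_a, rt_v, t=520))   # > 1 here, i.e., faster than the race prediction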