Science.gov

Sample records for order audiovisual learning

  1. Audiovisuals.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents information on a variety of audiovisual materials from government and nongovernment sources. Topics include aerodynamics and conditions of flight, airports, navigation, careers, history, medical factors, weather, films for classroom use, and others. (Author/SA)

  2. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193
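The visual task described in this abstract (judging whether the final four items of an eight-item sequence replicate the first four) can be sketched as follows. The stimulus generator, the luminance coding, and the frequency range are hypothetical simplifications, not the study's actual parameters:

```python
import random

def make_sequence(congruent, n=8):
    """Hypothetical stimulus generator: each item pairs a luminance with a
    tone frequency; in congruent sequences the frequency tracks the luminance."""
    lums = [random.random() for _ in range(n // 2)]
    lums = lums + lums  # final four replicate first four (a "repeat" trial)
    if congruent:
        freqs = [440 + 440 * l for l in lums]       # frequency tied to luminance
    else:
        freqs = [440 + 440 * random.random() for _ in range(n)]  # unrelated
    return list(zip(lums, freqs))

def is_visual_repeat(seq):
    """The subjects' judgment: do the last four luminances replicate the first four?"""
    lums = [l for l, _ in seq]
    return lums[4:] == lums[:4]
```

Note that on a repeat trial the correct visual answer is the same whether the auditory stream is congruent or incongruent; the study's point is that subjects' answers were nevertheless swayed by the task-irrelevant audio.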

  3. Memory and learning with rapid audiovisual sequences.

    PubMed

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  4. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    PubMed Central

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two colors, red and purple (the latter known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning. PMID:26778999

  5. The Role of Audiovisual Mass Media News in Language Learning

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audiovisual mass media news in language learning. In this regard, two important issues in selecting and preparing TV news for language learning are the content of the news and its linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  6. Bayesian Calibration of Simultaneity in Audiovisual Temporal Order Judgments

    PubMed Central

    Yamamoto, Shinya; Miyazaki, Makoto; Iwano, Takayuki; Kitazawa, Shigeru

    2012-01-01

    After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when the lag adaptation was fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to “sound-first” for the pitch associated with sound-first stimuli, and to “light-first” for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to “light-first” for the pitch associated with sound-first stimuli, and to “sound-first” for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli. PMID:22792297
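The Bayesian-calibration account in this abstract can be illustrated as precision-weighted fusion of a noisy lag measurement with a prior acquired from repeated exposure. The numbers below (a +50 ms sound-first prior and the two noise levels) are hypothetical placeholders, not the study's parameters:

```python
def posterior_lag(measured_lag, sigma_m, prior_mean, sigma_p):
    """Precision-weighted fusion of a noisy audiovisual-lag measurement
    with a learned prior over lags (a minimal Bayesian-calibration sketch).
    Positive lags denote sound-first; units are milliseconds."""
    w_m = 1.0 / sigma_m ** 2   # precision of the sensory measurement
    w_p = 1.0 / sigma_p ** 2   # precision of the learned prior
    return (w_m * measured_lag + w_p * prior_mean) / (w_m + w_p)

# After repeated sound-first (+50 ms) exposure, a physically simultaneous
# pair (0 ms) is estimated as sound-first, so the point of subjective
# simultaneity shifts toward light-first stimuli:
est = posterior_lag(measured_lag=0.0, sigma_m=30.0, prior_mean=50.0, sigma_p=40.0)
```

Because the posterior is pulled toward the exposed lag, perceived simultaneity requires presenting the light earlier, i.e., the shift runs opposite to lag adaptation, as the abstract describes.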

  7. Audiovisual Resources.

    ERIC Educational Resources Information Center

    Beasley, Augie E.; And Others

    1986-01-01

    Six articles on the use of audiovisual materials in the school library media center cover how to develop an audiovisual production center; audiovisual forms; a checklist for effective video/16mm use in the classroom; slides in learning; hazards of videotaping in the library; and putting audiovisuals on the shelf. (EJS)

  8. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    ERIC Educational Resources Information Center

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  9. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    ERIC Educational Resources Information Center

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four-time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  10. Effect on Intended and Incidental Learning from the Use of Learning Objectives with an Audiovisual Presentation.

    ERIC Educational Resources Information Center

    Main, Robert

    This paper reports a controlled field experiment conducted to determine the effects and interaction of five independent variables with an audiovisual slide-tape program: presence of learning objectives, location of learning objectives, type of knowledge, sex of learner, and retention of learning. Participants were university students in a general…

  11. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute in this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity do not only depend on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. PMID:27131076
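The widened "window of simultaneity" for predicted action outcomes can be caricatured as a threshold on the stimulus onset asynchrony (SOA) between the audio and visual components; the half-width values below are hypothetical placeholders, not the measured windows:

```python
def judged_simultaneous(soa_ms, predicted):
    """Toy simultaneity judgment: a pair is reported simultaneous when the
    audiovisual SOA falls inside a window whose half-width (hypothetical
    values, in ms) is larger for action-predicted outcome pairs."""
    half_width = 120.0 if predicted else 80.0
    return abs(soa_ms) <= half_width
```

An SOA of 100 ms would then be judged simultaneous only for a predicted pair, capturing the paper's core finding that action-outcome prediction widens temporal binding.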

  12. Something for Everyone? An Evaluation of the Use of Audio-Visual Resources in Geographical Learning in the UK.

    ERIC Educational Resources Information Center

    McKendrick, John H.; Bowden, Annabel

    1999-01-01

    Reports from a survey of geographers that canvassed experiences using audio-visual resources to support teaching. Suggests that geographical learning has embraced audio-visual resources and that they are employed effectively. Concludes that integration of audio-visual resources into mainstream curriculum is essential to ensure effective and…

  13. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    ERIC Educational Resources Information Center

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  14. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    ERIC Educational Resources Information Center

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response to the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  15. Audiovisuals and non-print learning resources in a health sciences library.

    PubMed

    Robinow, B H

    1979-03-01

    The MD undergraduate program at McMaster University, based entirely on self-instruction, requires the provision of all kinds of learning resources. How these are assembled and made available is described. Emphasis is placed on the practical library problems of cataloging, shelving, maintenance, and distribution of audiovisual materials including pathology specimens and 'problem boxes' as well as the more usual films, videotapes and slide/tape sets. Evaluation is discussed briefly. PMID:85624

  16. Influence of Audio-Visual Presentations on Learning Abstract Concepts.

    ERIC Educational Resources Information Center

    Lai, Shu-Ling

    2000-01-01

    Describes a study of college students that investigated whether various types of visual illustrations influenced abstract concept learning when combined with audio instruction. Discusses results of analysis of variance and pretest posttest scores in relation to learning performance, attitudes toward the computer-based program, and differences in…

  17. Isotropic sequence order learning.

    PubMed

    Porr, Bernd; Wörgötter, Florentin

    2003-04-01

    In this article, we present an isotropic unsupervised algorithm for temporal sequence learning. No special reward signal is used, such that all inputs are completely isotropic. All input signals are bandpass filtered before converging onto a linear output neuron. All synaptic weights change according to the correlation of bandpass-filtered inputs with the derivative of the output. We investigate the algorithm in an open- and a closed-loop condition, the latter being defined by embedding the learning system into a behavioral feedback loop. In the open-loop condition, we find that the linear structure of the algorithm allows analytically calculating the shape of the weight change, which is strictly heterosynaptic and follows the shape of the weight change curves found in spike-time-dependent plasticity. Furthermore, we show that synaptic weights stabilize automatically when no more temporal differences exist between the inputs, without additional normalizing measures. In the second part of this study, the algorithm is placed in an environment that leads to a closed sensor-motor loop. To this end, a robot is programmed with a prewired retraction reflex in response to collisions. Through isotropic sequence order (ISO) learning, the robot achieves collision avoidance by learning the correlation between its early range-finder signals and the later-occurring collision signal. Synaptic weights stabilize at the end of learning, as theoretically predicted. Finally, we discuss the relation of ISO learning to other drive reinforcement models and to the commonly used temporal difference learning algorithm. This study is followed up by a mathematical analysis of the closed-loop situation in the companion article in this issue, "ISO Learning Approximates a Solution to the Inverse-Controller Problem in an Unsupervised Behavioral Paradigm" (pp. 865-884). PMID:12689389
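The learning rule this abstract describes (weights changing with the correlation between a bandpass-filtered input and the derivative of the output) can be sketched in a few lines. The filter constants, learning rate, and event timing below are illustrative choices, not the paper's values:

```python
import numpy as np

def damped_sine(n=600, a=0.01, b=0.05):
    """Resonator impulse response used as a bandpass filter (illustrative constants)."""
    t = np.arange(n)
    return np.exp(-a * t) * np.sin(b * t)

def iso_weight_change(x0, x1, mu=0.01):
    """Minimal ISO-learning sketch: w0 (the 'reflex' input) is fixed, w1 is plastic.
    Update: dw1 = mu * u1 * dv/dt, with u_i the bandpass-filtered inputs."""
    h = damped_sine()
    u0 = np.convolve(x0, h)[: len(x0)]
    u1 = np.convolve(x1, h)[: len(x1)]
    w0, w1, v_prev = 1.0, 0.0, 0.0
    for t in range(len(x0)):
        v = w0 * u0[t] + w1 * u1[t]
        w1 += mu * u1[t] * (v - v_prev)  # input correlated with output derivative
        v_prev = v
    return w1

T = 1000
x0 = np.zeros(T); x0[100] = 1.0            # "reflex" (later) event
x1 = np.zeros(T); x1[80] = 1.0             # predictive (earlier) event
w1 = iso_weight_change(x0, x1)             # x1 precedes x0 -> weight grows
x1_late = np.zeros(T); x1_late[120] = 1.0
w1_late = iso_weight_change(x0, x1_late)   # x1 follows x0 -> weight shrinks
```

The sign of the weight change flips with the temporal order of the two inputs, reproducing the heterosynaptic, STDP-like asymmetry the abstract mentions.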

  18. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    PubMed

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes. PMID:23893940

  19. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  20. Use of High-Definition Audiovisual Technology in a Gross Anatomy Laboratory: Effect on Dental Students' Learning Outcomes and Satisfaction.

    PubMed

    Ahmad, Maha; Sleiman, Naama H; Thomas, Maureen; Kashani, Nahid; Ditmyer, Marcia M

    2016-02-01

    Laboratory cadaver dissection is essential for three-dimensional understanding of anatomical structures and variability, but there are many challenges to teaching gross anatomy in medical and dental schools, including a lack of available space and qualified anatomy faculty. The aim of this study was to determine the efficacy of high-definition audiovisual educational technology in the gross anatomy laboratory in improving dental students' learning outcomes and satisfaction. Exam scores were compared for two classes of first-year students at one U.S. dental school: 2012-13 (no audiovisual technology) and 2013-14 (audiovisual technology), and section exams were used to compare differences between semesters. Additionally, an online survey was used to assess the satisfaction of students who used the technology. All 284 first-year students in the two years (2012-13 N=144; 2013-14 N=140) participated in the exams. Of the 140 students in the 2013-14 class, 63 completed the survey (45% response rate). The results showed that those students who used the technology had higher scores on the laboratory exams than those who did not use it, and students in the winter semester scored higher (90.17±0.56) than in the fall semester (82.10±0.68). More than 87% of those surveyed strongly agreed or agreed that the audiovisual devices represented anatomical structures clearly in the gross anatomy laboratory. These students reported an improved experience in learning and understanding anatomical structures, found the laboratory to be less overwhelming, and said they were better able to follow dissection instructions and understand details of anatomical structures with the new technology. Based on these results, the study concluded that the ability to provide the students a clear view of anatomical structures and high-quality imaging had improved their learning experience. PMID:26834129

  1. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    PubMed

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices in LLI children after training. PMID:26671710

  2. Problem Order Implications for Learning

    ERIC Educational Resources Information Center

    Li, Nan; Cohen, William W.; Koedinger, Kenneth R.

    2013-01-01

    The order of problems presented to students is an important variable that affects learning effectiveness. Previous studies have shown that solving problems in a blocked order, in which all problems of one type are completed before the student is switched to the next problem type, results in less effective performance than does solving the problems…

  3. Arousal and Reminiscence in Learning From Color and Black/White Audio-Visual Presentations.

    ERIC Educational Resources Information Center

    Farley, Frank H.; Grant, Alfred D.

    Reminiscence, or an increase in retention scores from a short-term to a long-term retention test, has been shown in some previous work to be a significant function of arousal. Previous studies of the effects of color versus black-and-white audiovisual presentations have generally used film or television and have found no facilitating effect of color on…

  4. Audiovisual Script Writing.

    ERIC Educational Resources Information Center

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  5. Lecture Hall and Learning Design: A Survey of Variables, Parameters, Criteria and Interrelationships for Audio-Visual Presentation Systems and Audience Reception.

    ERIC Educational Resources Information Center

    Justin, J. Karl

    Variables and parameters affecting architectural planning and audiovisual systems selection for lecture halls and other learning spaces are surveyed. Interrelationships of factors are discussed, including--(1) design requirements for modern educational techniques as differentiated from cinema, theater or auditorium design, (2) general hall…

  6. Adult Learning Strategies and Approaches (ALSA). Resources for Teachers of Adults. A Handbook of Practical Advice on Audio-Visual Aids and Educational Technology for Tutors and Organisers.

    ERIC Educational Resources Information Center

    Cummins, John; And Others

    This handbook is part of a British series of publications written for part-time tutors, volunteers, organizers, and trainers in the adult continuing education and training sectors. It offers practical advice on audiovisual aids and educational technology for tutors and organizers. The first chapter discusses how one learns. Chapter 2 addresses how…

  7. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  8. Learning one-to-many mapping functions for audio-visual integrated perception

    NASA Astrophysics Data System (ADS)

    Lim, Jung-Hui; Oh, Do-Kwan; Lee, Soo-Young

    2010-04-01

    In noisy environments, human speech perception utilizes visual lip-reading as well as audio phonetic classification. This audio-visual integration may be done by combining the two sensory features at an early stage. Also, top-down attention may integrate the two modalities. For sensory feature fusion, we introduce mapping functions between the audio and visual manifolds. Especially, we present an algorithm to provide a one-to-many mapping function for the video-to-audio mapping. The top-down attention is also presented to integrate both the sensory features and the classification results of both modalities, which is able to explain the McGurk effect. Each classifier is separately implemented by a hidden Markov model (HMM), but the two classifiers are combined at the top level and interact through top-down attention.

  9. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  10. Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds.

    PubMed

    Jesse, Alexandra; Johnson, Elizabeth K

    2016-05-01

    Analyses of caregiver-child communication suggest that an adult tends to highlight objects in a child's visual scene by moving them in a manner that is temporally aligned with the adult's speech productions. Here, we used the looking-while-listening paradigm to examine whether 25-month-olds use audiovisual temporal alignment to disambiguate and learn novel word-referent mappings in a difficult word-learning task. Videos of two equally interesting and animated novel objects were simultaneously presented to children, but the movement of only one of the objects was aligned with an accompanying object-labeling audio track. No social cues (e.g., pointing, eye gaze, touch) were available to the children because the speaker was edited out of the videos. Immediately afterward, toddlers were presented with still images of the two objects and asked to look at one or the other. Toddlers looked reliably longer to the labeled object, demonstrating their acquisition of the novel word-referent mapping. A control condition showed that children's performance was not solely due to the single unambiguous labeling that had occurred at experiment onset. We conclude that the temporal link between a speaker's utterances and the motion they imposed on the referent object helps toddlers to deduce a speaker's intended reference in a difficult word-learning scenario. In combination with our previous work, these findings suggest that intersensory redundancy is a source of information used by language users of all ages. That is, intersensory redundancy is not just a word-learning tool used by young infants. PMID:26765249

  11. Enhanced Multisensory Integration and Motor Reactivation after Active Motor Learning of Audiovisual Associations

    ERIC Educational Resources Information Center

    Butler, Andrew J.; James, Thomas W.; James, Karin Harman

    2011-01-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent…

  12. Audiovisual Interaction

    NASA Astrophysics Data System (ADS)

    Möttönen, Riikka; Sams, Mikko

    Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.

  13. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC nonsense words and nonsense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users: in Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We…

  14. Learning with Hyperlinked Videos--Design Criteria and Efficient Strategies for Using Audiovisual Hypermedia

    ERIC Educational Resources Information Center

    Zahn, Carmen; Barquero, Beatriz; Schwan, Stephan

    2004-01-01

    In this article, we discuss the results of an experiment in which we studied two apparently conflicting classes of design principles for instructional hypervideos: (1) those principles derived from work on multimedia learning that emphasize spatio-temporal contiguity and (2) those originating from work on hypermedia learning that favour…

  15. Effects of Audiovisual Stimuli on Learning through Microcomputer-Based Class Presentation.

    ERIC Educational Resources Information Center

    Hativa, Nira; Reingold, Aliza

    1987-01-01

    Effectiveness of two versions of computer software used as an electronic blackboard to present geometric concepts to ninth grade students was compared. The experimental version incorporated color, animation, and nonverbal sounds as stimuli; the no-stimulus version was monochrome. Both immediate and delayed learning were significantly better for…

  16. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. PMID:27003546

  17. Virtual Attendance: Analysis of an Audiovisual over IP System for Distance Learning in the Spanish Open University (UNED)

    ERIC Educational Resources Information Center

    Vazquez-Cano, Esteban; Fombona, Javier; Fernandez, Alberto

    2013-01-01

    This article analyzes a system of virtual attendance, called "AVIP" (AudioVisual over Internet Protocol), at the Spanish Open University (UNED) in Spain. UNED, the largest open university in Europe, is the pioneer in distance education in Spain. It currently has more than 300,000 students, 1,300 teachers, and 6,000 tutors all over the…

  18. Manifold Learning by Preserving Distance Orders

    PubMed Central

    Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-01-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis. PMID:25045195
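
The abstract above generalizes the MDS cost function to preserve distance *orders* rather than distances themselves. As an illustrative sketch only (not the authors' constrained-optimization algorithm), the snippet below runs classical MDS on synthetic data and then measures how many pairwise distance orders the low-dimensional embedding violates; the `order_violations` metric is an assumed simplification of the paper's "percentage of violated distance orders".

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points from a pairwise-distance matrix via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix
    vals, vecs = np.linalg.eigh(B)               # eigh returns ascending order
    idx = np.argsort(vals)[::-1][:dim]           # keep top `dim` eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

def order_violations(D_high, D_low):
    """Fraction of pairwise-distance orderings flipped by the embedding."""
    iu = np.triu_indices_from(D_high, k=1)
    h, l = D_high[iu], D_low[iu]
    hh = np.sign(h[:, None] - h[None, :])        # order of every distance pair
    ll = np.sign(l[:, None] - l[None, :])
    return np.mean(hh * ll < 0)                  # opposite signs = violation

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                     # synthetic high-dim data
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, dim=2)
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print("violated distance orders:", order_violations(D, D2))
```

The paper's contribution is to push this violation count down by learning a non-decreasing (RBF-approximated) relation between high- and low-dimensional distances instead of the linear one implicit above.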

  19. The effect of task order predictability in audio-visual dual task performance: Just a central capacity limitation?

    PubMed Central

    Töllner, Thomas; Strobach, Tilo; Schubert, Torsten; Müller, Hermann J.

    2012-01-01

    In classic Psychological-Refractory-Period (PRP) dual-task paradigms, decreasing stimulus onset asynchronies (SOA) between the two tasks typically lead to increasing reaction times (RT) to the second task and, when task order is non-predictable, to prolonged RTs to the first task. Traditionally, both RT effects have been advocated to originate exclusively from the dynamics of a central bottleneck. By focusing on two specific electroencephalographic brain responses directly linkable to perceptual or motor processing stages, respectively, the present study aimed to provide a more detailed picture as to the origin(s) of these behavioral PRP effects. In particular, we employed 2-alternative forced-choice (2AFC) tasks requiring participants to identify the pitch of a tone (high versus low) in the auditory, and the orientation of a target object (vertical versus horizontal) in the visual, task, with task order being either predictable or non-predictable. Our findings show that task order predictability (TOP) and inter-task SOA interactively determine the speed of (visual) perceptual processes (as indexed by the PCN timing) for both the first and the second task. By contrast, motor response execution times (as indexed by the LRP timing) are influenced independently by TOP for the first, and SOA for the second, task. Overall, this set of findings complements classical as well as advanced versions of the central bottleneck model by providing electrophysiological evidence for modulations of both perceptual and motor processing dynamics that, in summation with central capacity limitations, give rise to the behavioral PRP outcome. PMID:22973208

  20. Use of Audiovisual Texts in University Education Process

    ERIC Educational Resources Information Center

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  1. Audiovisual Mass Media and Education. TTW 27/28.

    ERIC Educational Resources Information Center

    van Stapele, Peter, Ed.; Sutton, Clifford C., Ed.

    1989-01-01

    The 15 articles in this special issue focus on learning about the audiovisual mass media and education, especially television and film, in relation to various pedagogical and didactical questions. Individual articles are: (1) "Audiovisual Mass Media for Education in Pakistan: Problems and Prospects" (Ahmed Noor Kahn); (2) "The Role of the…

  2. Evaluating audio-visual and computer programs for classroom use.

    PubMed

    Van Ort, S

    1989-01-01

    Appropriate faculty decisions regarding adoption of audiovisual and computer programs are critical to the classroom use of these learning materials. The author describes the decision-making process in one college of nursing and the adaptation of an evaluation tool for use by faculty in reviewing audiovisual and computer programs. PMID:2467237

  3. Principles of Managing Audiovisual Materials and Equipment. Second Revised Edition.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Biomedical Library.

    This manual offers information on a wide variety of health-related audiovisual materials (AVs) in many formats: video, motion picture, slide, filmstrip, audiocassette, transparencies, microfilm, and computer assisted instruction. Intended for individuals who are just learning about audiovisual materials and equipment management, the manual covers…

  4. Application and Operation of Audiovisual Equipment in Education.

    ERIC Educational Resources Information Center

    Pula, Fred John

Interest in audiovisual aids in education has been increased by the shortage of classrooms and good teachers and by the modern predisposition toward learning by visual concepts. Effective utilization of audiovisual materials and equipment depends, most importantly, on adequate preparation of the teacher in operating equipment and in coordinating…

  5. THE COST OF AUDIOVISUAL INSTRUCTION.

    ERIC Educational Resources Information Center

    1964

A report of a survey on the cost of audiovisual instruction in the nation's public elementary and secondary schools during 1962-63 and 1963-64 was presented. Included were the total expenditures for audiovisual instruction and specific expenditures for audiovisual salaries, audiovisual equipment, and film rentals. Medians were computed for (1) the…

  6. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    ERIC Educational Resources Information Center

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  7. Audio/Visual Ratios in Commercial Filmstrips.

    ERIC Educational Resources Information Center

    Gulliford, Nancy L.

    Developed by the Westinghouse Electric Corporation, Video Audio Compressed (VIDAC) is a compressed time, variable rate, still picture television system. This technology made it possible for a centralized library of audiovisual materials to be transmitted over a television channel in very short periods of time. In order to establish specifications…

  8. Audiovisual integration facilitates monkeys' short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans. PMID:27010716

  9. Utilizing New Audiovisual Resources

    ERIC Educational Resources Information Center

    Miller, Glen

    1975-01-01

    The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids to classroom instruction at the high school level in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)

  10. Selected Mental Health Audiovisuals.

    ERIC Educational Resources Information Center

    National Inst. of Mental Health (DHEW), Rockville, MD.

Presented are approximately 2,300 abstracts on audiovisual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…

  11. Audiovisual Materials in Mathematics.

    ERIC Educational Resources Information Center

    Raab, Joseph A.

This pamphlet lists five thousand current, readily available audiovisual materials in mathematics. These are grouped under eighteen subject areas: Advanced Calculus, Algebra, Arithmetic, Business, Calculus, Charts, Computers, Geometry, Limits, Logarithms, Logic, Number Theory, Probability, Solid Geometry, Slide Rule, Statistics, Topology, and…

  12. Audiovisual Techniques Handbook.

    ERIC Educational Resources Information Center

    Hess, Darrel

    This handbook focuses on the use of 35mm slides for audiovisual presentations, particularly as an alternative to the more expensive and harder to produce medium of video. Its point of reference is creating slide shows about experiences in the Peace Corps; however, recommendations offered about both basic production procedures and enhancements are…

  13. AUDIOVISUAL SERVICES CATALOG.

    ERIC Educational Resources Information Center

    Stockton Unified School District, CA.

A catalog has been prepared to help teachers select audiovisual materials which might be helpful in elementary classrooms. Included are filmstrips, slides, records, study prints, films, tape recordings, and science equipment. Teachers are reminded that they are not limited to use of the suggested materials. Appropriate grade levels have been…

  14. Promoting Higher Order Thinking Skills Using Inquiry-Based Learning

    ERIC Educational Resources Information Center

    Madhuri, G. V.; Kantamreddi, V. S. S. N; Prakash Goteti, L. N. S.

    2012-01-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in…

  15. Rapid, generalized adaptation to asynchronous audiovisual speech

    PubMed Central

    Van der Burg, Erik; Goodbourn, Patrick T.

    2015-01-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790

  16. Rapid, generalized adaptation to asynchronous audiovisual speech.

    PubMed

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790

  17. Learned audio-visual cross-modal associations in observed piano playing activate the left planum temporale. An fMRI study.

    PubMed

    Hasegawa, Takehiro; Matsuki, Ken-Ichi; Ueno, Takashi; Maeda, Yasuhiro; Matsue, Yoshihiko; Konishi, Yukuo; Sadato, Norihiro

    2004-08-01

    Lip reading is known to activate the planum temporale (PT), a brain region which may integrate visual and auditory information. To find out whether other types of learned audio-visual integration occur in the PT, we investigated "key-touch reading" using functional magnetic resonance imaging (fMRI). As well-trained pianists are able to identify pieces of music by watching the key-touching movements of the hands, we hypothesised that the visual information of observed sequential finger movements is transformed into the auditory modality during "key-touch reading" as is the case during lip reading. We therefore predicted activation of the PT during key-touch reading. Twenty-six healthy right-handed volunteers were recruited for fMRI. Of these, 7 subjects had never experienced piano training (naïve group), 10 had a little experience of piano playing (less trained group), and the remaining 9 had been trained for more than 8 years (well trained group). During task periods, subjects were required to view the bimanual hand movements of a piano player making key presses. During control periods, subjects viewed the same hands sliding from side to side without tapping movements of the fingers. No sound was provided. Sequences of key presses during task periods consisted of pieces of familiar music, unfamiliar music, or random sequences. Well-trained subjects were able to identify the familiar music, whereas less-trained subjects were not. The left PT of the well-trained subjects was equally activated by observation of familiar music, unfamiliar music, and random sequences. The naïve and less trained groups did not show activation of the left PT during any of the tasks. These results suggest that PT activation reflects a learned process. As the activation was elicited by viewing key pressing actions regardless of whether they constituted a piece of music, the PT may be involved in processes that occur prior to the identification of a piece of music, that is, mapping the

  18. Variable Affix Order: Grammar and Learning

    ERIC Educational Resources Information Center

    Ryan, Kevin M.

    2010-01-01

    While affix ordering often reflects general syntactic or semantic principles, it can also be arbitrary or variable. This article develops a theory of morpheme ordering based on local morphotactic restrictions encoded as weighted bigram constraints. I examine the formal properties of morphotactic systems, including arbitrariness, nontransitivity,…
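
The weighted-bigram idea in the abstract above can be sketched concretely: each adjacent morpheme pair carries a weight, and a candidate order is scored by summing the weights of its bigrams, with the highest-scoring permutation(s) preferred. The morphemes and weights below are invented for illustration and are not data from the article.

```python
from itertools import permutations

# Hypothetical bigram weights: weight of placing morpheme a immediately
# before morpheme b. Competing positive weights model variable ordering.
BIGRAM_W = {
    ("root", "caus"): 2.0,
    ("caus", "pass"): 1.5,
    ("pass", "caus"): 1.2,         # competing order -> attested variability
    ("root", "pass"): 0.5,
    ("caus", "root"): -3.0,        # prefixal orders penalized in this toy grammar
    ("pass", "root"): -3.0,
}

def harmony(order):
    """Summed bigram weights of an ordering (unlisted bigrams score 0)."""
    return sum(BIGRAM_W.get(pair, 0.0) for pair in zip(order, order[1:]))

def best_orders(morphemes):
    """All permutations tied for the maximal harmony score."""
    scored = [(harmony(p), p) for p in permutations(morphemes)]
    top = max(s for s, _ in scored)
    return [p for s, p in scored if s == top]

print(best_orders(["root", "caus", "pass"]))
```

Because the constraints are local (bigram-level), nontransitive preferences like a>b, b>c, c>a can be encoded directly, which is one of the formal properties the article examines.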

  19. Time and Order Effects on Causal Learning

    ERIC Educational Resources Information Center

    Alvarado, Angelica; Jara, Elvia; Vila, Javier; Rosas, Juan M.

    2006-01-01

    Five experiments were conducted to explore trial order and retention interval effects upon causal predictive judgments. Experiment 1 found that participants show a strong effect of trial order when a stimulus was sequentially paired with two different outcomes compared to a condition where both outcomes were presented intermixed. Experiment 2…

  20. Researching Embodied Learning by Using Videographic Participation for Data Collection and Audiovisual Narratives for Dissemination--Illustrated by the Encounter between Two Acrobats

    ERIC Educational Resources Information Center

    Degerbøl, Stine; Nielsen, Charlotte Svendler

    2015-01-01

    The article concerns doing ethnography in education and it reflects upon using "videographic participation" for data collection and the concept of "audiovisual narratives" for dissemination, which is inspired by the idea of developing academic video. The article takes a narrative approach to qualitative research and presents a…

  1. Evaluating an Experimental Audio-Visual Module Programmed to Teach a Basic Anatomical and Physiological System.

    ERIC Educational Resources Information Center

    Federico, Pat-Anthony

    The learning efficiency and effectiveness of teaching an anatomical and physiological system to Air Force enlisted trainees utilizing an experimental audiovisual programed module was compared to that of a commercial linear programed text. It was demonstrated that the audiovisual programed approach to training was more efficient than and equally as…

  2. The World of Audiovisual Education: Its Impact on Libraries and Librarians.

    ERIC Educational Resources Information Center

    Ely, Donald P.

    As the field of educational technology developed, the field of library science became increasingly concerned about audiovisual media. School libraries have made significant developments in integrating audiovisual media into traditional programs, and are becoming learning resource centers with a variety of media; academic and public libraries are…

  3. Rapid temporal recalibration is unique to audiovisual stimuli.

    PubMed

    Van der Burg, Erik; Orchard-Mills, Emily; Alais, David

    2015-01-01

    Following prolonged exposure to asynchronous multisensory signals, the brain adapts to reduce the perceived asynchrony. Here, in three separate experiments, participants performed a synchrony judgment task on audiovisual, audiotactile or visuotactile stimuli and we used inter-trial analyses to examine whether temporal recalibration occurs rapidly on the basis of a single asynchronous trial. Even though all combinations used the same subjects, task and design, temporal recalibration occurred for audiovisual stimuli (i.e., the point of subjective simultaneity depended on the preceding trial's modality order), but none occurred when the same auditory or visual event was combined with a tactile event. Contrary to findings from prolonged adaptation studies showing recalibration for all three combinations, we show that rapid, inter-trial recalibration is unique to audiovisual stimuli. We conclude that recalibration occurs at two different timescales for audiovisual stimuli (fast and slow), but only on a slow timescale for audiotactile and visuotactile stimuli. PMID:25200176
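
The inter-trial analysis described above can be sketched in miniature: split synchrony-judgment trials by the *preceding* trial's modality order and compare the asynchronies judged synchronous in each bucket; a shift between buckets indicates rapid recalibration. The trial data and the simple mean-SOA summary below are fabricated for demonstration and are not the authors' pipeline.

```python
# Each trial: (soa_ms, judged_synchronous); negative SOA = audio-leading.
trials = [
    (-80, False), (40, True), (80, True), (-40, True),
    (120, False), (40, True), (-80, False), (0, True),
    (80, True), (-40, False), (40, True), (-120, False),
]

def pss_by_previous_order(trials):
    """Mean SOA of 'synchronous' responses, split by the previous trial's order."""
    buckets = {"audio_led": [], "vision_led": []}
    for prev, (soa, synced) in zip(trials, trials[1:]):
        key = "audio_led" if prev[0] < 0 else "vision_led"
        if synced:
            buckets[key].append(soa)
    return {k: sum(v) / len(v) if v else None for k, v in buckets.items()}

print(pss_by_previous_order(trials))
```

In the actual studies the point of subjective simultaneity is estimated by fitting a psychometric function per bucket; the mean-SOA summary here is only a stand-in for that fit.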

  4. [Cultural heritage and audiovisual creation in the Arab world].

    PubMed

    Aziza, M

    1979-01-01

Audiovisual creation in Arab countries faces problems arising from the use of imported techniques to reconstitute or transform local realities. Arab audiovisual producers see this technique as an easy and efficient way to reproduce reality or conventionally construct an artificial universe. Sometimes audiovisual media exert an absolute power of suggestion; sometimes these techniques are met with total incredulity. From a diffusion point of view, audiovisual media in the Arab world have a very specific status. The effects of television, studied by Western researchers in their own cultural environment, are not reproduced in the same fashion in the Arab cultural world. In the Arab world, the word very often still competes successfully with the picture, even after the appearance and adoption of mass media. Finally, one must mention a very interesting situation resulting from a linguistic phenomenon specific to the Arab world: the existence of two communication languages, one noble but little used, the other dialectal but popular. In all Arab countries the news, the most political of programs, is broadcast in the classical language, despite the danger of distorted meaning for the least educated public. The reason is probably that the classical Arabic language enjoys a sacred status. Arab audiovisual production faces several obstacles to its full and autonomous realization. The contribution of Arab audiovisual producers is relatively modest compared to some other areas of cultural creation. Arab film-making increasingly seeks the cooperation of contemporary writers. Contemporary literature is a considerable source for the renewal of Arab audiovisual expression. A relationship between film and popular cultural heritage could very usefully be established in both directions. Audiovisual media should treat popular cultural manifestations as a global social fact on several significant levels. PMID:12261391

  5. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  6. School Building Design and Audio-Visual Resources.

    ERIC Educational Resources Information Center

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  7. Improving physician practice efficiency by learning lab test ordering pattern.

    PubMed

    Cai, Peng; Cao, Feng; Ni, Yuan; Shen, Weijia; Zheng, Tao

    2013-01-01

The system of electronic medical records (EMR) has been widely used in physician practice. In China, physicians are under time pressure to provide care to many patients in a short period. Improving practice efficiency is a promising way to mitigate this predicament. During an encounter, ordering lab tests is one of the most frequent actions in an EMR system. In this paper, our motivation is to save physicians' time by providing a lab test ordering list to facilitate physician practice. To this end, we developed a weight-based multi-label classification framework that learns to order lab tests for the current encounter from historical EMR data. In particular, we propose to learn physician-specific lab test ordering patterns, as different physicians may behave differently with the same population. Experimental results on a real data set demonstrate that physician-specific models can outperform the baseline. PMID:23920762
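
The physician-specific idea above can be illustrated with a deliberately simple stand-in: rank lab tests by how often each physician has ordered them for similar past encounters. A weight-based multi-label classifier, as in the paper, would replace the raw counts; the encounter data, physician IDs, and test names below are all invented.

```python
from collections import Counter, defaultdict

# (physician_id, diagnosis, ordered_tests) -- fabricated history
history = [
    ("dr_a", "diabetes", ["hba1c", "glucose", "lipids"]),
    ("dr_a", "diabetes", ["hba1c", "glucose"]),
    ("dr_a", "anemia",   ["cbc", "ferritin"]),
    ("dr_b", "diabetes", ["glucose", "creatinine"]),
]

def build_model(history):
    """Count, per (physician, diagnosis), how often each test was ordered."""
    model = defaultdict(Counter)
    for doc, dx, tests in history:
        model[(doc, dx)].update(tests)
    return model

def suggest(model, doc, dx, k=3):
    """Top-k tests for this physician and diagnosis (empty if unseen)."""
    return [t for t, _ in model[(doc, dx)].most_common(k)]

model = build_model(history)
print(suggest(model, "dr_a", "diabetes"))   # physician-specific list
```

Note that `dr_a` and `dr_b` get different suggestions for the same diagnosis, which is exactly the physician-specific behavior the paper argues outperforms a pooled baseline.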

  8. Spatial orienting in complex audiovisual environments.

    PubMed

    Nardo, Davide; Santangelo, Valerio; Macaluso, Emiliano

    2014-04-01

    Previous studies on crossmodal spatial orienting typically used simple and stereotyped stimuli in the absence of any meaningful context. This study combined computational models, behavioural measures and functional magnetic resonance imaging to investigate audiovisual spatial interactions in naturalistic settings. We created short videos portraying everyday life situations that included a lateralised visual event and a co-occurring sound, either on the same or on the opposite side of space. Subjects viewed the videos with or without eye-movements allowed (overt or covert orienting). For each video, visual and auditory saliency maps were used to index the strength of stimulus-driven signals, and eye-movements were used as a measure of the efficacy of the audiovisual events for spatial orienting. Results showed that visual salience modulated activity in higher-order visual areas, whereas auditory salience modulated activity in the superior temporal cortex. Auditory salience modulated activity also in the posterior parietal cortex, but only when audiovisual stimuli occurred on the same side of space (multisensory spatial congruence). Orienting efficacy affected activity in the visual cortex, within the same regions modulated by visual salience. These patterns of activation were comparable in overt and covert orienting conditions. Our results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus-driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality. PMID:23616340

  9. Promoting higher order thinking skills using inquiry-based learning

    NASA Astrophysics Data System (ADS)

    Madhuri, G. V.; S. S. N Kantamreddi, V.; Goteti, L. N. S. Prakash

    2012-05-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in chemistry. Laboratory exercises are designed based on Bloom's taxonomy and a just-in-time facilitation approach is used. A pre-laboratory discussion outlining the theory of the experiment and its relevance is carried out to enable the students to analyse real-life problems. The performance of the students is assessed based on their ability to perform the experiment, design new experiments and correlate practical utility of the course module with real life. The novelty of the present approach lies in the fact that the learning outcomes of the existing experiments are achieved through establishing a relationship with real-world problems.

  10. Second-Order Conditioning of Human Causal Learning

    ERIC Educational Resources Information Center

    Jara, Elvia; Vila, Javier; Maldonado, Antonio

    2006-01-01

    This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…

  11. Multiple-Try Feedback and Higher-Order Learning Outcomes

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Koul, Ravinder

    2005-01-01

    Although feedback is an important component of computer-based instruction (CBI), the effects of feedback on higher-order learning outcomes are not well understood. Several meta-analyses provide two rules of thumb: any feedback is better than no feedback and feedback with more information is better than feedback with less information. …

  12. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    PubMed

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. PMID:25269620

  13. [Second-order retrospective revaluation in human contingency learning].

    PubMed

    Numata, Keitaro; Shimazaki, Tsuneo

    2009-04-01

    We demonstrated second-order retrospective revaluation with three cues (T1, T2, and C) and an outcome in human contingency learning. The experimental task, a PC-controlled video game in which participants observed the relations between firing missiles and the destruction of a tank, consisted of three training phases and two rating phases. Groups C+ and C- shared the same first two training phases, CT+ (cues C and T with an outcome) and T1T2+, followed by C+ training for Group C+ or C- training for Group C-. In the rating phases, judgments of the predictive value of T2 for the outcome were clearly raised by C+ training (second-order unovershadowing) and lowered by C- training (second-order backward blocking). The results for Groups RC+ and RC-, in which the order of the first two training phases was interchanged, also showed second-order unovershadowing and second-order backward blocking. These results, which show the robustness of second-order retrospective revaluation against the order of the first training phases, can be explained by the extended comparator hypothesis and the probabilistic contrast model, but cannot be explained by traditional associative learning models. PMID:19489431

  14. Predicting perceptual learning from higher-order cortical processing.

    PubMed

    Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan

    2016-01-01

    Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could instead be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84 ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning, regardless of whether the behavioral improvement was location specific. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change in the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can involve changes mainly in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making. PMID:26391126

  15. Machine learning using a higher order correlation network

    SciTech Connect

    Lee, Y.C.; Doolen, G.; Chen, H.H.; Sun, G.Z.; Maxwell, T.; Lee, H.Y.

    1986-01-01

    A high-order correlation tensor formalism for neural networks is described. The model can simulate autoassociative, heteroassociative, and multiassociative memory. For the autoassociative model, simulation results show a drastic increase in memory capacity and speed over standard Hopfield-like correlation matrix methods. The possibility of using multiassociative memory for a learning universal inference network is also discussed. 9 refs., 5 figs.
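
    One reading of the higher-order correlation formalism is a Hopfield-style memory whose weights form a third-order tensor rather than a matrix. The sketch below implements that generic idea (store patterns by a triple outer-product rule, recall by a sign update over pairwise products); the pattern sizes are illustrative and this is not claimed to be the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store random +/-1 patterns in a second-order correlation tensor
# T[i, j, k] = sum over patterns p of x_i * x_j * x_k, the higher-order
# generalization of the Hopfield outer-product storage rule.
n, n_patterns = 32, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))
T = np.einsum("pi,pj,pk->ijk", patterns, patterns, patterns)

def recall(state, steps=5):
    """Synchronous updates: s_i <- sign(sum_jk T[i,j,k] * s_j * s_k)."""
    s = state.astype(float).copy()
    for _ in range(steps):
        s = np.sign(np.einsum("ijk,j,k->i", T, s, s))
    return s.astype(int)

# Corrupt a stored pattern and check that the network restores it.
noisy = patterns[0].copy()
noisy[:2] *= -1  # flip two bits
print(np.array_equal(recall(noisy), patterns[0]))
```

    Because the signal term for a stored pattern scales as n squared while crosstalk stays near n, such higher-order networks tolerate many more stored patterns than the matrix version, consistent with the capacity increase the abstract reports.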

  16. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  17. Audio-Visual Aids: Historians in Blunderland.

    ERIC Educational Resources Information Center

    Decarie, Graeme

    1988-01-01

    A history professor relates his experiences producing and using audio-visual material and warns teachers not to rely on audio-visual aids for classroom presentations. Includes examples of popular audio-visual aids on Canada that communicate unintended, inaccurate, or unclear ideas. Urges teachers to exercise caution in the selection and use of…

  18. Towards Postmodernist Television: INA's Audiovisual Magazine Programmes.

    ERIC Educational Resources Information Center

    Boyd-Bowman, Susan

    Over the last 10 years, French television's Institute of Audiovisual Communication (INA) has shifted from modernist to post-modernist practice in broadcasting in a series of innovative audiovisual magazine programs about communication, and in a series of longer "compilation" documentaries. The first of INA's audiovisual magazines, "Hieroglyphes,"…

  19. Perceived synchrony for realistic and dynamic audiovisual events

    PubMed Central

    Eg, Ragnhild; Behne, Dawn M.

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli. PMID:26082738

  20. Audio-Visual Teaching Machines.

    ERIC Educational Resources Information Center

    Dorsett, Loyd G.

    An audiovisual teaching machine (AVTM) presents programed audio and visual material simultaneously to a student and accepts his response. If his response is correct, the machine proceeds with the lesson; if it is incorrect, the machine so indicates and permits another choice (linear) or automatically presents supplementary material (branching).…

  1. Audio-Visual Materials Catalog.

    ERIC Educational Resources Information Center

    Anderson (M.D.) Hospital and Tumor Inst., Houston, TX.

    This catalog lists 27 audiovisual programs produced by the Department of Medical Communications of the University of Texas M. D. Anderson Hospital and Tumor Institute for public distribution. Video tapes, 16 mm. motion pictures and slide/audio series are presented dealing mostly with cancer and related subjects. The programs are intended for…

  2. Audiovisual Media for Computer Education.

    ERIC Educational Resources Information Center

    Van Der Aa, H. J., Ed.

    The result of an international survey, this catalog lists over 450 films dealing with computing methods and automation and is intended for those who wish to use audiovisual displays as a means of instruction of computer education. The catalog gives the film's title, running time, and producer and tells whether the film is color or black-and-white,…

  3. Audio-Visual Resource Guide.

    ERIC Educational Resources Information Center

    Abrams, Nick, Ed.

    The National Council of Churches has assembled this extensive audiovisual guide for the benefit of schools, churches and community organizations. The guide is categorized into 14 distinct conceptual areas ranging from "God and the Church" to science, the arts, race relations, and national/international critical issues. Though assembled under the…

  4. A Basic Reference Shelf on Audio-Visual Instruction. A Series One Paper from ERIC at Stanford.

    ERIC Educational Resources Information Center

    Dale, Edgar; Trzebiatowski, Gregory

    Topics in this annotated bibliography on audiovisual instruction include the history of instructional technology, teacher-training, equipment operation, administration of media programs, production of instructional materials, language laboratories, instructional television, programed instruction, communication theory, learning theory, and…

  5. Learn locally, think globally. Exemplar variability supports higher-order generalization and word learning.

    PubMed

    Perry, Lynn K; Samuelson, Larissa K; Malloy, Lisa M; Schiffer, Ryan N

    2010-12-01

    Research suggests that variability of exemplars supports successful object categorization; however, the scope of variability's support at the level of higher-order generalization remains unexplored. Using a longitudinal study, we examined the role of exemplar variability in first- and second-order generalization in the context of nominal-category learning at an early age. Sixteen 18-month-old children were taught 12 categories. Half of the children were taught with sets of highly similar exemplars; the other half were taught with sets of dissimilar, variable exemplars. Participants' learning and generalization of trained labels and their development of more general word-learning biases were tested. All children were found to have learned labels for trained exemplars, but children trained with variable exemplars generalized to novel exemplars of these categories, developed a discriminating word-learning bias generalizing labels of novel solid objects by shape and labels of nonsolid objects by material, and accelerated in vocabulary acquisition. These findings demonstrate that object variability leads to better abstraction of individual and global category organization, which increases learning outside the laboratory. PMID:21106892

  6. A Distance Learning Model for Teaching Higher Order Thinking

    ERIC Educational Resources Information Center

    Notar, Charles E.; Wilson, Janell D.; Montgomery, Mary K.

    2005-01-01

    A teaching model for distance learning (DL) requires a system (a technology) and process (a way of linking resources) that makes distance learning no different than learning in the traditional classroom. The process must support a design that provides for learning, ensures maximum transfer, and is student-centered. The process must provide a…

  7. Assessment of Cognitive Load in Multimedia Learning with Dual-Task Methodology: Auditory Load and Modality Effects

    ERIC Educational Resources Information Center

    Brunken, Roland; Plass, Jan L.; Leutner, Detlev

    2004-01-01

    Using cognitive load theory and cognitive theory of multimedia learning as a framework, we conducted two within-subject experiments with 10 participants each in order to investigate (1) if the audiovisual presentation of verbal and pictorial learning materials would lead to a higher demand on phonological cognitive capacities than the visual-only…

  8. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm

    ERIC Educational Resources Information Center

    Uys, P. G.

    2009-01-01

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  9. Order of Presentation Effects in Learning Color Categories

    ERIC Educational Resources Information Center

    Sandhofer, Catherine M.; Doumas, Leonidas A. A.

    2008-01-01

    Two studies, an experimental category learning task and a computational simulation, examined how sequencing training instances to maximize comparison and memory affects category learning. In Study 1, 2-year-old children learned color categories with three training conditions that varied in how categories were distributed throughout training and…

  10. Improved Computer-Aided Instruction by the Use of Interfaced Random-Access Audio-Visual Equipment. Report on Research Project No. P/24/1.

    ERIC Educational Resources Information Center

    Bryce, C. F. A.; Stewart, A. M.

    A brief review of the characteristics of computer assisted instruction and the attributes of audiovisual media introduces this report on a project designed to improve the effectiveness of computer assisted learning through the incorporation of audiovisual materials. A discussion of the implications of research findings on the design and layout of…

  11. Encouraging Higher-Order Thinking in General Chemistry by Scaffolding Student Learning Using Marzano's Taxonomy

    ERIC Educational Resources Information Center

    Toledo, Santiago; Dubas, Justin M.

    2016-01-01

    An emphasis on higher-order thinking within the curriculum has been a subject of interest in the chemical and STEM literature due to its ability to promote meaningful, transferable learning in students. The systematic use of learning taxonomies could be a practical way to scaffold student learning in order to achieve this goal. This work proposes…

  12. In Focus: Alcohol and Alcoholism Audiovisual Guide.

    ERIC Educational Resources Information Center

    National Clearinghouse for Alcohol Information (DHHS), Rockville, MD.

    This guide reviews audiovisual materials currently available on alcohol abuse and alcoholism. An alphabetical index of audiovisual materials is followed by synopses of the indexed materials. Information about the intended audience, price, rental fee, and distributor is included. This guide also provides a list of publications related to media…

  13. Audio-Visual Aids in Universities

    ERIC Educational Resources Information Center

    Douglas, Jackie

    1970-01-01

    A report on the proceedings and ideas expressed at a one day seminar on "Audio-Visual Equipment--Its Uses and Applications for Teaching and Research in Universities." The seminar was organized by England's National Committee for Audio-Visual Aids in Education in conjunction with the British Universities Film Council. (LS)

  14. Solar Energy Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC. Office of Policy Development and Research.

    This directory presents an annotated bibliography of non-print information resources dealing with solar energy. The document is divided by type of audio-visual medium, including: (1) Films, (2) Slides and Filmstrips, and (3) Videotapes. A fourth section provides addresses and telephone numbers of audiovisual aids sources, and lists the page…

  15. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influence audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a standardized numerical scale, the degree of comfort they would feel in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. Overall, the experiment showed a significant difference between the investigated conditions, but not for all samples. Comparing conditions (a) and (b), adding visual information significantly improved comfort assessment in only three out of seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the sample. Finally, it seems that people differentiate audio-visual representations of a given place based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. PMID:25863510

  16. Catalog of Audiovisual Materials Related to Rehabilitation.

    ERIC Educational Resources Information Center

    Mann, Joe, Ed.; Henderson, Jim, Ed.

    An annotated listing of a variety of audiovisual formats on content related to the social-rehabilitation process is provided. The materials in the listing were selected from a collection of over 200 audiovisual catalogs. The major portion of the materials has not been screened. The materials are classified alphabetically by the following subject…

  17. Learning in Order To Teach in Chicxulub Puerto, Yucatan, Mexico.

    ERIC Educational Resources Information Center

    Wilber, Cynthia J.

    2000-01-01

    Describes a community-based computer education program for the young people (and adults) of Chicxulub Puerto, a small fishing village in Yucatan, Mexico. Notes the children learn Maya, Spanish, and English in the context of learning computer and telecommunication skills. Concludes that access to the Internet has made a profound difference in a…

  18. Audio-visual gender recognition

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Xu, Xun; Huang, Thomas S.

    2007-11-01

    Combining different modalities for pattern recognition is a very promising field. Humans routinely fuse information from different modalities to recognize objects and perform inference. Audio-visual gender recognition is one of the most common tasks in human social communication: people can identify gender from facial appearance, from speech, and from body gait. Human gender recognition is thus inherently a multimodal data acquisition and processing procedure. However, computational multimodal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multimodal gender recognition and to explore the improvement gained by combining modalities.
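
    Speech and face information can be fused at the feature level by concatenating per-modality feature vectors before classification. The toy sketch below uses made-up two-dimensional features and a nearest-centroid rule purely to illustrate the mechanics; it is not the paper's method, and real systems would use learned acoustic and facial features with a trained classifier.

```python
import numpy as np

def fuse(audio_feat, visual_feat):
    """Feature-level (early) fusion: concatenate the two modality vectors."""
    return np.concatenate([audio_feat, visual_feat])

# Toy class centroids in the fused feature space (illustrative numbers only,
# not real speech or face features).
centroids = {
    "female": fuse(np.array([1.0, 0.0]), np.array([0.9, 0.1])),
    "male": fuse(np.array([0.0, 1.0]), np.array([0.1, 0.9])),
}

def classify(audio_feat, visual_feat):
    """Assign the fused sample to the nearest class centroid."""
    x = fuse(audio_feat, visual_feat)
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(classify(np.array([0.8, 0.2]), np.array([0.7, 0.2])))  # prints "female"
```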

  19. Multi-strategy learning of search control for partial-order planning

    SciTech Connect

    Estlin, T.A.; Mooney, R.J.

    1996-12-31

    Most research in planning and learning has involved linear, state-based planners. This paper presents SCOPE, a system for learning search-control rules that improve the performance of a partial-order planner. SCOPE integrates explanation-based and inductive learning techniques to acquire control rules for a partial-order planner. Learned rules are in the form of selection heuristics that help the planner choose between competing plan refinements. Specifically, SCOPE learns domain-specific control rules for a version of the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains.
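
    SCOPE's learned control rules act as selection heuristics over competing plan refinements. The sketch below conveys the general idea with hand-written rules that score hypothetical refinement features; the rules SCOPE actually learns (via explanation-based and inductive methods) and UCPOP's refinement structure are not reproduced here.

```python
# A minimal sketch of search-control rules as selection heuristics: each rule
# tests a feature of a candidate refinement and votes for or against it.
# The rule conditions, weights, and refinement features are illustrative.
CONTROL_RULES = [
    (lambda r: r["resolves_open_goal"], +2),
    (lambda r: r["introduces_threat"], -3),
    (lambda r: r["reuses_existing_step"], +1),
]

def select_refinement(candidates):
    """Pick the candidate refinement with the highest total rule score."""
    def score(r):
        return sum(weight for cond, weight in CONTROL_RULES if cond(r))
    return max(candidates, key=score)

candidates = [
    {"name": "add-new-step", "resolves_open_goal": True,
     "introduces_threat": True, "reuses_existing_step": False},
    {"name": "reuse-step", "resolves_open_goal": True,
     "introduces_threat": False, "reuses_existing_step": True},
]
print(select_refinement(candidates)["name"])  # prints "reuse-step"
```

    In a learned system the conditions and weights would be induced from planning traces rather than written by hand, which is the speedup-learning step the abstract describes.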

  20. The Order of Learning: Essays on the Contemporary University.

    ERIC Educational Resources Information Center

    Shils, Edward

    The 14 essays in this book, written from 1938 through 1995, examine the modern research university, focusing on the relationship of these institutions to government, academic freedom, and the responsibilities of the academic profession. The book contends that the university has been deflected from its essential commitment to teaching, learning,…

  1. Conceptual Similarity Promotes Generalization of Higher Order Fear Learning

    ERIC Educational Resources Information Center

    Dunsmoor, Joseph E.; White, Allison J.; LaBar, Kevin S.

    2011-01-01

    We tested the hypothesis that conceptual similarity promotes generalization of conditioned fear. Using a sensory preconditioning procedure, three groups of subjects learned an association between two cues that were conceptually similar, unrelated, or mismatched. Next, one of the cues was paired with a shock. The other cue was then reintroduced to…

  2. Audiovisual Resources for Teaching Instructional Technology; an Annotated List of Materials.

    ERIC Educational Resources Information Center

    Ely, Donald P., Ed.; Beilby, Albert, Ed.

    The audiovisual resources listed in this catalog cover 10 instructional-technology topics: administration; facilities; instructional design; learning and communication; media equipment; media production; media utilization; research; instructional techniques; and society, education, and technology. Any entry falling into more than one category is…

  3. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    ERIC Educational Resources Information Center

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  4. Audiovisual Materials and Techniques for Teaching Foreign Languages: Recent Trends and Activities.

    ERIC Educational Resources Information Center

    Parks, Carolyn

    Recent experimentation with audio-visual (A-V) materials has provided insight into the language learning process. Researchers and teachers alike have recognized the importance of using A-V materials to achieve goals related to meaningful and relevant communication, retention and recall of language items, non-verbal aspects of communication, and…

  5. Nutrition Education Materials and Audiovisuals for Grades 7 through 12. Special Reference Briefs Series.

    ERIC Educational Resources Information Center

    Evans, Shirley King, Comp.

    This annotated bibliography lists nutrition education materials, audiovisuals, and resources for classroom use. Items listed cover topics such as general nutrition, food preparation, food science, and dietary management. Each item is listed in one or more of the following categories: (1) curriculum/lesson plans; (2) learning activities; (3)…

  6. No rapid audiovisual recalibration in adults on the autism spectrum.

    PubMed

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as those of the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  7. No rapid audiovisual recalibration in adults on the autism spectrum

    PubMed Central

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as those of the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  8. An Audio-Visual Approach to Training

    ERIC Educational Resources Information Center

    Hearnshaw, Trevor

    1977-01-01

    Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)

  9. A checklist for planning and designing audiovisual facilities in health sciences libraries.

    PubMed Central

    Holland, G J; Bischoff, F A; Foxman, D S

    1984-01-01

    Developed by an MLA/HeSCA (Health Sciences Communications Association) joint committee, this checklist is intended to serve as a conceptual framework for planning a new or renovated audiovisual facility in a health sciences library. Emphasis is placed on the philosophical and organizational decisions that must be made about an audiovisual facility before the technical or spatial decisions can be wisely made. Specific standards for facilities or equipment are not included. The first section focuses on health sciences library settings. Ideas presented in the remaining sections could apply to academic learning resource center environments as well. A bibliography relating to all aspects of audiovisual facilities planning and design is included with references to specific sections of the checklist. PMID:6208957

  10. U.S. Government Films, 1971 Supplement; A Catalog of Audiovisual Materials for Rent and Sale by the National Audiovisual Center.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.

    The first edition of the National Audiovisual Center sales catalog (LI 003875) is updated by this supplement. Changes in price and order number as well as deletions from the 1969 edition, are noted in this 1971 version. Purchase and rental information for the sound films and silent filmstrips is provided. The broad subject categories are:…

  11. Perception of Dynamic and Static Audiovisual Sequences in 3- and 4-Month-Old Infants

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2008-01-01

    This study investigated perception of audiovisual sequences in 3- and 4-month-old infants. Infants were habituated to sequences consisting of moving/sounding or looming/sounding objects and then tested for their ability to detect changes in the order of the objects, sounds, or both. Results showed that 3-month-olds perceived the order of 3-element…

  12. The Current Status of Federal Audiovisual Policy and How These Policies Affect the National Audiovisual Center.

    ERIC Educational Resources Information Center

    Flood, R. Kevin

    The National Audiovisual Center was established in 1968 to provide a single organizational unit that serves as a central information point on completed audiovisual materials and a central sales point for the distribution of media that were produced by or for federal agencies. This speech describes the services the center can provide users of…

  13. Patient Education in the Doctor's Office: A Trial of Audiovisual Cassettes

    PubMed Central

    Bryant, William H.

    1980-01-01

    Audiovisual tapes for patient education are now available in Canada. This paper summarizes the utilization of 12 tapes in an urban solo family practice over one year. Evaluation of this learning experience by both the physician and the patient showed positive results, in some cases affecting the outcome of the patient's condition. This patient education aid is intended to provide information only and is not subject to learning analysis.

  14. Beyond Course Availability: An Investigation into Order and Concurrency Effects of Undergraduate Programming Courses on Learning.

    ERIC Educational Resources Information Center

    Urbaczewski, Andrew; Urbaczewski, Lise

    The objective of this study was to find the answers to two primary research questions: "Do students learn programming languages better when they are offered in a particular order, such as 4th generation languages before 3rd generation languages?"; and "Do students learn programming languages better when they are taken in separate semesters as…

  15. No Solid Empirical Evidence for the SOLID (Serial Order Learning Impairment) Hypothesis of Dyslexia

    ERIC Educational Resources Information Center

    Staels, Eva; Van den Broeck, Wim

    2015-01-01

    This article reports on 2 studies that attempted to replicate the findings of a study by Szmalec, Loncke, Page, and Duyck (2011) on Hebb repetition learning in dyslexic individuals, from which these authors concluded that dyslexics suffer from a deficit in long-term learning of serial order information. In 2 experiments, 1 on adolescents (N = 59)…

  16. Strategic Learning in Youth with Traumatic Brain Injury: Evidence for Stall in Higher-Order Cognition

    ERIC Educational Resources Information Center

    Gamino, Jacquelyn F.; Chapman, Sandra B.; Cook, Lori G.

    2009-01-01

    Little is known about strategic learning ability in preteens and adolescents with traumatic brain injury (TBI). Strategic learning is the ability to combine and synthesize details to form abstracted gist-based meanings, a higher-order cognitive skill associated with frontal lobe functions and higher classroom performance. Summarization tasks were…

  17. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  18. Multilabel image classification via high-order label correlation driven active learning.

    PubMed

    Zhang, Bang; Wang, Yang; Chen, Fang

    2014-03-01

    Supervised machine learning techniques have been applied to multilabel image classification problems with tremendous success. Despite disparate learning mechanisms, their performances heavily rely on the quality of training images. However, the acquisition of training images requires significant efforts from human annotators. This hinders the applications of supervised learning techniques to large scale problems. In this paper, we propose a high-order label correlation driven active learning (HoAL) approach that allows the iterative learning algorithm itself to select the informative example-label pairs from which it learns so as to learn an accurate classifier with less annotation effort. Four crucial issues are considered by the proposed HoAL: 1) unlike binary cases, the selection granularity for multilabel active learning needs to be refined from example to example-label pair; 2) different labels are seldom independent, and label correlations provide critical information for efficient learning; 3) in addition to pair-wise label correlations, high-order label correlations are also informative for multilabel active learning; and 4) since the number of label combinations increases exponentially with respect to the number of labels, an efficient mining method is required to discover informative label correlations. The proposed approach is tested on public data sets, and the empirical results demonstrate its effectiveness. PMID:24723538
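
    A minimal sketch of example-label-pair selection, the first issue named above. This is not the HoAL algorithm itself; the probabilities, co-occurrence matrix, and weighting are all hypothetical, and only pairwise correlations are used for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_examples, n_labels = 6, 4

# Hypothetical classifier probabilities P(label = 1 | example) for an
# unlabeled pool; in practice these come from the current model.
probs = rng.random((n_examples, n_labels))

# Hypothetical empirical label co-occurrence from the labeled set;
# higher values mean two labels tend to appear together.
cooc = np.array([[1.0, 0.8, 0.1, 0.0],
                 [0.8, 1.0, 0.2, 0.1],
                 [0.1, 0.2, 1.0, 0.7],
                 [0.0, 0.1, 0.7, 1.0]])

# Uncertainty of each example-label pair: closeness of P to the 0.5
# decision boundary (in [0, 0.5], higher = more uncertain).
uncertainty = 0.5 - np.abs(probs - 0.5)

# Correlation-driven bonus: querying a pair is worth more if strongly
# correlated labels of the same example are themselves uncertain.
offdiag = cooc - np.eye(n_labels)
score = uncertainty + 0.3 * uncertainty @ offdiag

# Send the k highest-scoring example-label pairs for annotation.
k = 5
idx = np.argsort(score, axis=None)[::-1][:k]
queries = [(int(i) // n_labels, int(i) % n_labels) for i in idx]
print(queries)
```

The pair-level granularity means an annotator may be asked about a single label of an image rather than its full label vector, which is where the annotation savings come from.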

  19. Positive Emotion Facilitates Audiovisual Binding

    PubMed Central

    Kitamura, Miho S.; Watanabe, Katsumi; Kitagawa, Norimichi

    2016-01-01

    It has been shown that positive emotions can facilitate integrative and associative information processing in cognitive functions. The present study examined whether emotions in observers can also enhance perceptual integrative processes. We tested 125 participants in total for revealing the effects of emotional states and traits in observers on the multisensory binding between auditory and visual signals. Participants in Experiment 1 observed two identical visual disks moving toward each other, coinciding, and moving away, presented with a brief sound. We found that for participants with lower depressive tendency, induced happy moods increased the width of the temporal binding window of the sound-induced bounce percept in the stream/bounce display, while no effect was found for the participants with higher depressive tendency. In contrast, no effect of mood was observed for a simple audiovisual simultaneity discrimination task in Experiment 2. These results provide the first empirical evidence of a dependency of multisensory binding upon emotional states and traits, revealing that positive emotions can facilitate the multisensory binding processes at a perceptual level. PMID:26834585

  20. Audiovisual integration facilitates unconscious visual scene processing.

    PubMed

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration. PMID:26076179

  1. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command-recognition system using audio-visual information. The system is expected to control the laparoscopic robot da Vinci. The audio signal is treated using the Mel Frequency Cepstral Coefficients parametrization method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used in order to extract the visual speech information.
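
    The MFCC front-end mentioned above can be sketched in a few steps: frame the signal, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. The sketch below is a generic minimal implementation, not the paper's; frame size, hop, and filterbank counts are common defaults, not values from the system described.

```python
import numpy as np
from scipy.fft import dct

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Minimal MFCC front-end: frame -> power spectrum -> mel filterbank
    -> log -> DCT. Parameter values are common defaults (assumed)."""
    # Frame the signal with a Hann window.
    frames = [signal[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular mel filterbank between 0 Hz and Nyquist.
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = mel_to_hz(np.linspace(0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # Log mel energies, then DCT to get the cepstral coefficients.
    logmel = np.log(power @ fbank.T + 1e-10)
    return dct(logmel, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example: half a second of a 440 Hz tone at 16 kHz.
t = np.arange(0, 0.5, 1 / 16000)
coeffs = mfcc(np.sin(2 * np.pi * 440 * t))
print(coeffs.shape)
```

Each row of `coeffs` is the cepstral feature vector for one 32 ms frame; in an audio-visual recognizer these would be concatenated or fused with the MPEG-4 mouth-contour features per frame.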

  2. Practitioners' Views on Teaching With Audio-Visual Aids.

    ERIC Educational Resources Information Center

    Potter, Earl L., Comp.

    A guide for teaching with audiovisual aids, based on the in-class experiences of 30 faculty members from Memphis State University and Shelby State Community College, is presented. The faculty members represented 20 instructional areas and the range of audiovisual usage included in-class use of traditional audiovisual materials and techniques, the…

  3. Govt. Pubs: U.S. Government Produced Audiovisual Materials.

    ERIC Educational Resources Information Center

    Korman, Richard

    1981-01-01

    Describes the availability of United States government-produced audiovisual materials and discusses two audiovisual clearinghouses--the National Audiovisual Center (NAC) and the National Library of Medicine (NLM). Finding aids made available by NAC, NLM, and other government agencies are mentioned. NAC and the U.S. Government Printing Office…

  4. Audiovisual Speech Synchrony Measure: Application to Biometrics

    NASA Astrophysics Data System (ADS)

    Bredin, Hervé; Chollet, Gérard

    2007-12-01

    Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. It overviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between audio and visual speech. Finally, the use of synchrony measure for biometric identity verification based on talking faces is experimented on the BANCA database.
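
    The simplest correspondence measures the paper surveys are correlation-based. A minimal sketch (hypothetical feature streams and thresholds, not the paper's method): compare the peak lagged correlation between an audio energy envelope and a mouth-opening measurement for a genuine talking face versus an unrelated one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-frame features at a common rate (e.g. 25 fps):
# audio energy and a mouth-opening measurement from the visual front-end.
n = 250
audio_energy = rng.random(n)
mouth_opening = 0.8 * audio_energy + 0.2 * rng.random(n)  # genuine talker

def synchrony(a, v, max_lag=5):
    """Peak normalized cross-correlation of the two feature streams
    over a small lag range (a simple audiovisual synchrony measure)."""
    corrs = [np.corrcoef(a[max(0, -k):n - max(0, k)],
                         v[max(0, k):n - max(0, -k)])[0, 1]
             for k in range(-max_lag, max_lag + 1)]
    return max(corrs)

genuine = synchrony(audio_energy, mouth_opening)
impostor = synchrony(audio_energy, rng.random(n))  # unrelated face
print(genuine > impostor)
```

For identity verification on talking faces, a threshold on such a score separates live, synchronous audio-visual recordings from replayed or dubbed impostor attempts.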

  5. Learning Partnership: Students and Faculty Learning Together to Facilitate Reflection and Higher Order Thinking in a Blended Course

    ERIC Educational Resources Information Center

    McDonald, Paige L.; Straker, Howard O.; Schlumpf, Karen S.; Plack, Margaret M.

    2014-01-01

    This article discusses a learning partnership among faculty and students to influence reflective practice in a blended course. Faculty redesigned a traditional face-to-face (FTF) introductory physician assistant course into a blended course to promote increased reflection and higher order thinking. Early student reflective writing suggested a need…

  6. Active Methodology in the Audiovisual Communication Degree

    ERIC Educational Resources Information Center

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa

    2010-01-01

    The paper describes the adaptation methods of the active methodologies of the new European higher education area in the new Audiovisual Communication degree under the perspective of subjects related to the area of the interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…

  7. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  8. A Selection of Audiovisual Materials on Disabilities.

    ERIC Educational Resources Information Center

    Mayo, Kathleen; Rider, Sheila

    Disabled persons, family members, organizations, and libraries are often looking for materials to help inform, educate, or challenge them regarding the issues surrounding disabilities. This directory of audiovisual materials available from the State Library of Florida includes materials that present ideas and personal experiences covering a range…

  9. Longevity and Depreciation of Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  10. Dissociating Verbal and Nonverbal Audiovisual Object Processing

    ERIC Educational Resources Information Center

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…

  11. Audiovisual Instruction in Pediatric Pharmacy Practice.

    ERIC Educational Resources Information Center

    Mutchie, Kelly D.; And Others

    1981-01-01

    A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)

  12. The Status of Audiovisual Materials in Networking.

    ERIC Educational Resources Information Center

    Coty, Patricia Ann

    1983-01-01

    The role of networks in correcting inadequate bibliographic control for audiovisual materials is discussed, citing efforts of Project Media Base, National Information Center for Educational Media, Consortium of University Film Centers, National Library of Medicine, National Agricultural Library, National Film Board of Canada, and bibliographic…

  13. Reduced audiovisual recalibration in the elderly

    PubMed Central

    Chan, Yu Man; Pianta, Michael J.; McKendrick, Allison M.

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22–32 years old) and 15 older (64–74 years old) healthy adults using the method of constant stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age. PMID:25221508
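
    The adaptation-effect computation described above (the shift in the mean of a fitted psychometric function) can be sketched as follows. The response proportions below are fabricated for illustration; only the analysis step mirrors the description, and a Gaussian-shaped simultaneity curve is assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stimulus-onset asynchronies tested (ms, positive = sound-lag) and
# hypothetical proportions of "synchronous" responses before and after
# adapting to a 230 ms sound-lag stream.
soa = np.array([-300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_baseline = np.array([0.05, 0.20, 0.65, 0.95, 0.90, 0.55, 0.15, 0.05])
p_adapted = np.array([0.05, 0.10, 0.40, 0.85, 0.95, 0.80, 0.40, 0.10])

def gauss(x, mu, sigma):
    """Gaussian-shaped simultaneity curve; mu is the point of subjective
    simultaneity (PSS), sigma indexes the window width."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

(mu0, s0), _ = curve_fit(gauss, soa, p_baseline, p0=[0, 150])
(mu1, s1), _ = curve_fit(gauss, soa, p_adapted, p0=[0, 150])

adaptation_effect = mu1 - mu0  # shift of the curve toward the adapted lag
print(adaptation_effect > 0)
```

A positive shift means more sound-lag pairs are judged synchronous after asynchronous adaptation; the study's result is that this shift is smaller in older observers.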

  14. A Survey of British Research in Audio-Visual Aids, Supplement No. 2, 1974. (Including Cumulative Index 1945-1974).

    ERIC Educational Resources Information Center

    Rodwell, Susie, Comp.

    The second supplement to the new (1972) edition of the Survey of Research in Audiovisual Aids carried out in Great Britain covers the year 1974. Ten separate sections cover the areas of projected media, non-projected media, sound media, radio, moving pictures, television, teaching machines and programed learning, computer-assisted instruction,…

  15. A Framework for Efficient Structured Max-Margin Learning of High-Order MRF Models.

    PubMed

    Komodakis, Nikos; Xiang, Bo; Paragios, Nikos

    2015-07-01

    We present a very general algorithm for structured prediction learning that is able to efficiently handle discrete MRFs/CRFs (including both pairwise and higher-order models) so long as they can admit a decomposition into tractable subproblems. At its core, it relies on a dual decomposition principle that has been recently employed in the task of MRF optimization. By properly combining such an approach with a max-margin learning method, the proposed framework manages to reduce the training of a complex high-order MRF to the parallel training of a series of simple slave MRFs that are much easier to handle. This leads to a very efficient and general learning scheme that relies on solid mathematical principles. We thoroughly analyze its theoretical properties, and also show that it can yield learning algorithms of increasing accuracy since it naturally allows a hierarchy of convex relaxations to be used for loss-augmented MAP-MRF inference within a max-margin learning approach. Furthermore, it can be easily adapted to take advantage of the special structure that may be present in a given class of MRFs. We demonstrate the generality and flexibility of our approach by testing it on a variety of scenarios, including training of pairwise and higher-order MRFs, training by using different types of regularizers and/or different types of dissimilarity loss functions, as well as by learning of appropriate models for a variety of vision tasks (including high-order models for compact pose-invariant shape priors, knowledge-based segmentation, image denoising, stereo matching as well as high-order Potts MRFs). PMID:26352450
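
    One concrete ingredient named above, loss-augmented MAP inference inside max-margin learning, can be sketched on a toy chain MRF. This is not the authors' framework (which handles general decompositions); it is a minimal Viterbi pass with the Hamming loss folded into the unary scores, and all numbers are illustrative.

```python
import numpy as np

# Toy chain MRF: unary scores (n_nodes x n_labels) and one shared
# pairwise score matrix favouring agreeing neighbours.
unary = np.array([[1.0, 0.2],
                  [0.1, 0.9],
                  [0.8, 0.3]])
pairwise = np.array([[0.5, 0.0],
                     [0.0, 0.5]])
gold = [0, 1, 0]  # ground-truth labelling

def loss_augmented_map(unary, pairwise, gold):
    """Viterbi over a chain with the Hamming loss added to the unaries:
    the inner step of max-margin (structured SVM) training, which seeks
    the labelling that is both high-scoring and far from the gold one."""
    n, k = unary.shape
    aug = unary.copy()
    for i in range(n):
        aug[i] += 1.0            # Hamming loss: +1 for every label...
        aug[i, gold[i]] -= 1.0   # ...except the gold one
    score = aug[0].copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        cand = score[:, None] + pairwise + aug[i][None, :]
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    labels = [int(score.argmax())]
    for i in range(n - 1, 0, -1):
        labels.append(int(back[i, labels[-1]]))
    return labels[::-1]

violator = loss_augmented_map(unary, pairwise, gold)
print(violator)
```

The returned labelling is the "most violated constraint"; max-margin training repeatedly finds such labellings and updates the model parameters to push their scores below the gold labelling's.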

  16. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
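
    The core estimation idea above (bearing-only measurements fused across robot positions with a particle filter) can be sketched minimally. This is an illustrative toy, not the authors' system: it omits the information-gain action selection and uses assumed noise levels and positions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Particle filter for bearing-only source localization: each viewpoint
# yields only a direction, so position is recovered by moving and fusing
# bearings from several places. All numbers are illustrative.
true_source = np.array([3.0, 2.0])
particles = rng.uniform(-5, 5, size=(2000, 2))  # (x, y) hypotheses
weights = np.ones(2000) / 2000
sigma = 0.1                                     # bearing noise (rad)

robot_positions = [np.array([0.0, 0.0]), np.array([2.0, -1.0]),
                   np.array([-1.0, 1.0])]
for pos in robot_positions:
    # Simulated noisy bearing measurement from this viewpoint.
    d = true_source - pos
    z = np.arctan2(d[1], d[0]) + rng.normal(0, sigma)
    # Weight particles by how well their predicted bearing matches z.
    pred = np.arctan2(particles[:, 1] - pos[1], particles[:, 0] - pos[0])
    err = np.angle(np.exp(1j * (pred - z)))     # wrap to [-pi, pi]
    weights *= np.exp(-0.5 * (err / sigma) ** 2)
    weights /= weights.sum()
    # Resample, with a little jitter to keep particle diversity.
    idx = rng.choice(2000, size=2000, p=weights)
    particles = particles[idx] + rng.normal(0, 0.05, size=(2000, 2))
    weights = np.ones(2000) / 2000

estimate = particles.mean(axis=0)
print(estimate)
```

After one bearing the posterior is a ray; subsequent viewpoints intersect it and collapse the distance ambiguity, which is why the robot's mobility (and choosing the most informative next viewpoint) matters.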

  17. Information-Driven Active Audio-Visual Source Localization.

    PubMed

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application. PMID:26327619

  18. Mental representations of magnitude and order: a dissociation by sensorimotor learning.

    PubMed

    Badets, Arnaud; Boutin, Arnaud; Heuer, Herbert

    2015-05-01

    Numbers and spatially directed actions share cognitive representations. This assertion is derived from studies that have demonstrated that the processing of small- and large-magnitude numbers facilitates motor behaviors that are directed to the left and right, respectively. However, little is known about the role of sensorimotor learning for such number-action associations. In this study, we show that sensorimotor learning in a serial reaction-time task can modify the associations between number magnitudes and spatially directed movements. Experiments 1 and 3 revealed that this effect is present only for the learned sequence and does not transfer to a novel unpracticed sequence. Experiments 2 and 4 showed that the modification of stimulus-action associations by sensorimotor learning does not occur for other sets of ordered stimuli such as letters of the alphabet. These results strongly suggest that numbers and actions share a common magnitude representation that differs from the common order representation shared by letters and spatially directed actions. Only the magnitude representation, but not the order representation, can be modified episodically by sensorimotor learning. PMID:25813898

  19. Serial-order learning impairment and hypersensitivity-to-interference in dyscalculia.

    PubMed

    De Visscher, Alice; Szmalec, Arnaud; Van Der Linden, Lize; Noël, Marie-Pascale

    2015-11-01

    In the context of heterogeneity, the different profiles of dyscalculia are still hypothetical. This study aims to link features of mathematical difficulties to certain potential etiologies. First, we wanted to test the hypothesis of a serial-order learning deficit in adults with dyscalculia. For this purpose we used a Hebb repetition learning task. Second, we wanted to explore a recent hypothesis according to which hypersensitivity-to-interference hampers the storage of arithmetic facts and leads to a particular profile of dyscalculia. We therefore used interfering and non-interfering repeated sequences in the Hebb paradigm. A final test was used to assess the memory trace of the non-interfering sequence and the capacity to manipulate it. In line with our predictions, we observed that people with dyscalculia who show good conceptual knowledge in mathematics but impaired arithmetic fluency suffer from increased sensitivity-to-interference compared to controls. Secondly, people with dyscalculia who show a deficit in a global mathematical test suffer from a serial-order learning deficit characterized by slow learning and quick degradation of the memory trace of the repeated sequence. A serial-order learning impairment could be one of the explanations for a basic numerical deficit, since it is necessary for the number-word sequence acquisition. Among the different profiles of dyscalculia, this study provides new evidence and refinement for two particular profiles. PMID:26218516

  20. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution

    PubMed Central

    Phillips, Steven; Wilson, William H.

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other “structurally related” cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture. PMID:27505411

  1. Higher-Order Thinking Development through Adaptive Problem-Based Learning

    ERIC Educational Resources Information Center

    Raiyn, Jamal; Tilchin, Oleg

    2015-01-01

    In this paper we propose an approach to organizing Adaptive Problem-Based Learning (PBL) leading to the development of Higher-Order Thinking (HOT) skills and collaborative skills in students. Adaptability of PBL is expressed by changes in fixed instructor assessments caused by the dynamics of developing HOT skills needed for problem solving,…

  2. Sensitivity to Word Order Cues by Normal and Language/Learning Disabled Adults.

    ERIC Educational Resources Information Center

    Plante, Elena; Gomez, Rebecca; Gerken, LouAnn

    2002-01-01

    Sixteen adults with language/learning disabilities (L/LD) and 16 controls participated in a study testing sensitivity to word order cues that signaled grammatical versus ungrammatical word strings belonging to an artificial grammar. Participants with L/LD performed significantly below the comparison group, suggesting that this skill is problematic…

  3. "What Do I Do Here?": Higher Order Learning Effects of Enhancing Task Instructions

    ERIC Educational Resources Information Center

    Chamberlain, Susanna; Zuvela, Danni

    2014-01-01

    This paper reports the findings of a one-year research project focused on a series of structured interventions aimed at enhancing task instruction to develop students' understanding of higher assessment practices, and encouraging higher order learning. It describes the nature and iterations of the interventions, made into a large-enrolment online…

  4. Learning and Generalization on Asynchrony and Order Tasks at Sound Offset: Implications for Underlying Neural Circuitry

    ERIC Educational Resources Information Center

    Mossbridge, Julia A.; Scissors, Beth N.; Wright, Beverly A.

    2008-01-01

    Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel…

  5. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution.

    PubMed

    Phillips, Steven; Wilson, William H

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other "structurally related" cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture. PMID:27505411
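The coalgebraic idea the abstract appeals to can be made concrete with a toy "unfold": where recursion consumes a structure, corecursion generates one from a coalgebra mapping a seed to an output and a next seed. This is only an illustrative sketch of corecursion in general, not the authors' categorical model; the stimulus-response pairing below is invented for the example.

```python
from itertools import islice

def unfold(coalgebra, seed):
    """Corecursively generate values: coalgebra maps seed -> (output, next_seed)."""
    while True:
        out, seed = coalgebra(seed)
        yield out

# A toy coalgebra emitting successive "stimulus -> response" associations
pairs = unfold(lambda n: ((f"s{n}", f"r{n}"), n + 1), 0)
first_three = list(islice(pairs, 3))
# [('s0', 'r0'), ('s1', 'r1'), ('s2', 'r2')]
```

Because the generator is lazy, the (potentially infinite) stream of associations is defined by the single universal construction `unfold`, rather than by enumerating each association separately.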

  6. Developing Student-Centered Learning Model to Improve High Order Mathematical Thinking Ability

    ERIC Educational Resources Information Center

    Saragih, Sahat; Napitupulu, Elvis

    2015-01-01

The purpose of this research was to develop a student-centered learning model aimed at improving the high order mathematical thinking ability of junior high school students, based on Curriculum 2013, in North Sumatera, Indonesia. The special purpose of this research was to analyze and to formulate the purpose of mathematics lesson in high order…

  7. Linking memory and language: Evidence for a serial-order learning impairment in dyslexia.

    PubMed

    Bogaerts, Louisa; Szmalec, Arnaud; Hachmann, Wibke M; Page, Mike P A; Duyck, Wouter

    2015-01-01

    The present study investigated long-term serial-order learning impairments, operationalized as reduced Hebb repetition learning (HRL), in people with dyslexia. In a first multi-session experiment, we investigated both the persistence of a serial-order learning impairment as well as the long-term retention of serial-order representations, both in a group of Dutch-speaking adults with developmental dyslexia and in a matched control group. In a second experiment, we relied on the assumption that HRL mimics naturalistic word-form acquisition and we investigated the lexicalization of novel word-forms acquired through HRL. First, our results demonstrate that adults with dyslexia are fundamentally impaired in the long-term acquisition of serial-order information. Second, dyslexic and control participants show comparable retention of the long-term serial-order representations in memory over a period of 1 month. Third, the data suggest weaker lexicalization of newly acquired word-forms in the dyslexic group. We discuss the integration of these findings into current theoretical views of dyslexia. PMID:26164302

  8. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Li, Junmin

    2016-07-01

In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth order (M ≥ 2) distributed multi-agent systems. Every follower agent has a higher order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are a higher order nonlinear system and are available only to a portion of the follower agents. With distributed initial state learning, the unified distributed protocols, combining time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple robotic system are provided to demonstrate the performance of the proposed approach.
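The iteration-domain learning described above can be illustrated with a minimal single-agent sketch: the same finite-horizon task is repeated, and the input for the next trial is corrected by the previous trial's tracking error. The P-type update law, the plant, and the gain below are generic textbook choices, not the paper's adaptive fuzzy protocol.

```python
import numpy as np

def run_trial(u, a=0.7, b=1.0, T=50):
    """Simulate one trial of the plant y[t+1] = a*y[t] + b*u[t], with y[0] = 0."""
    y = np.zeros(T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]
    return y

T = 50
r = np.sin(np.linspace(0, np.pi, T + 1))  # reference trajectory on [0, T]

u = np.zeros(T)       # initial input guess
gamma = 0.5           # learning gain (assumed; convergence needs |1 - gamma*b| < 1)
errors = []
for k in range(30):   # iteration domain: repeat the identical task
    y = run_trial(u, T=T)
    e = r[1:] - y[1:]              # tracking error on this trial
    errors.append(np.max(np.abs(e)))
    u = u + gamma * e              # P-type ILC update: learn from the last trial

# the peak tracking error shrinks across iterations
```

The key contrast with ordinary feedback control is that the correction acts across trials rather than within one: each pass over [0, T] reuses what the previous pass learned.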

  9. HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud

    SciTech Connect

    An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.

    1988-01-01

A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the "building blocks" or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.

  10. Attributes of Quality in Audiovisual Materials for Health Professionals.

    ERIC Educational Resources Information Center

    Suter, Emanuel; Waddell, Wendy H.

    1981-01-01

    Defines attributes of quality in content, instructional design, technical production, and packaging of audiovisual materials used in the education of health professionals. Seven references are listed. (FM)

  11. Dynamic Perceptual Changes in Audiovisual Simultaneity

    PubMed Central

    Kanai, Ryota; Sheth, Bhavin R.; Verstraten, Frans A. J.; Shimojo, Shinsuke

    2007-01-01

Background: The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that both visual and auditory percepts have a modality specific processing delay and their difference determines perceptual temporal offset. Methodology/Principal Findings: Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event. Conclusions/Significance: This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event. PMID:18060050

  12. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention. PMID:25341648
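The race-model-violation test mentioned in this abstract compares the audiovisual response-time distribution against the bound predicted by two independent unisensory races (Miller's inequality, F_AV(t) <= F_A(t) + F_V(t)); positive exceedance is taken as evidence of integration. Here is a minimal sketch on synthetic reaction times, with all distributions invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reaction-time samples (ms), invented for this example
rt_a  = rng.normal(320, 40, 1000)   # auditory-only targets
rt_v  = rng.normal(340, 40, 1000)   # visual-only targets
rt_av = rng.normal(270, 35, 1000)   # audiovisual targets (faster than either)

def ecdf(samples, grid):
    """Empirical cumulative distribution F(t) = P(RT <= t) on a time grid."""
    return np.searchsorted(np.sort(samples), grid, side="right") / len(samples)

grid = np.arange(150, 500)          # time points (ms) at which to test the bound
f_a, f_v, f_av = (ecdf(x, grid) for x in (rt_a, rt_v, rt_av))

# Race-model inequality: F_AV(t) <= min(F_A(t) + F_V(t), 1).
# Positive values mark time points where the bound is violated.
violation = f_av - np.minimum(f_a + f_v, 1.0)
max_violation = violation.max()
```

In a cueing design like the one described, the amount of violation would be computed separately for exogenously attended and unattended audiovisual targets and then compared.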

  13. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. PMID:26740404

  14. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  16. Order short-term memory is not impaired in dyslexia and does not affect orthographic learning

    PubMed Central

    Staels, Eva; Van den Broeck, Wim

    2014-01-01

This article reports two studies that investigate short-term memory (STM) deficits in dyslexic children and explores the relationship between STM and reading acquisition. In the first experiment, 36 dyslexic children and 61 control children performed an item STM task and a serial order STM task. The results of this experiment show that dyslexic children do not suffer from a specific serial order STM deficit. In addition, the results demonstrate that phonological processing skills are equally closely related to item STM and serial order STM. However, non-verbal intelligence was more strongly involved in serial order STM than in item STM. In the second experiment, the same two STM tasks were administered and reading acquisition was assessed by measuring orthographic learning in a group of 188 children. The results of this study show that orthographic learning is exclusively related to item STM and not to order STM. It is concluded that serial order STM is not the right place to look for a causal explanation of reading disability, nor for differences in word reading acquisition. PMID:25294996

  18. Ordering and finding the best of K > 2 supervised learning algorithms.

    PubMed

    Yildiz, Olcay Taner; Alpaydin, Ethem

    2006-03-01

    Given a data set and a number of supervised learning algorithms, we would like to find the algorithm with the smallest expected error. Existing pairwise tests allow a comparison of two algorithms only; range tests and ANOVA check whether multiple algorithms have the same expected error and cannot be used for finding the smallest. We propose a methodology, the MultiTest algorithm, whereby we order supervised learning algorithms taking into account 1) the result of pairwise statistical tests on expected error (what the data tells us), and 2) our prior preferences, e.g., due to complexity. We define the problem in graph-theoretic terms and propose an algorithm to find the "best" learning algorithm in terms of these two criteria, or in the more general case, order learning algorithms in terms of their "goodness." Simulation results using five classification algorithms on 30 data sets indicate the utility of the method. Our proposed method can be generalized to regression and other loss functions by using a suitable pairwise test. PMID:16526425
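The two ordering criteria above, pairwise test outcomes plus prior preference, can be sketched roughly as follows. This is a loose illustration of combining the two signals, not the published graph-theoretic MultiTest procedure; the algorithm names and test outcomes are invented.

```python
from itertools import combinations

# Algorithms listed in order of prior preference (e.g., simplest first)
algorithms = ["nearest_mean", "knn", "svm", "mlp"]

# significant[(a, b)] is True when a pairwise test found a's expected
# error significantly lower than b's (all outcomes here are invented)
significant = {
    ("svm", "nearest_mean"): True,
    ("mlp", "nearest_mean"): True,
    ("svm", "knn"): True,
}

def order_algorithms(algorithms, significant):
    """Order algorithms by number of significant pairwise wins;
    ties are broken by prior preference (earlier in the list wins)."""
    wins = {a: 0 for a in algorithms}
    for a, b in combinations(algorithms, 2):
        if significant.get((a, b)):
            wins[a] += 1
        elif significant.get((b, a)):
            wins[b] += 1
    prior = {a: i for i, a in enumerate(algorithms)}
    return sorted(algorithms, key=lambda a: (-wins[a], prior[a]))

ranking = order_algorithms(algorithms, significant)
# svm (two wins) ranks first, then mlp (one win); the remaining tie
# between nearest_mean and knn is broken by prior preference
```

Counting wins is a simplification: the actual method works on the directed graph of test results, which also distinguishes "beaten by" relations that a win count collapses.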

  19. Disruption of Broca's Area Alters Higher-order Chunking Processing during Perceptual Sequence Learning.

    PubMed

    Alamia, Andrea; Solopchuk, Oleg; D'Ausilio, Alessandro; Van Bever, Violette; Fadiga, Luciano; Olivier, Etienne; Zénon, Alexandre

    2016-03-01

    Because Broca's area is known to be involved in many cognitive functions, including language, music, and action processing, several attempts have been made to propose a unifying theory of its role that emphasizes a possible contribution to syntactic processing. Recently, we have postulated that Broca's area might be involved in higher-order chunk processing during implicit learning of a motor sequence. Chunking is an information-processing mechanism that consists of grouping consecutive items in a sequence and is likely to be involved in all of the aforementioned cognitive processes. Demonstrating a contribution of Broca's area to chunking during the learning of a nonmotor sequence that does not involve language could shed new light on its function. To address this issue, we used offline MRI-guided TMS in healthy volunteers to disrupt the activity of either the posterior part of Broca's area (left Brodmann's area [BA] 44) or a control site just before participants learned a perceptual sequence structured in distinct hierarchical levels. We found that disruption of the left BA 44 increased the processing time of stimuli representing the boundaries of higher-order chunks and modified the chunking strategy. The current results highlight the possible role of the left BA 44 in building up effector-independent representations of higher-order events in structured sequences. This might clarify the contribution of Broca's area in processing hierarchical structures, a key mechanism in many cognitive functions, such as language and composite actions. PMID:26765778

  20. A second-order learning algorithm for multilayer networks based on block Hessian matrix.

    PubMed

    Wang, Yi Jen; Lin, Chin Teng

    1998-12-01

    This article proposes a new second-order learning algorithm for training the multilayer perceptron (MLP) networks. The proposed algorithm is a revised Newton's method. A forward-backward propagation scheme is first proposed for network computation of the Hessian matrix, H, of the output error function of the MLP. A block Hessian matrix, H(b), is then defined to approximate and simplify H. Several lemmas and theorems are proved to uncover the important properties of H and H(b), and verify the good approximation of H(b) to H; H(b) preserves the major properties of H. The theoretic analysis leads to the development of an efficient way for computing the inverse of H(b) recursively. In the proposed second-order learning algorithm, the least squares estimation technique is adopted to further lessen the local minimum problems. The proposed algorithm overcomes not only the drawbacks of the standard backpropagation algorithm (i.e. slow asymptotic convergence rate, bad controllability of convergence accuracy, local minimum problems, and high sensitivity to learning constant), but also the shortcomings of normal Newton's method used on the MLP, such as the lack of network implementation of H, ill representability of the diagonal terms of H, the heavy computation load of the inverse of H, and the requirement of a good initial estimate of the solution (weights). Several example problems are used to demonstrate the efficiency of the proposed learning algorithm. Extensive performance (convergence rate and accuracy) comparisons of the proposed algorithm with other learning schemes (including the standard backpropagation algorithm) are also made. PMID:12662732
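As context for the revised Newton's method described above, the plain Newton update it builds on can be shown on a problem where the Hessian is exact. This sketch uses a linear least-squares objective, where one Newton step reaches the minimizer; it is not the paper's MLP error function, its forward-backward Hessian computation, or its block approximation H(b).

```python
import numpy as np

# Newton's method on E(w) = 0.5 * ||X w - y||^2, whose gradient is
# X^T (X w - y) and whose Hessian H = X^T X is constant. One step of
# w <- w - H^{-1} grad therefore lands exactly at the minimizer.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true                      # noiseless targets for this toy problem

w = np.zeros(3)                     # initial weights
grad = X.T @ (X @ w - y)            # gradient of the error function at w
H = X.T @ X                         # exact Hessian (constant for a quadratic)
w = w - np.linalg.solve(H, grad)    # one Newton step; solve, never invert H
```

For an MLP the error surface is not quadratic and H is expensive to form and invert, which is exactly the gap the block Hessian H(b) and its recursive inverse are meant to address.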

  1. Audiovisual Media and the Disabled. AV in Action 1.

    ERIC Educational Resources Information Center

    Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).

    Designed to provide information on public library services to the handicapped, this pamphlet contains case studies from three different countries on various aspects of the provision of audiovisual services to the disabled. The contents include: (1) "The Value of Audiovisual Materials in a Children's Hospital in Sweden" (Lis Byberg); (2) "Danish…

  2. Audiovisual Integration in High Functioning Adults with Autism

    ERIC Educational Resources Information Center

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  3. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  4. Neural Correlates of Audiovisual Integration of Semantic Category Information

    ERIC Educational Resources Information Center

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period about 150-220 ms post-stimulus. However, it is unclear to which process is this audiovisual interaction related: to processing of acoustical features or to classification of stimuli? To investigate this question, event-related potentials were recorded…

  5. The Audio-Visual Marketing Handbook for Independent Schools.

    ERIC Educational Resources Information Center

    Griffith, Tom

    This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…

  6. Directory of Head Start Audiovisual Professional Training Materials.

    ERIC Educational Resources Information Center

    Wilds, Thomas, Comp.

    The directory contains over 265 annotated listings of audiovisual professional training materials related to the education and care of preschool handicapped children. Noted in the introduction are sources of the contents, such as lists of audiovisual materials disseminated by a hearing/speech center, and instructions for use of the directory.…

  7. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    ERIC Educational Resources Information Center

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  8. The Practical Audio-Visual Handbook for Teachers.

    ERIC Educational Resources Information Center

    Scuorzo, Herbert E.

    The use of audio/visual media as an aid to instruction is a common practice in today's classroom. Most teachers, however, have little or no formal training in this field and rarely a knowledgeable coordinator to help them. "The Practical Audio-Visual Handbook for Teachers" discusses the types and mechanics of many of these media forms and proposes…

  9. Uses and Abuses of Audio-Visual Aids in Reading.

    ERIC Educational Resources Information Center

    Eggers, Edwin H.

    Audiovisual aids are properly used in reading when they "turn students on," and they are abused when they fail to do so or when they actually "turn students off." General guidelines one could use in sorting usable from unusable aids are (1) Has the teacher saved time by using an audiovisual aid? (2) Is the aid appropriate to the sophistication…

  10. Knowledge Generated by Audiovisual Narrative Action Research Loops

    ERIC Educational Resources Information Center

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of getting to…

  11. Audiovisual Processing in Children with and without Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Mongillo, Elizabeth A.; Irwin, Julia R.; Whalen, D. H.; Klaiman, Cheryl; Carter, Alice S.; Schultz, Robert T.

    2008-01-01

    Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces…

  12. A Technical Communication Course in Graphics and Audiovisuals.

    ERIC Educational Resources Information Center

    Carson, David L.; Harkins, Craig

    1980-01-01

    Describes the development of a course in graphics and audiovisuals as they are applied in technical communication. Includes brief discussions of the course design, general course structure, course objectives, course content, student evaluation, and student reaction. Indicates that the course includes information on theory, graphics, audiovisuals,…

  13. The Audio-Visual Equipment Director. Eighteenth Edition.

    ERIC Educational Resources Information Center

    Herickes, Sally, Ed.

    A cooperative undertaking of the audiovisual industry, this equipment directory for 1972-73 is designed to offer everyone who uses media a convenient, single source of information on all audiovisual equipment on the market today. Photographs, specifications, and prices of more than 1,500 models of equipment are provided, and over 520 manufacturers…

  14. Children Using Audiovisual Media for Communication: A New Language?

    ERIC Educational Resources Information Center

    Weiss, Michael

    1982-01-01

    Gives an overview of the Schools Council Communication and Social Skills Project at Brighton Polytechnic in which children ages 9-17 have developed and used audiovisual media such as films, tape-slides, or television programs in the classroom. The effects of audiovisual language on education are briefly discussed. (JJD)

  15. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    ERIC Educational Resources Information Center

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  16. Infant Perception of Audio-Visual Speech Synchrony

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2010-01-01

    Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…

  17. Trigger Videos on the Web: Impact of Audiovisual Design

    ERIC Educational Resources Information Center

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  18. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  19. Simulated and Virtual Science Laboratory Experiments: Improving Critical Thinking and Higher-Order Learning Skills

    NASA Astrophysics Data System (ADS)

    Simon, Nicole A.

    Virtual laboratory experiments using interactive computer simulations are not being employed as viable alternatives to laboratory science curricula at extensive enough rates within higher education. Rote traditional lab experiments are currently the norm and do not address inquiry, Critical Thinking, and cognition throughout the laboratory experience or link with educational technologies (Pyatt & Sims, 2007, 2011; Trundle & Bell, 2010). A causal-comparative quantitative study was conducted with 150 learners enrolled at a two-year community college to determine the effects of simulation laboratory experiments on Higher-Order Learning, Critical Thinking Skills, and Cognitive Load. The treatment population used simulated experiments, while the non-treatment sections performed traditional expository experiments. A comparison was made using the Revised Two-Factor Study Process survey, the Motivated Strategies for Learning Questionnaire, and the Scientific Attitude Inventory survey, using a Repeated Measures ANOVA test for treatment or non-treatment. A main effect of simulated laboratory experiments was found for both Higher-Order Learning [F(1, 148) = 30.32, p = 0.00, eta2 = 0.12] and Critical Thinking Skills [F(1, 148) = 14.64, p = 0.00, eta2 = 0.17], such that simulations showed greater increases than traditional experiments. Post-lab treatment group self-reports indicated increased marginal means (+4.86) in Higher-Order Learning and Critical Thinking Skills, compared to the non-treatment group (+4.71). Simulations also improved scientific skills and mastery of basic scientific subject matter. It is recommended that additional research recognize that learners' Critical Thinking Skills change due to the different instructional methodologies that occur throughout a semester.
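    The effect sizes above can be cross-checked against the reported F statistics. A minimal sketch, assuming the abstract reports partial eta-squared (the usual effect-size companion to a repeated-measures ANOVA F; the helper name is ours, not from the study):

    ```python
    def partial_eta_squared(f_stat, df_effect, df_error):
        """Standard conversion: eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
        return (f_stat * df_effect) / (f_stat * df_effect + df_error)

    # Reported main effect for Higher-Order Learning: F(1, 148) = 30.32.
    print(round(partial_eta_squared(30.32, 1, 148), 3))  # -> 0.17
    ```

    Note that this conversion gives roughly 0.17 for F = 30.32 and roughly 0.09 for F = 14.64, which does not line up exactly with the eta2 values quoted above, so which eta-squared variant was reported is worth checking against the original thesis.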

  20. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed Central

    Lerner, Itamar; Armstrong, Blair C.; Frost, Ram

    2014-01-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding as a core and universal principle of the reading process. Here we argue that such an approach neither captures cross-linguistic differences in transposed-letter effects nor explains them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that, in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order is also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521

  1. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed

    Lerner, Itamar; Armstrong, Blair C; Frost, Ram

    2014-11-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding as a core and universal principle of the reading process. Here we argue that such an approach neither captures cross-linguistic differences in transposed-letter effects nor explains them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that, in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order is also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521
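    The abstract ties transposed-letter flexibility to the relative prevalence of anagrams in a language's lexicon. That statistic is straightforward to estimate from a word list; a minimal sketch (the toy lexicon and helper name are ours, not the authors' code):

    ```python
    from collections import Counter

    def anagram_prevalence(lexicon):
        """Fraction of words in the lexicon that share all their letters with
        at least one other word (i.e., that have an anagram partner)."""
        signatures = Counter("".join(sorted(w)) for w in lexicon)
        with_partner = sum(1 for w in lexicon if signatures["".join(sorted(w))] > 1)
        return with_partner / len(lexicon)

    # Toy lexicon: 'salt' and 'slat' are mutual anagrams.
    print(anagram_prevalence(["salt", "slat", "tree", "word"]))  # -> 0.5
    ```

    Running the same measure over full lexicons of different languages would expose the cross-linguistic statistical differences the model is said to exploit.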

  2. Word sense disambiguation via high order of learning in complex networks

    NASA Astrophysics Data System (ADS)

    Silva, Thiago C.; Amancio, Diego R.

    2012-06-01

    Complex networks have been employed to model many real systems and serve as a modeling tool in a myriad of applications. In this paper, we apply the framework of complex networks to the problem of supervised classification in the word disambiguation task, which consists of deriving a function from the supervised (or labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. The human (animal) brain, on the other hand, performs both low- and high-level orders of learning and readily identifies patterns according to the semantic meaning of the input data. Here we apply a hybrid technique that encompasses both types of learning in the field of word sense disambiguation and show that the high-level order of learning can indeed improve the accuracy of the model. This evidence demonstrates that the internal structures formed by the words present patterns that, in general, cannot be correctly unveiled by traditional techniques alone. Finally, we exhibit the behavior of the model for different weights of the low- and high-level classifiers by plotting decision boundaries. This study helps one to better understand the effectiveness of the model.
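    The weighted blend of low- and high-level classifiers that is varied when plotting decision boundaries can be sketched generically. A minimal illustration with hypothetical per-sense scores and a mixing weight `lam` (not the authors' implementation):

    ```python
    def combine(low_scores, high_scores, lam):
        """Blend per-sense scores from a low-level (feature-based) classifier
        with scores from a high-level (network-pattern) term:
        F(sense) = (1 - lam) * low + lam * high."""
        return {s: (1 - lam) * low_scores[s] + lam * high_scores[s]
                for s in low_scores}

    def disambiguate(low_scores, high_scores, lam=0.3):
        """Pick the sense with the highest blended score."""
        blended = combine(low_scores, high_scores, lam)
        return max(blended, key=blended.get)

    # Hypothetical scores for two senses of the ambiguous word "bank".
    low = {"river": 0.4, "finance": 0.6}    # topological/physical features
    high = {"river": 0.9, "finance": 0.1}   # network-pattern (semantic) term
    print(disambiguate(low, high, lam=0.5))  # -> river
    ```

    Sweeping `lam` from 0 to 1 reproduces, in miniature, the move from a purely low-level decision boundary to one dominated by the high-level term.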

  3. A Step Into Service Learning Is A Step Into Higher Order Thinking

    NASA Astrophysics Data System (ADS)

    O'Connell, S.

    2010-12-01

    Students, especially beginning college students, often consider science courses to be about remembering and regurgitating, not creative and of little social relevance. As scientists we know this isn’t true. How do we counteract this sentiment among students? Incorporating service learning, probably better called project learning, into our classes is one way. As one “non-science” student, who was taking two science service-learning courses, said, “If it’s a service-learning course you know it’s going to be interesting.” Service learning means that some learning takes place in the community. The community component increases understanding of the material being studied, promotes higher-order thinking, and provides a benefit for someone else. Students have confirmed that the experience shows them that their knowledge is needed by the community and, for some, reinforces their commitment to continued civic engagement. I’ll give three examples with the community activity growing in importance in the course and in the community: a single exercise, a small project, and a focus of the class. All of the activities use reflective writing to increase analysis and synthesis. An example of a single exercise could be participating in an event related to your course, for example a zoning board meeting or a trip to a wastewater treatment plant. Preparation for the trip should include reading. After the event, students synthesize and analyze the activity through a series of questions emphasizing reflection. A two- to four-class assignment might include expanding the single-day activity, or students familiarizing themselves with a course topic, interviewing a person, preparing a podcast of the interview, and reflecting upon the experience. The most comprehensive approach is where the class focuses on a community project, e.g. Tim Ku’s geochemistry course (this session). Another class that lends itself easily to a comprehensive service-learning approach is Geographic Information

  4. The Use of Audio-Visual Aids in Teaching: A Study in the Saudi Girls Colleges.

    ERIC Educational Resources Information Center

    Al-Sharhan, Jamal A.

    1993-01-01

    A survey of faculty in girls colleges in Riyadh, Saudi Arabia, investigated teaching experience, academic rank, importance of audiovisual aids, teacher training, availability of audiovisual centers, and reasons for not using audiovisual aids. Proposes changes to increase use of audiovisual aids: more training courses, more teacher release time,…

  5. Categorization of Natural Dynamic Audiovisual Scenes

    PubMed Central

    Rummukainen, Olli; Radun, Jenni; Virtanen, Toni; Pulkki, Ville

    2014-01-01

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database. PMID:24788808

  6. Audiovisual time perception is spatially specific.

    PubMed

    Heron, James; Roach, Neil W; Hanson, James V M; McGraw, Paul V; Whitaker, David

    2012-05-01

    Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in visual and auditory sensory pathways. PMID:22367399

  7. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group. PMID:25324091

  8. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918

  9. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan

    PubMed Central

    De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T.

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918

  10. Interpolation-based reduced-order modelling for steady transonic flows via manifold learning

    NASA Astrophysics Data System (ADS)

    Franz, T.; Zimmermann, R.; Görtz, S.; Karcher, N.

    2014-03-01

    This paper presents a parametric reduced-order model (ROM) based on manifold learning (ML) for use in steady transonic aerodynamic applications. The main objective of this work is to derive an efficient ROM that exploits the low-dimensional nonlinear solution manifold to ensure an improved treatment of the nonlinearities involved in varying the inflow conditions to obtain an accurate prediction of shocks. The reduced-order representation of the data is derived using the Isomap ML method, which is applied to a set of sampled computational fluid dynamics (CFD) data. In order to develop a ROM that has the ability to predict approximate CFD solutions at untried parameter combinations, Isomap is coupled with an interpolation method to capture the variations in parameters like the angle of attack or the Mach number. Furthermore, an approximate local inverse mapping from the reduced-order representation to the full CFD solution space is introduced. The proposed ROM, called Isomap+I, is applied to the two-dimensional NACA 64A010 airfoil and to the 3D LANN wing. The results are compared to those obtained by proper orthogonal decomposition plus interpolation (POD+I) and to the full-order CFD model.
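    The reduce → interpolate → inverse-map pattern described above can be sketched in miniature. In this illustrative sketch a one-dimensional parameter stands in for the Isomap embedding, and the approximate local inverse map is a convex blend of the two bracketing snapshots; all names and data are assumptions for illustration, not the Isomap+I implementation:

    ```python
    import bisect

    def rom_predict(params, snapshots, new_param):
        """Predict an approximate solution at an untried parameter value:
        locate the two sampled parameters bracketing `new_param`, compute a
        linear interpolation weight, and blend the corresponding snapshots
        (a stand-in for interpolating Isomap coordinates and inverting them).
        Assumes `params` is sorted and `new_param` lies strictly inside it."""
        i = bisect.bisect_left(params, new_param)
        lo, hi = i - 1, i
        w = (new_param - params[lo]) / (params[hi] - params[lo])
        return [(1 - w) * a + w * b for a, b in zip(snapshots[lo], snapshots[hi])]

    # Synthetic "flow solutions" sampled at three angles of attack (degrees).
    params = [0.0, 2.0, 4.0]
    snapshots = [[1.0, 1.0], [2.0, 0.0], [4.0, -1.0]]
    print(rom_predict(params, snapshots, 1.0))  # -> [1.5, 0.5]
    ```

    The real method replaces both steps with nonlinear machinery (Isomap coordinates and a local inverse mapping), which is what lets it track shocks better than this linear blend or POD+I would.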

  11. The impact of constructivist teaching strategies on the acquisition of higher order cognition and learning

    NASA Astrophysics Data System (ADS)

    Merrill, Alison Saricks

    The purpose of this quasi-experimental quantitative mixed-design study was to compare the effectiveness of brain-based teaching strategies versus a traditional lecture format in the acquisition of higher order cognition as determined by test scores. A second purpose was to elicit student feedback about the two teaching approaches. The design was a 2 x 2 x 2 factorial design with repeated measures on the last factor. The independent variables were type of student, teaching method, and a within-group change over time. Dependent variables were a between-group comparison of pre-test/post-test gain scores and a within- and between-group comparison of course examination scores. A convenience sample of students enrolled in medical-surgical nursing was used. One group (n = 36) was made up of traditional students and the other group (n = 36) consisted of second-degree students. Four learning units were included in this study. Pre- and post-tests were given on the first two units. Course examination scores from all four units were compared. In one cohort, two of the units were taught via lecture format and two using constructivist activities; these methods were reversed for the other cohort. The conceptual basis for this study derives from neuroscience and cognitive psychology. Learning is defined as the growth of new dendrites. Cognitive psychologists view learning as a constructive activity in which new knowledge is built on an internal foundation of existing knowledge. Constructivist teaching strategies are designed to stimulate the brain's natural learning ability. There was a statistically significant difference based on type of teaching strategy (t = -2.078, df = 270, p = .039, d = .25), with higher mean scores on the examinations covering brain-based learning units. There was no statistical significance based on type of student. Qualitative data collection was conducted in an on-line forum at the end of the semester. 
Students had overall positive responses about the

  12. Seeing the unseen: Second-order correlation learning in 7- to 11-month-olds.

    PubMed

    Yermolayeva, Yevdokiya; Rakison, David H

    2016-07-01

    We present four experiments with the object-examining procedure that investigated 7-, 9-, and 11-month-olds' ability to associate two object features that were never presented simultaneously. In each experiment, infants were familiarized with a number of 3D objects that incorporated different correlations among the features of those objects and the body of the objects (e.g., Part A and Body 1, and Part B and Body 1). Infants were then tested with objects with a novel body that either possessed both of the parts that were independently correlated with one body during familiarization (e.g., Parts A and B on Body 3) or parts that were attached to two different bodies during familiarization. The experiments demonstrate that infants as young as 7 months of age are capable of this kind of second-order correlation learning. Furthermore, by at least 11 months of age infants develop a representation for the object that incorporates both of the features they experienced during training. We suggest that the ability to learn second-order correlations represents a powerful but as yet largely unexplored process for generalization in the first years of life. PMID:27038738

  13. Learning to Order Words: A Connectionist Model of Heavy NP Shift and Accessibility Effects in Japanese and English

    ERIC Educational Resources Information Center

    Chang, Franklin

    2009-01-01

    Languages differ from one another and must therefore be learned. Processing biases in word order can also differ across languages. For example, heavy noun phrases tend to be shifted to late sentence positions in English, but to early positions in Japanese. Although these language differences suggest a role for learning, most accounts of these…

  14. Lexical Learning in Bilingual Adults: The Relative Importance of Short-Term Memory for Serial Order and Phonological Knowledge

    ERIC Educational Resources Information Center

    Majerus, Steve; Poncelet, Martine; Van der Linden, Martial; Weekes, Brendan S.

    2008-01-01

    Studies of monolingual speakers have shown a strong association between lexical learning and short-term memory (STM) capacity, especially STM for serial order information. At the same time, studies of bilingual speakers suggest that phonological knowledge is the main factor that drives lexical learning. This study tested these two hypotheses…

  15. An Investigation of Four Hypotheses Concerning the Order by Which 4-Year-Old Children Learn the Alphabet Letters

    ERIC Educational Resources Information Center

    Justice, Laura M.; Pence, Khara; Bowles, Ryan B.; Wiggins, Alice

    2006-01-01

    This study tested four complementary hypotheses to characterize intrinsic and extrinsic influences on the order with which preschool children learn the names of individual alphabet letters. The hypotheses included: (a) "own-name advantage," which states that children learn those letters earlier which occur in their own names, (b) the "letter-order…

  16. Automatic audiovisual integration in speech perception.

    PubMed

    Gentilucci, Maurizio; Cattaneo, Luigi

    2005-11-01

    Two experiments aimed to determine whether features of both the visual and acoustical inputs are always merged into the perceived representation of speech and whether this audiovisual integration is based on either cross-modal binding functions or on imitation. In a McGurk paradigm, observers were required to repeat aloud a string of phonemes uttered by an actor (acoustical presentation of phonemic string) whose mouth, in contrast, mimicked pronunciation of a different string (visual presentation). In a control experiment participants read the same printed strings of letters. This condition aimed to analyze the pattern of voice and the lip kinematics controlling for imitation. In the control experiment and in the congruent audiovisual presentation, i.e. when the articulation mouth gestures were congruent with the emission of the string of phones, the voice spectrum and the lip kinematics varied according to the pronounced strings of phonemes. In the McGurk paradigm the participants were unaware of the incongruence between visual and acoustical stimuli. The acoustical analysis of the participants' spoken responses showed three distinct patterns: the fusion of the two stimuli (the McGurk effect), repetition of the acoustically presented string of phonemes, and, less frequently, of the string of phonemes corresponding to the mouth gestures mimicked by the actor. However, the analysis of the latter two responses showed that the formant 2 of the participants' voice spectra always differed from the value recorded in the congruent audiovisual presentation. It approached the value of the formant 2 of the string of phonemes presented in the other modality, which was apparently ignored. The lip kinematics of the participants repeating the string of phonemes acoustically presented were influenced by the observation of the lip movements mimicked by the actor, but only when pronouncing a labial consonant. 
The data are discussed in favor of the hypothesis that features of both

  17. Teleconferences and Audiovisual Materials in Earth Science Education

    NASA Astrophysics Data System (ADS)

    Cortina, L. M.

    2007-05-01

    Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoacan 04510, Mexico. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. However, in some cases resources may go largely unused, and a number of factors may be cited, such as logistic problems, restricted internet and telecommunication service access, misinformation, etc. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audiovisual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. Courses by teleconference require student and teacher effort without physical contact, but participants have access to multimedia to support their presentations. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of the natural phenomena integral to the Earth Sciences. Cooperation with international partnerships, providing access to new materials, experiences, and field practices, will greatly add to our efforts. We will present specific examples of the experiences that we have had at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  18. Audiovisual associations alter the perception of low-level visual motion.

    PubMed

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system and that early-level visual motion processing has some potential role. PMID:25873869

  19. Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants

    PubMed Central

    Kopp, Franziska; Dietrich, Claudia

    2013-01-01

    Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071

  20. Representation-based user interfaces for the audiovisual library of the year 2000

    NASA Astrophysics Data System (ADS)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of an audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators to existing content, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document's contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the documents' contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of the program aiming at developing, for image and sound documents, an experimental counterpart to the digitized text reading workstation of this library.

  1. Sight and sound out of synch: Fragmentation and renormalisation of audiovisual integration and subjective timing

    PubMed Central

    Freeman, Elliot D.; Ipser, Alberta; Palmbaha, Austra; Paunoiu, Diana; Brown, Peter; Lambert, Christian; Leff, Alex; Driver, Jon

    2013-01-01

    The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream–Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing

  2. Audiovisual signal compression: the 64/P codecs

    NASA Astrophysics Data System (ADS)

    Jayant, Nikil S.

    1996-02-01

    Video codecs operating at integral multiples of 64 kbps are well known in visual communications technology as p * 64 systems (p equals 1 to 24). Originally developed as a class of ITU standards, these codecs have served as core technology for videoconferencing, and they have also influenced the MPEG standards for addressable video. Video compression in the above systems is provided by motion compensation followed by discrete cosine transform (DCT) quantization of the residual signal. Notwithstanding the promise of higher bit rates in emerging generations of networks and storage devices, there is a continuing need for facile audiovisual communications over voiceband and wireless modems. Consequently, video compression at bit rates lower than 64 kbps is a widely sought capability. In particular, video codecs operating at rates in the neighborhood of 64, 32, 16, and 8 kbps seem to have great practical value, being matched respectively to the transmission capacities of basic rate ISDN (64 kbps) and of voiceband modems that represent high (32 kbps), medium (16 kbps), and low-end (8 kbps) grades in current modem technology. The purpose of this talk is to describe the state of video technology at these transmission rates, without getting too literal about the specific speeds mentioned above. In other words, we expect codecs designed for non-submultiples of 64 kbps, such as 56 kbps or 19.2 kbps, as well as for submultiples of 64 kbps, depending on varying constraints on modem rate and the transmission rate needed for the voice-coding part of the audiovisual communications link. The MPEG-4 video standards process is a natural platform on which to examine current capabilities in sub-ISDN rate video coding, and we shall draw appropriately from this process in describing video codec performance. Inherent in this summary is a reinforcement of motion compensation and DCT as viable building blocks of video compression systems, although there is a need for improving signal quality
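    The rate arithmetic behind the p * 64 family, and the split of an audiovisual link between voice and video coding, can be sketched in a few lines; the 16 kbps voice-codec figure below is an illustrative assumption, not a value from the talk:

```python
# Illustrative sketch: p * 64 kbps codec rates, and the video bit rate left
# over once a voice codec takes its share of an audiovisual link.
def p64_rates(p_max=24):
    """Return the p * 64 kbps rates for p = 1 .. p_max."""
    return [p * 64 for p in range(1, p_max + 1)]

def video_rate(link_kbps, voice_kbps):
    """Video budget = total link rate minus the voice-coding rate."""
    return link_kbps - voice_kbps

rates = p64_rates()
print(rates[0], rates[-1])  # 64 kbps up to 1536 kbps
# A 64 kbps basic-rate ISDN channel with a (hypothetical) 16 kbps voice
# codec leaves 48 kbps for video:
print(video_rate(64, 16))
```

The same subtraction applies to the non-submultiple cases the talk mentions, e.g. a 19.2 kbps modem link minus its voice-coding share.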

  3. When audiovisual correspondence disturbs visual processing.

    PubMed

    Hong, Sang Wook; Shim, Won Mok

    2016-05-01

    Multisensory integration is known to create a more robust and reliable perceptual representation of one's environment. Specifically, a congruent auditory input can make a visual stimulus more salient, consequently enhancing the visibility and detection of the visual target. However, it remains largely unknown whether a congruent auditory input can also impair visual processing. In the current study, we demonstrate that temporally congruent auditory input disrupts visual processing, consequently slowing down visual target detection. More importantly, this cross-modal inhibition occurs only when the contrast of visual targets is high. When the contrast of visual targets is low, enhancement of visual target detection is observed, consistent with the prediction based on the principle of inverse effectiveness (PIE) in cross-modal integration. The switch of the behavioral effect of audiovisual interaction from benefit to cost further extends the PIE to encompass the suppressive cross-modal interaction. PMID:26884130

  4. Audiovisual Materials and Programming for Children: A Long Tradition.

    ERIC Educational Resources Information Center

    Doll, Carol A.

    1992-01-01

    Explores the use of audiovisual materials in children's programing at the Seattle Public Library prior to 1920. Kinds of materials discussed include pictures, reflectoscopes, films, sound recordings, lantern slides, and stereographs. (17 references) (MES)

  5. Proper Use of Audio-Visual Aids: Essential for Educators.

    ERIC Educational Resources Information Center

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  6. Audiovisual Enhancement of Classroom Teaching: A Primer for Law Professors.

    ERIC Educational Resources Information Center

    Johnson, Vincent Robert

    1987-01-01

    A discussion of audiovisual instruction in the law school classroom looks at the strengths, weaknesses, equipment and facilities needs and hints for classroom use of overhead projection, audiotapes and videotapes, and slides. (MSE)

  7. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  8. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  9. Audio-visual assistance in co-creating transition knowledge

    NASA Astrophysics Data System (ADS)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.

    2013-04-01

    Earth system and climate impact research results point to the tremendous ecological, economic and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes to our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition rather relies on pioneers who define new role models, on change agents who mainstream the concept of sufficiency, and on narratives that make different futures appealing. In order for the research community to be able to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge must be co-created by social science, natural science, and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodologies, terminologies, and knowledge levels of those involved differ, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way, with different levels of detail that provide entry points for users with different requirements. Two examples illustrate the advantages and restrictions of the approach.

  10. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
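    The article's cost functions for the recovery-with-learning setting are not reproduced here, but the flavor of a "simple search procedure" over a policy parameter can be sketched with the classic EOQ total-cost function as a stand-in; the demand, setup, and holding values below are hypothetical:

```python
# Hedged sketch: grid search for the order quantity minimizing the classic
# EOQ total cost TC(Q) = K*D/Q + h*Q/2 -- a stand-in for the article's
# recoverable-item cost models, with hypothetical parameter values.
def total_cost(q, demand, setup, holding):
    """Annual ordering cost plus annual holding cost for lot size q."""
    return setup * demand / q + holding * q / 2.0

def search_optimal_q(demand, setup, holding, q_max=10000):
    """Simple exhaustive search over integer lot sizes, mirroring the
    article's use of simple search procedures for policy parameters."""
    return min(range(1, q_max + 1),
               key=lambda q: total_cost(q, demand, setup, holding))

# With D = 1000/yr, K = 50 per order, h = 4 per unit-year, the search agrees
# with the closed form Q* = sqrt(2*K*D/h) = sqrt(25000), about 158.
print(search_optimal_q(1000, 50, 4.0))
```

In the article's models the unit production time shrinks with cumulative output, so the cost function changes between iterations, which is why a search procedure is used instead of a closed form.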

  11. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    Noisy low-resolution (LR) images are common in real applications, but many existing image magnification algorithms cannot obtain good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm combines the advantages of regularization-based and learning-based methods. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while simultaneously suppressing its noise. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm also provides better visual quality on natural LR images.
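    As a rough illustration of the first step, a plain unconstrained TV-regularized denoiser can be minimized by gradient descent; this is a generic sketch only, not the constrained magnification model the paper proposes:

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.1, iters=100):
    """Gradient descent on ||u - img||^2 / 2 + lam * TV_eps(u).
    Generic unconstrained sketch; the paper adds a constraint to its TV model."""
    u = img.astype(float).copy()
    eps = 1e-8  # smoothing constant so the TV gradient is defined everywhere
    for _ in range(iters):
        dx = np.roll(u, -1, axis=1) - u          # forward differences
        dy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        px, py = dx / mag, dy / mag              # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)      # data fidelity + TV smoothing
    return u

rng = np.random.default_rng(0)
noisy = 0.5 + 0.2 * rng.standard_normal((32, 32))  # flat image plus noise
smooth = tv_denoise(noisy)
print(smooth.std() < noisy.std())  # TV smoothing reduces the noise variance
```

The paper's second step then recovers texture detail via sparse coding over learned dictionaries, which this sketch does not attempt.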

  12. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for applications in content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. The traditional scheme of concatenating features of different views into a long vector is inappropriate, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective functions of HD-MSL and obtain the view combination coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification. PMID:25415948

  13. [Aesthetics of the grotesque and audiovisual production for health education: segregation or empathy? The case of leishmaniasis in Brazil].

    PubMed

    Pimenta, Denise Nacif; Leandro, Anita; Schall, Virgínia Torres

    2007-05-01

    In order to understand audiovisual production on health and disease and the pedagogical effects of health education mediated by educational videos, this article analyzes the audiovisual production on leishmaniasis in Brazil. Fourteen educational videos showed the hegemony of TV aesthetics, particularly a journalistic paradigm with constant use of voice-over, inducing the fixation of meanings. Rather than stimulating critical reflection on the social circumstances of leishmaniasis, the videos' discourse and images promote a banal, non-critical, stigmatized representation of the disease. Individuals with the disease are subjected to visual exposure rather than being involved critically and sensitively as protagonists in prevention and treatment. The article thus presents approaches based on studies of visual and health anthropology, arguing in favor of an innovative approach to the production and utilization of educational videos in health education, mediated through audiovisuals. Health education should respect and engage in dialogue with various cultures, subjectivity, and citizenship, developing an audiovisual aesthetics (in terms of narrative and image) that fosters an educational praxis in the field of collective health. PMID:17486238

  14. Crossmodal and incremental perception of audiovisual cues to emotional speech.

    PubMed

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests with video clips of emotional utterances collected via a variant of the well-known Velten method. More specifically, we recorded speakers who displayed positive or negative emotions, which were congruent or incongruent with the (emotional) lexical content of the uttered sentence. To test this, we conducted two experiments. The first is a perception experiment in which Czech participants, who do not speak Dutch, rate the perceived emotional state of Dutch speakers in a bimodal (audiovisual) or a unimodal (audio- or vision-only) condition. It was found that incongruent emotional speech leads to significantly more extreme perceived-emotion scores than congruent emotional speech, with the difference between congruent and incongruent emotional speech being larger for the negative than for the positive conditions. Interestingly, the largest overall differences between congruent and incongruent emotions were found for the audio-only condition, which suggests that posing an incongruent emotion has a particularly strong effect on the spoken realization of emotions. The second experiment uses a gating paradigm to test the recognition speed for various emotional expressions from a speaker's face. In this experiment participants were presented with the same clips as in Experiment 1, but this time vision-only. The clips were shown in successive segments (gates) of increasing duration. Results show that participants are surprisingly accurate in their recognition of the various emotions, already reaching high recognition scores in the first gate (after only 160 ms). Interestingly, the recognition scores

  15. A Second-Order Implicit Knowledge: Its Implications for E-Learning

    ERIC Educational Resources Information Center

    Noaparast, Khosrow Bagheri

    2014-01-01

    The dichotomous epistemology of explicit/implicit knowledge has led to two parallel lines of research; one putting the emphasis on explicit knowledge which has been the main road of e-learning, and the other taking implicit knowledge as the core of learning which has shaped a critical line to the current e-learning. It is argued in this article…

  16. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or... audiovisual productions (e.g., short and long versions or foreign-language versions) are prepared, keep...

  17. Our nation's wetlands (video). Audio-Visual

    SciTech Connect

    Not Available

    1990-01-01

    The Department of the Interior is custodian of approximately 500 million acres of federally owned land and has an important role to play in the management of wetlands. To contribute to the President's goal of no net loss of America's remaining wetlands, the Department of the Interior has initiated a 3-point program consisting of wetlands protection, restoration, and research: Wetlands Protection--Reduce wetlands losses on federally owned lands and encourage state and private landholders to practice wetlands conservation; Wetlands Restoration--Increase wetlands gains through the restoration and creation of wetlands on both public and private lands; Wetlands Research--Provide a foundation of scientific knowledge to guide future actions and decisions about wetlands. The audiovisual is a slide/tape-to-video transfer illustrating the various ways Interior bureaus are working to preserve our Nation's wetlands. The tape features an introduction by Secretary Manuel Lujan on the importance of wetlands and recognizing the benefit of such programs as the North American Waterfowl Management Program.

  18. Neural circuits in auditory and audiovisual memory.

    PubMed

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to a changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty of obtaining a robust animal model for studying auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in processing, integrating, and retaining communication information. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26656069

  19. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b) Publications... published with grant support and, if feasible, on any publication reporting the results of, or describing, a... under subgrants. (2) Audiovisuals produced as research instruments or for documenting experimentation...

  20. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b) Publications... published with grant support and, if feasible, on any publication reporting the results of, or describing, a... under subgrants. (2) Audiovisuals produced as research instruments or for documenting experimentation...

  1. Engineering the path to higher-order thinking in elementary education: A problem-based learning approach for STEM integration

    NASA Astrophysics Data System (ADS)

    Rehmat, Abeera Parvaiz

    As we progress into the 21st century, higher-order thinking skills and achievement in science and math are essential to meet the educational requirements of STEM careers. Educators need to think of innovative ways to engage and prepare students for current and future challenges while cultivating an interest among students in STEM disciplines. An instructional pedagogy that can capture students' attention, support interdisciplinary STEM practices, and foster higher-order thinking skills is problem-based learning. Problem-based learning, embedded in the social constructivist view of teaching and learning (Savery & Duffy, 1995), promotes self-regulated learning that is enhanced through exploration, cooperative social activity, and discourse (Fosnot, 1996). This quasi-experimental mixed-methods study was conducted with 98 fourth-grade students. The study utilized STEM content assessments, a standardized critical thinking test, a STEM attitude survey, a PBL questionnaire, and field notes from classroom observations to investigate the impact of problem-based learning on students' content knowledge, critical thinking, and attitudes toward STEM. Subsequently, it explored students' experiences of STEM integration in a PBL environment. The quantitative results revealed a significant difference between groups with regard to content knowledge, critical thinking skills, and STEM attitudes. From the qualitative results, three themes emerged: learning approaches, increased interaction, and design and engineering implementation. Across the overall data set, students described the PBL environment as highly interactive, prompting them to employ multiple approaches, including design and engineering, to solve the problem.

  2. Website Analysis as a Tool for Task-Based Language Learning and Higher Order Thinking in an EFL Context

    ERIC Educational Resources Information Center

    Roy, Debopriyo

    2014-01-01

    Besides focusing on grammar, writing skills, and web-based language learning, researchers in "CALL" and second language acquisition have also argued for the importance of promoting higher-order thinking skills in ESL (English as Second Language) and EFL (English as Foreign Language) classrooms. There is solid evidence supporting the…

  3. Order Effects on Neuropsychological Test Performance of Normal, Learning Disabled and Low Functioning Children: A Cross-Cultural Study.

    ERIC Educational Resources Information Center

    Akande, Adebowale

    2000-01-01

    Investigated possible priming effect of two neuropsychological tests, the Booklet Category Test (BCT) and Wisconsin Card Sorting Test (WCST). Obtained counterbalanced order effects on like-aged sample of 63 South African elementary school students (normally- achieving, low-functioning, learning-disabled). Found a significant effect of set-shifting…

  4. The Impact of Learning Driven Constructs on the Perceived Higher Order Cognitive Skills Improvement: Multimedia vs. Text

    ERIC Educational Resources Information Center

    Bagarukayo, Emily; Weide, Theo; Mbarika, Victor; Kim, Min

    2012-01-01

    The study aims at determining the impact of learning driven constructs on Perceived Higher Order Cognitive Skills (HOCS) improvement when using multimedia and text materials. Perceived HOCS improvement is the attainment of HOCS based on the students' perceptions. The research experiment undertaken using a case study was conducted on 223 students…

  5. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. 
These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate…

  6. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
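
    The prediction error reported above is standard root mean square error. A minimal sketch follows, with illustrative signal values rather than the study's respiratory data or its kernel density estimation predictor:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error between real and predicted respiratory signals."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Illustrative respiratory displacement samples (cm), not study data.
actual = [0.0, 0.5, 1.0, 0.5, 0.0]
predicted = [0.1, 0.4, 1.1, 0.6, -0.1]
print(rmse(actual, predicted))  # ~0.1
```

    A lower RMSE for AV-guided breathing, averaged over studies and prediction times, is what the reported 26% and 29% reductions quantify.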

  7. Prefrontal Neuronal Responses during Audiovisual Mnemonic Processing

    PubMed Central

    Hwang, Jaewon

    2015-01-01

    During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. For the nonmatch conditions, a change in the auditory component (vocalization), the visual component (face), or both components was detected. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is not only involved in the perceptual processing of faces and vocalizations but also in their mnemonic processing. PMID:25609614

  8. (Dis)ordering Teacher Education: From Problem Students to Problem-based Learning.

    ERIC Educational Resources Information Center

    Gale, Trevor

    2000-01-01

    Examines how teacher educators should respond to the growing body of student teachers with learning disabilities, focusing on one case, outlining the situation in Australian universities, and questioning the utility of current definitions of learning disabilities and difficulties, suggesting that teacher educators must rethink their approach to…

  9. Granularity and the Acquisition of Grammatical Gender: How Order-of-Acquisition Affects What Gets Learned

    ERIC Educational Resources Information Center

    Arnon, Inbal; Ramscar, Michael

    2012-01-01

    Why do adult language learners typically fail to acquire second languages with native proficiency? Does prior linguistic experience influence the size of the "units" adults attend to in learning, and if so, how does this influence what gets learned? Here, we examine these questions in relation to grammatical gender, which adult learners almost…

  10. PBL-GIS in Secondary Geography Education: Does It Result in Higher-Order Learning Outcomes?

    ERIC Educational Resources Information Center

    Liu, Yan; Bui, Elisabeth N.; Chang, Chew-Hung; Lossman, Hans G.

    2010-01-01

    This article presents research on evaluating problem-based learning using GIS technology in a Singapore secondary school. A quasi-experimental research design was carried to test the PBL pedagogy (PBL-GIS) with an experimental group of students and compare their learning outcomes with a control group who were exposed to PBL but not GIS. The…

  11. Ordering Subjects: Actor-Networks and Intellectual Technologies in Lifelong Learning.

    ERIC Educational Resources Information Center

    Edwards, Richard

    2003-01-01

    Argues that discourses of lifelong learning act as intellectual technologies that construct individuals as subjects in a learning society. Discuses three discourses using actor-network theory: (1) economics/human capital (individuals as accumulators of skills for competitiveness); (2) humanistic psychology (individuals seeking fulfilment through…

  12. The Black Record: A Selective Discography of Afro-Americana on Audio Discs Held by the Audio/Visual Department, John M. Olin Library.

    ERIC Educational Resources Information Center

    Dain, Bernice, Comp.; Nevin, David, Comp.

    The present revised and expanded edition of this document is an inclusive cumulation. A few items have been included which are on order as new to the collection or as replacements. This discography is intended to serve primarily as a local user's guide. The call number preceding each entry is based on the Audio-Visual Department's own, unique…

  13. Lessons learned from implementation of computerized provider order entry in 5 community hospitals: a qualitative study

    PubMed Central

    2013-01-01

    Background Computerized Provider Order Entry (CPOE) can improve patient safety, quality and efficiency, but hospitals face a host of barriers to adopting CPOE, ranging from resistance among physicians to the cost of the systems. In response to the incentives for meaningful use of health information technology and other market forces, hospitals in the United States are increasingly moving toward the adoption of CPOE. The purpose of this study was to characterize the experiences of hospitals that have successfully implemented CPOE. Methods We used a qualitative approach to observe clinical activities and capture the experiences of physicians, nurses, pharmacists and administrators at five community hospitals in Massachusetts (USA) that adopted CPOE in the past few years. We conducted formal, structured observations of care processes in diverse inpatient settings within each of the hospitals and completed in-depth, semi-structured interviews with clinicians and staff by telephone. After transcribing the audiorecorded interviews, we analyzed the content of the transcripts iteratively, guided by principles of the Immersion and Crystallization analytic approach. Our objective was to identify attitudes, behaviors and experiences that would constitute useful lessons for other hospitals embarking on CPOE implementation. Results Analysis of observations and interviews resulted in findings about the CPOE implementation process in five domains: governance, preparation, support, perceptions and consequences. Successful institutions implemented clear organizational decision-making mechanisms that involved clinicians (governance). They anticipated the need for education and training of a wide range of users (preparation). These hospitals deployed ample human resources for live, in-person training and support during implementation. Successful implementation hinged on the ability of clinical leaders to address and manage perceptions and the fear of change. 
Implementation proceeded…

  14. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2001-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in the other language has to be chosen between two alternatives. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV). Identification accuracy for these words, produced by two talkers, was also assessed. During the pretest, accuracy was lowest for A stimuli, implying that insufficient translation ability and listening ability interact when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words on the basis of visual information alone. The effect of translation training with AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  15. Promoting Higher Order Thinking Skills via IPTEACES e-Learning Framework in the Learning of Information Systems Units

    ERIC Educational Resources Information Center

    Isaias, Pedro; Issa, Tomayess; Pena, Nuno

    2014-01-01

    When developing and working with various types of devices from a supercomputer to an iPod Mini, it is essential to consider the issues of Human Computer Interaction (HCI) and Usability. Developers and designers must incorporate HCI, Usability and user satisfaction in their design plans to ensure that systems are easy to learn, effective,…

  16. The Effects of Variation on Learning Word Order Rules by Adults with and without Language-Based Learning Disabilities

    ERIC Educational Resources Information Center

    Grunow, Hope; Spaulding, Tammie J.; Gomez, Rebecca L.; Plante, Elena

    2006-01-01

    Non-adjacent dependencies characterize numerous features of English syntax, including certain verb tense structures and subject-verb agreement. This study utilized an artificial language paradigm to examine the contribution of item variability to the learning of these types of dependencies. Adult subjects with and without language-based learning…

  17. Audiovisual non-verbal dynamic faces elicit converging fMRI and ERP responses.

    PubMed

    Brefczynski-Lewis, Julie; Lowitszch, Svenja; Parsons, Michael; Lemieux, Susan; Puce, Aina

    2009-05-01

    In an everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input-a phenomenon previously well-studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal, human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum in which AV activation was greater than either modality alone, but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed Common-activation in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for auditory N140 and face-sensitive N170, and late AV maximum and common-activation effects. Based on convergence between fMRI and ERP data, we propose a mechanism where a multisensory stimulus may be signaled or facilitated as early as 60 ms and facilitated in sensory-specific regions by increasing processing speed (at N170) and efficiency (decreasing amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion. PMID:19384602

  18. A model-based comparison of three theories of audiovisual temporal recalibration.

    PubMed

    Yarrow, Kielan; Minaei, Shora; Arnold, Derek H

    2015-12-01

    Observers change their audio-visual timing judgements after exposure to asynchronous audiovisual signals. The mechanism underlying this temporal recalibration is currently debated. Three broad explanations have been suggested. According to the first, the time it takes for sensory signals to propagate through the brain has changed. The second explanation suggests that decisional criteria used to interpret signal timing have changed, but not time perception itself. A final possibility is that a population of neurones collectively encode relative times, and that exposure to a repeated timing relationship alters the balance of responses in this population. Here, we simplified each of these explanations to its core features in order to produce three corresponding six-parameter models, which generate contrasting patterns of predictions about how simultaneity judgements should vary across four adaptation conditions: No adaptation, synchronous adaptation, and auditory leading/lagging adaptation. We tested model predictions by fitting data from all four conditions simultaneously, in order to assess which model/explanation best described the complete pattern of results. The latency-shift and criterion-change models were better able to explain results for our sample as a whole. The population-code model did, however, account for improved performance following adaptation to a synchronous adapter, and best described the results of a subset of observers who reported least instances of synchrony. PMID:26545105
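
    The latency-shift explanation can be illustrated with a toy simultaneity-judgement model; the window and noise parameters below are hypothetical stand-ins, not the paper's six-parameter fits:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_simultaneous(soa_ms, shift_ms=0.0, sigma_ms=60.0, criterion_ms=100.0):
    """Probability of a 'simultaneous' response under a toy latency-shift model:
    perceived asynchrony is soa + shift plus Gaussian noise, and 'simultaneous'
    is reported when it falls inside a symmetric criterion window."""
    lo = phi((-criterion_ms - (soa_ms + shift_ms)) / sigma_ms)
    hi = phi((criterion_ms - (soa_ms + shift_ms)) / sigma_ms)
    return hi - lo

# A propagation-latency shift moves the whole psychometric curve, so physical
# synchrony is endorsed as simultaneous less often after adaptation.
print(p_simultaneous(0.0) > p_simultaneous(0.0, shift_ms=40.0))  # True
```

    A criterion-change account would instead alter the window edges (here `criterion_ms`) while leaving `shift_ms` at zero; distinguishing the accounts requires fitting all adaptation conditions jointly, as the paper does.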

  19. Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech

    ERIC Educational Resources Information Center

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…

  20. Media Literacy and Audiovisual Languages: A Case Study from Belgium

    ERIC Educational Resources Information Center

    Van Bauwel, Sofie

    2008-01-01

    This article examines the use of media in the construction of a "new" language for children. We studied how children acquire and use media literacy skills through their engagement in an educational art project. This media literacy project is rooted in the realm of audiovisual media, within which children's sound and visual worlds are the focus of…

  1. Audiovisual Integration in Noise by Children and Adults

    ERIC Educational Resources Information Center

    Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G.; Innes-Brown, Hamish; Shivdasani, Mohit N.; Paolini, Antonio G.

    2010-01-01

    The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-,…

  2. Audio-Visual Training in Children with Reading Disabilities

    ERIC Educational Resources Information Center

    Magnan, Annie; Ecalle, Jean

    2006-01-01

    This study tested the effectiveness of audio-visual training in the discrimination of the phonetic feature of voicing on the recognition of written words by young children deemed to at risk of dyslexia (experiment 1) as well as on dyslexic children's phonological skills (experiment 2). In addition, the third experiment studied the effectiveness of…

  3. Selected Bibliography and Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This guide to resource materials on environmental education is in two sections: 1) Selected Bibliography of Printed Materials, compiled in April, 1970; and, 2) Audio-Visual materials, Films and Filmstrips, compiled in February, 1971. 99 book annotations are given with an indicator of elementary, junior or senior high school levels. Other book…

  4. Neural Development of Networks for Audiovisual Speech Comprehension

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  5. Multinational Exchange Mechanisms of Educational Audio-Visual Materials. Appendixes.

    ERIC Educational Resources Information Center

    Center of Studies and Realizations for Permanent Education, Paris (France).

    These appendixes contain detailed information about the existing audiovisual material exchanges which served as the basis for the analysis contained in the companion report. Descriptions of the objectives, structure, financing and services of the following national and international organizations are included: (1) Educational Resources Information…

  6. Selected Audio-Visual Materials for Consumer Education.

    ERIC Educational Resources Information Center

    Oppenheim, Irene

    This monograph provides an annotated listing of suggested audiovisual materials which teachers should consider as they plan consumer education programs. The materials are divided into a general section on consumer education and a section on specific topics, such as credit, decision making, health, insurance, money management, and others. The…

  7. The Audio-Visual Equipment Directory. Seventeenth Edition.

    ERIC Educational Resources Information Center

    Herickes, Sally, Ed.

    The following types of audiovisual equipment are catalogued: 8 mm. and 16 mm. motion picture projectors, filmstrip and sound filmstrip projectors, slide projectors, random access projection equipment, opaque, overhead, and micro-projectors, record players, special purpose projection equipment, audio tape recorders and players, audio tape…

  8. Audio-Visual Equipment Depreciation. RDU-75-07.

    ERIC Educational Resources Information Center

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  9. Audiovisual Market Place 1972-1973. A Multimedia Guide.

    ERIC Educational Resources Information Center

    1972

    The audiovisual (AV) field has been expanding rapidly, although in the last year or so there is evidence of a healthy slowing down in growth. This fourth edition of the guide to the AV industry represents an attempt to keep abreast of the information and to provide a single publication listing the many types of AV organizations and products which…

  10. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  11. Audio-Visual Communications, A Tool for the Professional

    ERIC Educational Resources Information Center

    Journal of Environmental Health, 1976

    1976-01-01

    The manner in which the Cuyahoga County, Ohio Department of Environmental Health utilizes audio-visual presentations for communication with business and industry, professional public health agencies and the general public is presented. Subjects including food sanitation, radiation protection and safety are described. (BT)

  12. Audio-guided audiovisual data segmentation, indexing, and retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1998-12-01

    While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
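
    The coarse-level step can be sketched with two classic short-term features, energy and zero-crossing rate; the thresholds here are hypothetical stand-ins for the paper's morphological and statistical analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def short_term_features(frame):
    """Short-term energy and zero-crossing rate of one audio frame."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy, zcr

def classify_frame(frame, silence_thresh=1e-4, speech_zcr=0.2):
    """Coarse-level label for a frame. Threshold values are illustrative."""
    energy, zcr = short_term_features(frame)
    if energy < silence_thresh:
        return "silence"
    if zcr > speech_zcr:
        return "speech"            # unvoiced speech crosses zero very often
    return "music/environmental"   # sustained tones cross zero less often

t = np.linspace(0.0, 0.02, 160)    # one 20 ms frame at 8 kHz
print(classify_frame(np.zeros(160)))                # silence
print(classify_frame(np.sin(2 * np.pi * 440 * t)))  # music/environmental
print(classify_frame(rng.standard_normal(160)))     # speech (noise-like, high ZCR)
```

    The fine-level step would then model the feature trajectories of each non-silent class with one hidden Markov model per sound class.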

  13. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    ERIC Educational Resources Information Center

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  14. Audiovisual Aids and Techniques in Managerial and Supervisory Training.

    ERIC Educational Resources Information Center

    Rigg, Robinson P.

    An attempt is made to show the importance of modern audiovisual (AV) aids and techniques to management training. The first two chapters give the background to the present situation facing the training specialist. Chapter III considers the AV aids themselves in four main groups: graphic materials, display equipment which involves projection, and…

  15. Searching AVLINE for Curriculum-Related Audiovisual Instructional Materials.

    ERIC Educational Resources Information Center

    Bridgman, Charles F.; Suter, Emanuel

    1979-01-01

    Ways in which the National Library of Medicine's online data file of audiovisual instructional materials (AVLINE) can be searched are described. The search approaches were developed with the assistance of data analysts at NLM trained in reference services. AVLINE design, search strategies, and acquisition of the materials are reported. (LBH)

  16. Guide to Audiovisual Terminology. Product Information Supplement, Number 6.

    ERIC Educational Resources Information Center

    Trzebiatowski, Gregory, Ed.

    1968-01-01

    The terms appearing in this glossary have been specifically selected for use by educators from a larger text, which was prepared by the Commission on Definition and Terminology of the Department of Audiovisual Instruction of the National Education Association. Specialized areas covered in the glossary include audio reproduction, audiovisual…

  17. Learning to represent spatial transformations with factored higher-order Boltzmann machines.

    PubMed

    Memisevic, Roland; Hinton, Geoffrey E

    2010-06-01

    To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans. PMID:20141471
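
    The factorization described above can be sketched directly: each factor filters both images, the filter responses are multiplied, and the products are projected onto the hidden units. The shapes and the sigmoid inference step below are a minimal reading of the model, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, n_h, n_f = 64, 64, 32, 16  # image-1 pixels, image-2 pixels, hidden units, factors

# The full three-way tensor W[i, j, k] is approximated by
# sum_f wx[i, f] * wy[j, f] * wh[k, f], a sum of three-way outer products,
# so the parameter count grows linearly rather than cubically.
wx = 0.01 * rng.standard_normal((n_x, n_f))
wy = 0.01 * rng.standard_normal((n_y, n_f))
wh = 0.01 * rng.standard_normal((n_h, n_f))

def hidden_probabilities(x, y):
    """P(h_k = 1 | x, y) in the factored gated RBM (bias terms omitted)."""
    fx = x @ wx                      # factor responses to image 1: (n_f,)
    fy = y @ wy                      # factor responses to image 2: (n_f,)
    return 1.0 / (1.0 + np.exp(-(wh @ (fx * fy))))

x = rng.standard_normal(n_x)         # illustrative flattened image patches
y = rng.standard_normal(n_y)
p = hidden_probabilities(x, y)
print(p.shape)  # (32,)
```

    Each column of wx (or wy) acts as an image filter, which is why learning this model recovers filter pairs suited to representing transformations.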

  18. Context-specific effects of musical expertise on audiovisual integration.

    PubMed

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  20. Vocabulary Level; One Variable Affecting Learning from Audiovisual Media.

    ERIC Educational Resources Information Center

    Lewis, Richard F.

    Vocabulary level of 10 special students was determined and compared to their supposed level of proficiency on the Functional Basic Word List for Special Pupils (Tudyman and Groelle, 1958). Ss were five educable mentally retarded (EMR) students (CA 9-6 to 12-0, IQ 64-77, MA 6-6 to 9-7) and five matched emotionally disturbed students. Word sampling…

  1. Audio/Visual Aids: A Study of the Effect of Audio/Visual Aids on the Comprehension Recall of Students.

    ERIC Educational Resources Information Center

    Bavaro, Sandra

    A study investigated whether the use of audio/visual aids had an effect upon comprehension recall. Thirty fourth-grade students from an urban public school were randomly divided into two equal samples of 15. One group was given a story to read (print only), while the other group viewed a filmstrip of the same story, thereby utilizing audio/visual…

  2. Audiovisual physics reports: students' video production as a strategy for the didactic laboratory

    NASA Astrophysics Data System (ADS)

    Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.

    2012-01-01

    Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can work as a motivating aspect to make them active and reflective in their learning, intellectually engaged in a recursive process. This project was implemented in high school level physics laboratory classes, resulting in 22 videos that are treated as audiovisual reports and analysed under two components: theoretical and experimental. This kind of project allows students to spontaneously use features such as music, pictures, dramatization, animations, etc., even though the didactic laboratory is not usually a place where aesthetic and cultural dimensions are developed. This could be because digital media are more legitimately used as cultural tools than as teaching strategies.

  3. Authentic Role-Playing as Situated Learning: Reframing Teacher Education Methodology for Higher-Order Thinking

    ERIC Educational Resources Information Center

    Leaman, Lori Hostetler; Flanagan, Toni Michele

    2013-01-01

    This article draws from situated learning theory, teacher education research, and the authors' collaborative self-study to propose a teacher education pedagogy that may help to bridge the theory-into-practice gap for preservice teachers. First, we review the Interstate Teacher Assessment and Support Consortium standards to confirm the call for…

  4. Complementary lower-level and higher-order systems underpin imitation learning.

    PubMed

    Andrew, Matthew; Bennett, Simon J; Elliott, Digby; Hayes, Spencer J

    2016-04-01

    We examined whether the temporal representation developed during motor training with reduced-frequency knowledge of results (KR; feedback available on every other trial) was transferred to an imitation learning task. To this end, four groups first practised a three-segment motor sequence task with different KR protocols. Two experimental groups received reduced-frequency KR, one group received high-frequency KR (feedback available on every trial), and one received no-KR. Compared to the no-KR group, the groups that received KR learned the temporal goal of the movement sequence, as evidenced by increased accuracy and consistency across training. Next, all groups learned a single-segment movement that had the same temporal goal as the motor sequence task but required the imitation of biological and nonbiological motion kinematics. Kinematic data showed that whilst all groups imitated biological motion kinematics, the two experimental reduced-frequency KR groups were on average ∼800ms more accurate at imitating movement time than the high-frequency KR and no-KR groups. The interplay between learning biological motion kinematics and the transfer of temporal representation indicates imitation involves distinct, but complementary lower-level sensorimotor and higher-level cognitive processing systems. PMID:26897261

  5. Rhesus Monkeys (Macaca Mulatta) Maintain Learning Set Despite Second-Order Stimulus-Response Spatial Discontiguity

    ERIC Educational Resources Information Center

    Beran, Michael J.; Washburn, David A.; Rumbaugh, Duane M.

    2007-01-01

    In many discrimination-learning tests, spatial separation between stimuli and response loci disrupts performance in rhesus macaques. However, monkeys are unaffected by such stimulus-response spatial discontiguity when responses occur through joystick-based computerized movement of a cursor. To examine this discrepancy, five monkeys were tested on…

  6. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advancement in brain computer interfaces (BCI) technology allows people to actively interact in the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footsteps sound and actual humanoid's walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve motor decisions of the BCI's user and help in the feeling of control over it. Our results shed light on the possibility to increase robot's control through the combination of multisensory feedback to a BCI user. PMID:24987350

  8. Classroom Order and Student Learning in Late Elementary School: A Multilevel Transactional Model of Achievement Trajectories

    ERIC Educational Resources Information Center

    Gaskins, Clare S.; Herres, Joanna; Kobak, Roger

    2012-01-01

    This study examines the association between classroom order in 4th and 5th grades and student achievement growth over a school year. A three level transactional model tested the effects of classroom order on students' rates of growth in math and reading during the school year controlling for starting achievement levels, student risk factors, and…

  9. Higher Order Thinking Skills among Secondary School Students in Science Learning

    ERIC Educational Resources Information Center

    Saido, Gulistan Mohammed; Siraj, Saedah; Bin Nordin, Abu Bakar; Al Amedy, Omed Saadallah

    2015-01-01

    A central goal of science education is to help students to develop their higher order thinking skills to enable them to face the challenges of daily life. Enhancing students' higher order thinking skills is the main goal of the Kurdish Science Curriculum in the Iraqi-Kurdistan region. This study aimed at assessing 7th grade students' higher order…

  10. Assessment of Higher Order Thinking Skills. Current Perspectives on Cognition, Learning and Instruction

    ERIC Educational Resources Information Center

    Schraw, Gregory, Ed.; Robinson, Daniel H., Ed.

    2011-01-01

    This volume examines the assessment of higher order thinking skills from the perspectives of applied cognitive psychology and measurement theory. The volume considers a variety of higher order thinking skills, including problem solving, critical thinking, argumentation, decision making, creativity, metacognition, and self-regulation. Fourteen…

  11. Authentic Instruction for 21st Century Learning: Higher Order Thinking in an Inclusive School

    ERIC Educational Resources Information Center

    Preus, Betty

    2012-01-01

    The author studied a public junior high school identified as successfully implementing authentic instruction. Such instruction emphasizes higher order thinking, deep knowledge, substantive conversation, and value beyond school. To determine in what ways higher order thinking was fostered both for students with and without disabilities, the author…

  12. Educating for Identity & Resistance: Situated Learning among the Old Order Mennonites.

    ERIC Educational Resources Information Center

    Cowles, Spencer L.

    An essential aspect of Old Order Mennonite identity is located in the historical-cultural understanding of who they are as one group of God's people. Schooling is an intentional means of reinforcing this understanding, and it is finely tuned to prepare children for the Old Order way of life. As such, it emphasizes basic academics, acquisition of…

  13. Lipreading and audiovisual speech recognition across the adult lifespan: Implications for audiovisual integration.

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Hale, Sandra; Sommers, Mitchell

    2016-06-01

    In this study of visual (V-only) and audiovisual (AV) speech recognition in adults aged 22-92 years, the rate of age-related decrease in V-only performance was more than twice that in AV performance. Both auditory-only (A-only) and V-only performance were significant predictors of AV speech recognition, but age did not account for additional (unique) variance. Blurring the visual speech signal decreased speech recognition, and in AV conditions involving stimuli associated with equivalent unimodal performance for each participant, speech recognition remained constant from 22 to 92 years of age. Finally, principal components analysis revealed separate visual and auditory factors, but no evidence of an AV integration factor. Taken together, these results suggest that the benefit that comes from being able to see as well as hear a talker remains constant throughout adulthood and that changes in this AV advantage are entirely driven by age-related changes in unimodal visual and auditory speech recognition. PMID:27294718

  14. An Exploration of the Learning Resources Philosophy and Service Being Developed in the Junior Colleges of Minnesota.

    ERIC Educational Resources Information Center

    Philipson, Willard; And Others

    1968-01-01

    When the junior college develops a Learning Resources Center, the audiovisual program may necessarily be merged with the print program or other needed graphic and media programs. When this happens, it will be necessary to increase personnel (librarians, audiovisual specialists, and aides) to meet the various skill needs. For greatest efficiency, a…

  15. Your Most Essential Audiovisual Aid--Yourself!

    ERIC Educational Resources Information Center

    Hamp-Lyons, Elizabeth

    2012-01-01

    Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…

  16. Aging, Audiovisual Integration, and the Principle of Inverse Effectiveness

    PubMed Central

    Tye-Murray, Nancy; Sommers, Mitchell; Spehar, Brent; Myerson, Joel; Hale, Sandra

    2010-01-01

    Objective The purpose of this investigation was to compare the ability of young adults and older adults to integrate auditory and visual sentence materials under conditions of good and poor signal clarity. The Principle of Inverse Effectiveness (PoIE), which characterizes many neuronal and behavioral phenomena related to multisensory integration, asserts that as unimodal performance declines, integration is enhanced. Thus, the PoIE predicts that both young and older adults will show enhanced integration of auditory and visual speech stimuli when these stimuli are degraded. More importantly, because older adults' unimodal speech recognition skills decline in both the auditory and visual domains, the PoIE predicts that older adults will show enhanced integration during audiovisual speech recognition relative to young adults. The present study provides a test of these predictions. Design Fifty-three young and 53 older adults with normal hearing completed the closed-set Build-A-Sentence (BAS) Test and the CUNY Sentence Test in a total of eight conditions, four unimodal and four audiovisual. In the unimodal conditions, stimuli were either auditory or visual and either easier or harder to perceive; the audiovisual conditions were formed from all the combinations of the unimodal signals. The hard visual signals were created by degrading video contrast; the hard auditory signals were created by decreasing the signal-to-noise ratio. Scores from the unimodal and bimodal conditions were used to compute auditory enhancement and integration enhancement measures. Results Contrary to the PoIE, neither the auditory enhancement nor integration enhancement measures increased when signal clarity in the auditory or visual channel of audiovisual speech stimuli was decreased, nor was either measure higher for older adults than for young adults. 
In audiovisual conditions with easy visual stimuli, the integration enhancement measure for older adults was equivalent to that for young adults.

  17. Modulation of neural activity during observational learning of actions and their sequential orders.

    PubMed

    Frey, Scott H; Gerry, Valerie E

    2006-12-20

    How does the brain transform perceptual representations of others' actions into motor representations that can be used to guide behavior? Here we used functional magnetic resonance imaging to record human brain activity while subjects watched others construct multipart objects under varied task demands. We find that relative to resting baseline, passive action observation increases activity within inferior frontal and parietal cortices implicated in action encoding (mirror system) and throughout a distributed network of areas involved in motor representation, including dorsal premotor cortex, pre-supplementary motor area, cerebellum, and basal ganglia (experiments 1 and 2). Relative to passive observation, these same areas show increased activity when subjects observe with the intention to subsequently reproduce component actions using the demonstrated sequential procedures (experiment 1). Observing the same actions with the intention of reproducing component actions, but without the requirement to use the demonstrated sequential procedure, increases activity in the same regions, although to a lesser degree (experiment 2). These findings demonstrate that when attempting to learn behaviors through observation, the observers' intentions modulate responses in a widely distributed network of cortical and subcortical regions implicated previously in action encoding and/or motor representation. Among these regions, only activity within the right intraparietal sulcus predicts the accuracy with which observed procedures are subsequently performed. Successful formation of motor representations of sequential procedures through observational learning is dependent on computations implemented within this parietal region. PMID:17182769

  18. Audiovisual integration of emotional signals from others' social interactions.

    PubMed

    Piwek, Lukasz; Pollick, Frank; Petrini, Karin

    2015-01-01

    Audiovisual perception of emotions has been typically examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask if the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task, as in Experiment 1, while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased participants weighted more the visual cue in their emotional judgments. This in turn translated in increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity. PMID:26005430

  20. Musical expertise induces audiovisual integration of abstract congruency rules.

    PubMed

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Pantev, Christo

    2012-12-12

    Perception of everyday life events relies mostly on multisensory integration. Hence, studying the neural correlates of the integration of multiple senses constitutes an important tool in understanding perception within an ecologically valid framework. The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is not generated due to incongruency of the unisensory physical characteristics of the stimulation but from the violation of an abstract congruency rule. The chosen rule-"the higher the pitch of the tone, the higher the position of the circle"-was comparable to musical reading. In parallel, plasticity effects due to long-term musical training on this response were investigated by comparing musicians to non-musicians. The applied paradigm was based on an appropriate modification of the multifeatured oddball paradigm incorporating, within one run, deviants based on a multisensory audiovisual incongruent condition and two unisensory mismatch conditions: an auditory and a visual one. Results indicated the presence of an audiovisual incongruency response, generated mainly in frontal regions, an auditory mismatch negativity, and a visual mismatch response. Moreover, results revealed that long-term musical training generates plastic changes in frontal, temporal, and occipital areas that affect this multisensory incongruency response as well as the unisensory auditory and visual mismatch responses. PMID:23238733

  1. Temporal Adaptation to Audiovisual Asynchrony Generalizes Across Different Sound Frequencies

    PubMed Central

    Navarra, Jordi; García-Morera, Joel; Spence, Charles

    2012-01-01

    The human brain exhibits a highly adaptive ability to reduce natural asynchronies between visual and auditory signals. Even though this mechanism robustly modulates the subsequent perception of sounds and visual stimuli, it is still unclear how such a temporal realignment is attained. In the present study, we investigated whether or not temporal adaptation generalizes across different auditory frequencies. In a first exposure phase, participants adapted to a fixed 220-ms audiovisual asynchrony or else to synchrony for 3 min. In a second phase, the participants performed simultaneity judgments (SJs) regarding pairs of audiovisual stimuli that were presented at different stimulus onset asynchronies (SOAs) and included either the same tone as in the exposure phase (a 250 Hz beep), another low-pitched beep (300 Hz), or a high-pitched beep (2500 Hz). Temporal realignment was always observed (when comparing SJ performance after exposure to asynchrony vs. synchrony), regardless of the frequency of the sound tested. This suggests that temporal recalibration influences the audiovisual perception of sounds in a frequency non-specific manner and may imply the participation of non-primary perceptual areas of the brain that are not constrained by certain physical features such as sound frequency. PMID:22615705

  2. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    PubMed Central

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  3. Audiovisual integration of speech falters under high attention demands.

    PubMed

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands. PMID:15886102

  4. Audiovisual integration of speech in a patient with Broca's Aphasia

    PubMed Central

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  5. The development of the perception of audiovisual simultaneity.

    PubMed

    Chen, Yi-Chuan; Shore, David I; Lewis, Terri L; Maurer, Daphne

    2016-06-01

    We measured the typical developmental trajectory of the window of audiovisual simultaneity by testing four age groups of children (5, 7, 9, and 11 years) and adults. We presented a visual flash and an auditory noise burst at various stimulus onset asynchronies (SOAs) and asked participants to report whether the two stimuli were presented at the same time. Compared with adults, children aged 5 and 7 years made more simultaneous responses when the SOAs were beyond ± 200 ms but made fewer simultaneous responses at the 0 ms SOA. The point of subjective simultaneity was located at the visual-leading side, as in adults, by 5 years of age, the youngest age tested. However, the window of audiovisual simultaneity became narrower and response errors decreased with age, reaching adult levels by 9 years of age. Experiment 2 ruled out the possibility that the adult-like performance of 9-year-old children was caused by the testing of a wide range of SOAs. Together, the results demonstrate that the adult-like precision of perceiving audiovisual simultaneity is developed by 9 years of age, the youngest age that has been reported to date. PMID:26897264
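
    As an illustration of how such a window of audiovisual simultaneity can be summarized, the sketch below estimates a point of subjective simultaneity (PSS) and window width from simultaneity-judgment data. The estimator (a weighted mean and SD over SOA) and the example data are our own assumptions for illustration, not the analysis used in the study; here negative SOAs denote auditory-leading and positive SOAs visual-leading presentations.

```python
import math

def pss_and_width(soa_ms, p_simultaneous):
    """Weighted mean (PSS) and weighted SD (window width) of the
    proportion of 'simultaneous' responses across SOAs."""
    total = sum(p_simultaneous)
    pss = sum(s * p for s, p in zip(soa_ms, p_simultaneous)) / total
    var = sum(p * (s - pss) ** 2 for s, p in zip(soa_ms, p_simultaneous)) / total
    return pss, math.sqrt(var)

# Made-up example data: a window centred slightly on the visual-leading side,
# as reported for both children and adults.
soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
props = [0.05, 0.10, 0.30, 0.70, 0.95, 0.90, 0.55, 0.20, 0.05]
pss, width = pss_and_width(soas, props)
```

    A narrower `width` for older children and adults than for 5- and 7-year-olds would correspond to the developmental narrowing the abstract describes.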

  6. Audiovisual Delay as a Novel Cue to Visual Distance

    PubMed Central

    Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R.; Tadin, Duje

    2015-01-01

    For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance. PMID:26509795
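
    The physical regularity underlying this cue is simply the travel-time difference between sound and light. A minimal sketch (ours, not the study's), assuming the standard ~343 m/s speed of sound in air and treating light's travel time as negligible at these scales:

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate value in air at 20 °C

def audio_delay_ms(distance_m: float) -> float:
    """Expected lag of a sound behind its visual event at a given distance."""
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

if __name__ == "__main__":
    for d in (10.0, 34.3, 100.0, 343.0):
        print(f"{d:6.1f} m -> audio lags by {audio_delay_ms(d):7.1f} ms")
```

    Because the lag grows monotonically with distance, it can serve as an ordinal cue even when the absolute delay is not consciously detectable, consistent with the control experiment reported above.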

  8. Neural networks learn highly selective representations in order to overcome the superposition catastrophe.

    PubMed

    Bowers, Jeffrey S; Vankov, Ivan I; Damian, Markus F; Davis, Colin J

    2014-04-01

    A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the coactivation of multiple "things" (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the coactivation of nonselective codes often results in a blend pattern that is ambiguous; the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and the number of localist codes scales with the level of the superposition. Given that many cortical systems are required to coactivate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex. PMID:24564411

  9. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy.

    PubMed

    Fava, Eswen; Hull, Rachel; Bortfeld, Heather

    2014-01-01

    Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity. PMID:25116572

  10. Problem-Based Learning and Use of Higher-Order Thinking by Emergency Medical Technicians

    ERIC Educational Resources Information Center

    Rosenberger, Paul

    2013-01-01

    Emergency Medical Technicians (EMTs) often handle chaotic life-and-death situations that require higher-order thinking skills. Improving the pass rate of EMT students depends on many factors, including the use of proven and effective teaching methods. Results from recent research about effective teaching have suggested that the instructional…

  11. Promoting Positive Peer Interaction through Cooperative Learning, Community Building, Higher-Order Thinking and Conflict Management.

    ERIC Educational Resources Information Center

    Carlson, Kathryn R.

    Research shows that probable causes for disruptive classroom behavior are broken social bonds, violent environment, stress and conflict, and inadequate curriculum coupled with ineffective teaching methods. This report discusses a program to decrease negative peer interaction in order to improve academic achievement and interpersonal relationships.…

  12. Active Drumming Experience Increases Infants’ Sensitivity to Audiovisual Synchrony during Observed Drumming Actions

    PubMed Central

    Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition. PMID:26111226

  13. Simulation of Parkinsonian gait by fusing trunk learned patterns and a lower limb first order model

    NASA Astrophysics Data System (ADS)

    Cárdenas, Luisa; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Parkinson's disease is a neurodegenerative disorder that progressively affects movement. Gait analysis is therefore crucial to determine the disease stage as well as to orient the diagnosis. However, gait examination is completely subjective and therefore prone to errors or misinterpretations, even with great expertise. In addition, the conventional evaluation follows general gait variables, which amounts to ignoring subtle changes that can definitely modify the course of the treatment. This work presents a functional gait model that simulates the center of gravity (CoG) trajectory for different Parkinson's disease stages. This model mimics the gait trajectory by coupling two models: a double pendulum (single stance phase) and a spring-mass model (double stance). Realistic simulations for different Parkinson's disease stages are then obtained by integrating into the model a set of trunk bending patterns learned from real patients. The proposed model was compared with the CoG of real Parkinsonian gaits in stages 2, 3, and 4, achieving correlation coefficients of 0.88, 0.92, and 0.86, respectively.
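The validation step reported above (correlating a simulated CoG trajectory against a measured one) reduces to a Pearson correlation between two time series. A hedged sketch with synthetic stand-in trajectories, not the study's data:

```python
import numpy as np

def cog_correlation(simulated: np.ndarray, measured: np.ndarray) -> float:
    """Pearson correlation between a simulated and a measured CoG trajectory."""
    return float(np.corrcoef(simulated, measured)[0, 1])

# Toy trajectories: inverted-pendulum-style gait models predict a roughly
# sinusoidal CoG height over a stride; the "measurement" adds sensor noise.
t = np.linspace(0.0, 2.0, 200)                  # two gait cycles, seconds
simulated = 0.02 * np.sin(2 * np.pi * 2 * t)    # ~2 cm vertical excursion
rng = np.random.default_rng(0)
measured = simulated + rng.normal(0.0, 0.005, t.shape)

print(round(cog_correlation(simulated, measured), 2))
```

With this noise level the correlation lands near the 0.86-0.92 range the abstract reports for real patients; the figure here is only a property of the synthetic noise, not a reproduction of their result.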

  14. A Student-Oriented Individualized Learning Program for Calculus at the Community College

    ERIC Educational Resources Information Center

    Blough, David

    1978-01-01

    An instructional program package for individualized learning of calculus is outlined. The program utilizes audio-visual and other instructional techniques and includes topics in limits and continuity, the derivative with applications, and the integral with applications. (MN)

  15. Psychometric testing of the Pecka Grading Rubric for evaluating higher-order thinking in distance learning.

    PubMed

    Pecka, Shannon; Schmid, Kendra; Pozehl, Bunny

    2014-12-01

    This article describes development of the Pecka Grading Rubric (PGR) as a strategy to facilitate and evaluate students' higher-order thinking in discussion boards. The purpose of this study was to describe the psychometric properties of the PGR. Rubric reliability was pilot tested on a discussion board assignment used by 15 senior student registered nurse anesthetists enrolled in an Advanced Principles of Anesthesia course. Interrater and intrarater reliabilities were tested using an intraclass correlation coefficient (ICC) to evaluate absolute agreement of scoring. Raters gave each category a score, the category scores were summed, and a total score was calculated for the entire rubric. Interrater (ICC = 0.939, P < .001) and intrarater (ICC = 0.902 to 0.994, P < .001) reliabilities were excellent for total point scores. A content validity index was used to evaluate content validity. Raters evaluated the content validity of each cell of the PGR. The content validity index (0.8-1.0) was acceptable. Known-group validity was evaluated by comparing graduate student registered nurse anesthetists (N = 7) with undergraduate senior nursing students (N = 13). Beginning evidence indicates a valid and reliable instrument that measures higher-order thinking in the student registered nurse anesthetist. PMID:25842643
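The absolute-agreement reliability analysis described above is conventionally computed as the two-way random-effects, single-rater ICC(2,1). A minimal sketch of that formula with hypothetical rubric scores (the study's data are not reproduced here):

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).
    `scores` is an (n_subjects x k_raters) matrix of total rubric scores."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    # Mean squares from the two-way ANOVA decomposition.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))

# Hypothetical data: 5 discussion-board posts, each scored by 2 raters.
ratings = np.array([[10, 11], [14, 15], [8, 8], [12, 13], [16, 16]], float)
print(round(icc_2_1(ratings), 3))  # close agreement -> ICC near 1
```

Because ICC(2,1) penalizes systematic rater offsets (via the rater mean square), it measures absolute agreement rather than mere consistency, matching the study's stated criterion.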

  16. Informatics in radiology: evaluation of an e-learning platform for teaching medical students competency in ordering radiologic examinations.

    PubMed

    Marshall, Nina L; Spooner, Muirne; Galvin, P Leo; Ti, Joanna P; McElvaney, N Gerald; Lee, Michael J

    2011-01-01

    A preliminary audit of orders for computed tomography was performed to evaluate the typical performance of interns ordering radiologic examinations. According to the audit, the interns showed only minimal improvement after 8 months of work experience. The online radiology ordering module (ROM) program included baseline assessment of student performance (part I), online learning with the ROM (part II), and follow-up assessment of performance with simulated ordering with the ROM (part III). A curriculum blueprint determined the content of the ROM program, with an emphasis on practical issues, including provision of logistic information, clinical details, and safety-related information. Appropriate standards were developed by a committee of experts, and detailed scoring systems were devised for assessment. The ROM program was successful in addressing practical issues in a simulated setting. In the part I assessment, the mean score for noting contraindications for contrast media was 24%; this score increased to 59% in the part III assessment (P = .004). Similarly, notification of methicillin-resistant Staphylococcus aureus status and pregnancy status and provision of referring physician contact information improved significantly. The quality of the clinical notes was stable, with good initial scores. Part III testing showed overall improvement, with the mean score increasing from 61% to 76% (P < .0001). In general, medical students lack the core knowledge that is needed for good-quality ordering of radiology services, and the experience typically afforded to interns does not address this lack of knowledge. The ROM program was a successful intervention that resulted in statistically significant improvements in the quality of radiologic examination orders, particularly with regard to logistic and radiation safety issues. PMID:21775674

  17. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... copied as follows: (a) USIA audiovisual records prepared for dissemination abroad that NARA determines... audiovisual records prepared for dissemination abroad that NARA determines may have copyright protection or.... (c) If NARA determines that a USIA audiovisual record prepared for dissemination abroad may...

  18. A Citation Comparison of Sourcebooks for Audiovisuals to AVLINE Records: Access and the Chief Source of Information.

    ERIC Educational Resources Information Center

    Weimer, Katherine Hart

    1994-01-01

    Discusses cataloging audiovisual materials and the concept of chief source of information and describes a study that compared citations from fully cataloged audiovisual records with their corresponding citations from bibliographic sourcebooks, based on records in AVLINE (National Library of Medicine's Audiovisual On-Line Catalog). Examples of…

  19. An Analysis of Audiovisual Machines for Individual Program Presentation. Research Memorandum Number Two.

    ERIC Educational Resources Information Center

    Finn, James D.; Weintraub, Royd

    The purpose of the Medical Information Project (MIP), to select the right type of audiovisual equipment for communicating new medical information to general practitioners of medicine, was hampered by numerous difficulties. There is a lack of uniformity and standardization in audiovisual equipment that amounts to chaos. There is no evaluative literature on…

  20. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false How must agencies manage their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How...

  1. A Team Approach to Developing an Audiovisual Single-Concept Instructional Unit.

    ERIC Educational Resources Information Center

    Brooke, Martha L.; And Others

    1974-01-01

    In 1973, the National Medical Audiovisual Center undertook the production of several audiovisual teaching units, each addressing a single-concept, using a team approach. The production team on the unit "Left Ventricle Catheterization" were a physiologist acting as content specialist, an artist and film producer as production specialist, and an…

  2. THE PSYCHOLOGY OF THE USE OF AUDIO-VISUAL AIDS IN PRIMARY EDUCATION. MONOGRAPHS ON EDUCATION.

    ERIC Educational Resources Information Center

    MIALARET, G.

    THIS DOCUMENT IS INTENDED PRIMARILY FOR TEACHERS OF PSYCHOLOGY AND EDUCATION IN TEACHER TRAINING CENTERS, RESEARCHERS, AND EDUCATORS INTERESTED IN THE EFFECTIVE USE OF AUDIOVISUAL AIDS. NEW TYPES OF PUPIL AND TEACHER BEHAVIOR IN RESPONSE TO NEW AUDIOVISUAL TECHNIQUES ARE EXAMINED. ONLY TECHNIQUES CONSTANTLY AT THE DISPOSAL OF THE CLASSROOM TEACHER…

  3. Transfer from Audiovisual Pretraining to a Continuous Perceptual-Motor Task.

    ERIC Educational Resources Information Center

    Wood, Milton E.; Gerlach, Vernon S.

    A study was devised to develop a method for describing a continuous, complex perceptual-motor task in discrete categories by which subjects could be pretrained through the use of static, programed, audiovisual techniques; to construct an audiovisual training device to provide realistic, programed practice in the stimulus-response events selected…

  4. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    ERIC Educational Resources Information Center

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  5. Women's History in Visual and Audiovisual Education, Where and How To Find it.

    ERIC Educational Resources Information Center

    Butler, Rebecca P.

    This paper briefly describes the author's dissertation research covering the history of women as visual and audiovisual educators (1920-1957), outlining her historical methodology and tracing sources for such research. The methodology used was a discourse analysis of selected audiovisual textbooks and audiotapes of founders in the audiovisual…

  6. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  7. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    ERIC Educational Resources Information Center

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  8. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  9. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false How must agencies manage... RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related records? Each Federal agency must manage its audiovisual, cartographic and related records as required...

  10. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false How must agencies manage... RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related records? Each Federal agency must manage its audiovisual, cartographic and related records as required...

  11. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false How must agencies manage... RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related records? Each Federal agency must manage its audiovisual, cartographic and related records as required...

  12. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true How must agencies manage their... RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related records? Each Federal agency must manage its audiovisual, cartographic and related records as required in...

  13. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    PubMed

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration. PMID:25848682

  14. Audio-Visual Education in Primary Schools: A Curriculum Project in the Netherlands.

    ERIC Educational Resources Information Center

    Ketzer, Jan W.

    1988-01-01

    A media education curriculum developed in the Netherlands is designed to increase the media literacy of children aged 4-12 years by helping them to acquire information and insights into the meaning of mass media; teaching them to produce and use audiovisual materials as a method of expression; and using audiovisual equipment in the classroom. (LRW)

  15. Audiovisual Materials in Archives--A General Picture of Their Role and Function.

    ERIC Educational Resources Information Center

    Booms, Hans

    Delivered on behalf of the International Council of Archives (ICA), this paper briefly discusses the challenge inherent in the processing and preservation of audiovisual materials, the types of media included in the term audiovisual, the concerns of professional archivists, the development and services of archival institutions, the utilization of…

  16. Planning Schools for Use of Audio-Visual Materials. No. 1--Classrooms, 3rd Edition.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

    Intended to inform school board administrators and teachers of the current (1958) thinking on audio-visual instruction for use in planning new buildings, purchasing equipment, and planning instruction. Attention is given the problem of overcoming obstacles to the incorporation of audio-visual materials into the curriculum. Discussion includes--(1)…

  17. What Makes the Difference? Teachers Explore What Must be Taught and What Must be Learned in Order to Understand the Particulate Character of Matter

    NASA Astrophysics Data System (ADS)

    Vikström, Anna

    2014-09-01

    The concept of matter, especially its particulate nature, is acknowledged as one of the key concept areas in learning science. Within the framework of learning studies and variation theory, and with results from science education research as a starting point, six lower secondary school science teachers tried to enhance students' learning by exploring what must be learnt in order to understand the concept in a specific way. It was found that variation theory was a useful guiding principle when teachers are engaged in pedagogical design, analysis of lessons, and evaluation of students' learning, as well as a valuable tool for adapting research results into practice.

  19. The level of audiovisual print-speech integration deficits in dyslexia.

    PubMed

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli, were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters.

  20. Seeing and hearing rotated faces: influences of facial orientation on visual and audiovisual speech recognition.

    PubMed

    Jordan, T R; Bevan, K

    1997-04-01

    It is well-known that facial orientation affects the processing of static facial information, but similar effects on the processing of visual speech have yet to be explored fully. Three experiments are reported in which the effects of facial orientation on visual speech processing were examined using a talking face presented at 8 orientations through 360 degrees. Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /ma/, /mi/, /ta/, and /ti/ were used to produce the following speech stimulus types: auditory, visual, congruent audiovisual, and incongruent audiovisual. Facial orientation did not affect identification of visual speech per se or the near-perfect accuracy of auditory speech report with congruent audiovisual speech stimuli. However, facial orientation did affect the accuracy of auditory speech report with incongruent audiovisual speech stimuli. Moreover, the nature of this effect depended on the type of incongruent visual speech used. Implications for the processing of visual and audiovisual speech are discussed. PMID:9104001

  1. Musical expertise is related to altered functional connectivity during audiovisual integration

    PubMed Central

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo

    2015-01-01

    The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual clues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305

  2. Emergent Patterns of Teaching/Learning in Electronic Classrooms.

    ERIC Educational Resources Information Center

    Shneiderman, Ben; Borkowski, Ellen Yu; Alavi, Maryam; Norman, Kent

    1998-01-01

    Describes the development and use of electronic classrooms at the University of Maryland College Park. Highlights include active individual learning; small group collaborative learning; class collaborative learning; classroom infrastructure; audio-visual support; courseware; empirical assessments; results of faculty surveys; and student feedback.…

  3. Development of sensitivity to audiovisual temporal asynchrony during midchildhood.

    PubMed

    Kaganovich, Natalya

    2016-02-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7- to 8-year-olds, 10- to 11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether nonverbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2-kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs): 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition), and in the other half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of reaction time (RT) at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10- to 11-year-olds outperforming 7- to 8-year-olds at the 300- to 500-ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function-such as autism, specific language impairment, and dyslexia-may be compared. PMID:26569563

  4. Visual Mislocalization of Moving Objects in an Audiovisual Event

    PubMed Central

    Kawachi, Yousuke

    2016-01-01

    The present study investigated the influence of an auditory tone on the localization of visual objects in the stream/bounce display (SBD). In this display, two identical visual objects move toward each other, overlap, and then return to their original positions. These objects can be perceived as either streaming through or bouncing off each other. In this study, the closest distance between object centers on opposing trajectories and tone presentation timing (none, 0 ms, ± 90 ms, and ± 390 ms relative to the instant for the closest distance) were manipulated. Observers were asked to judge whether the two objects overlapped with each other and whether the objects appeared to stream through, bounce off each other, or reverse their direction of motion. A tone presented at or around the instant of the objects’ closest distance biased judgments toward “non-overlapping,” and observers overestimated the physical distance between objects. A similar bias toward direction change judgments (bounce and reverse, not stream judgments) was also observed, which was always stronger than the non-overlapping bias. Thus, these two types of judgments were not always identical. Moreover, another experiment showed that it was unlikely that this observed mislocalization could be explained by other previously known mislocalization phenomena (i.e., representational momentum, the Fröhlich effect, and a turn-point shift). These findings indicate a new example of crossmodal mislocalization, which can be obtained without temporal offsets between audiovisual stimuli. The mislocalization effect is also specific to a more complex stimulus configuration of objects on opposing trajectories, with a tone that is presented simultaneously. The present study promotes an understanding of relatively complex audiovisual interactions beyond simple one-to-one audiovisual stimuli used in previous studies. PMID:27111759

  5. Heart House: Where Doctors Learn

    ERIC Educational Resources Information Center

    American School and University, 1978

    1978-01-01

    The new learning center and administrative headquarters of the American College of Cardiology in Bethesda, Maryland, contain a unique classroom equipped with the highly sophisticated audiovisual aids developed to teach the latest techniques in the diagnosis and treatment of heart disease. (Author/MLF)

  6. Mobile Guide System Using Problem-Solving Strategy for Museum Learning: A Sequential Learning Behavioural Pattern Analysis

    ERIC Educational Resources Information Center

    Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.

    2010-01-01

    Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…

  7. Sources of Confusion in Infant Audiovisual Speech Perception Research

    PubMed Central

    Shaw, Kathleen E.; Bortfeld, Heather

    2015-01-01

    Speech is a multimodal stimulus, with information provided in both the auditory and visual modalities. The resulting audiovisual signal provides relatively stable, tightly correlated cues that support speech perception and processing in a range of contexts. Despite the clear relationship between spoken language and the moving mouth that produces it, there remains considerable disagreement over how sensitive early language learners—infants—are to whether and how sight and sound co-occur. Here we examine sources of this disagreement, with a focus on how comparisons of data obtained using different paradigms and different stimuli may serve to exacerbate misunderstanding. PMID:26696919

  8. Faculty attitudes toward the use of audiovisuals in continuing education.

    PubMed

    Schindler, M K; Port, J

    1980-11-01

    A study was undertaken in planning for a project involving library support for formal continuing education programs. A questionnaire survey assessed faculty attitudes toward continuing education activities, self-instructional AV programs for continuing education, and self-instructional AV programs for undergraduate medical education. Actual use of AV programs in both undergraduate and postgraduate classroom teaching was also investigated. The results indicated generally positive attitudes regarding a high level of classroom use of AV programs, but little assignment of audiovisuals for self-instruction. PMID:6162840

  9. "Singing in the Tube"--audiovisual assay of plant oil repellent activity against mosquitoes (Culex pipiens).

    PubMed

    Adams, Temitope F; Wongchai, Chatchawal; Chaidee, Anchalee; Pfeiffer, Wolfgang

    2016-01-01

    Plant essential oils have been suggested as a promising alternative to the established mosquito repellent DEET (N,N-diethyl-meta-toluamide). Searching for an assay with generally available equipment, we designed a new audiovisual assay of repellent activity against mosquitoes "Singing in the Tube," testing single mosquitoes in Drosophila cultivation tubes. Statistics with regression analysis should compensate for limitations of simple hardware. The assay was established with female Culex pipiens mosquitoes in 60 experiments, 120-h audio recording, and 2580 estimations of the distance between mosquito sitting position and the chemical. Correlations between parameters of sitting position, flight activity pattern, and flight tone spectrum were analyzed. Regression analysis of psycho-acoustic data of audio files (dB[A]) used a squared and modified sinus function determining wing beat frequency WBF ± SD (357 ± 47 Hz). Application of logistic regression defined the repelling velocity constant. The repelling velocity constant showed a decreasing order of efficiency of plant essential oils: rosemary (Rosmarinus officinalis), eucalyptus (Eucalyptus globulus), lavender (Lavandula angustifolia), citronella (Cymbopogon nardus), tea tree (Melaleuca alternifolia), clove (Syzygium aromaticum), lemon (Citrus limon), patchouli (Pogostemon cablin), DEET, cedar wood (Cedrus atlantica). In conclusion, we suggest (1) disease vector control (e.g., impregnation of bed nets) by eight plant essential oils with repelling velocity superior to DEET, (2) simple mosquito repellency testing in Drosophila cultivation tubes, (3) automated approaches and room surveillance by generally available audio equipment (dB[A]: ISO standard 226), and (4) quantification of repellent activity by parameters of the audiovisual assay defined by correlation and regression analyses. PMID:26412058
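The paper extracts the wing beat frequency (WBF, 357 ± 47 Hz) from psycho-acoustic dB(A) recordings by regression on a squared, modified sinus function. As a rough stand-in using generally available tools, the dominant flight-tone frequency can also be read off an FFT of the recording; the synthetic signal below is an invented illustration, not the paper's data or its regression method:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the strongest spectral component."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Synthetic 2-second "flight tone" at 357 Hz with additive noise.
rate = 8000
t = np.arange(0, 2.0, 1.0 / rate)
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 357 * t) + 0.3 * rng.standard_normal(t.size)

wbf = dominant_frequency(tone, rate)
print(wbf)  # close to 357 Hz
```

A single FFT peak ignores the harmonics and amplitude modulation that the paper's regression models, but it suffices to localize a sitting-versus-flying mosquito's tone in a quiet tube recording.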

  10. GLOOTT Model: A Pedagogically-Enriched Design Framework of Learning Environment to Improve Higher Order Thinking Skills

    ERIC Educational Resources Information Center

    Tan, Wee Chuen; Aris, Baharuddin; Abu, Mohd Salleh

    2006-01-01

    Learning object design currently leads the instructional technologist towards more effective instructional design, development, and delivery of learning content. There is a considerable amount of literature discussing the potential use of learning object in e-learning. However, most of the works were mainly focused on the standard forms of…

  11. Employing Transformative Learning Theory in the Design and Implementation of a Curriculum for Court-Ordered Participants in a Parent Education Class

    ERIC Educational Resources Information Center

    Taylor, Mariann B.; Hill, Lilian H.

    2016-01-01

    This study sought to analyze the experiences of participants in court-ordered parent education with the ultimate goal to identify a framework, which promotes learning that is transformative. Participants included 11 parents court ordered to attend parent education classes through the Department of Human Services. A basic qualitative design, which…

  12. Using resampling to assess reliability of audio-visual survey strategies for marbled murrelets at inland forest sites

    USGS Publications Warehouse

    Jodice, Patrick G.; Garman, S.L.; Collopy, M.W.

    2001-01-01

Marbled Murrelets (Brachyramphus marmoratus) are threatened seabirds that nest in coastal old-growth coniferous forests throughout much of their breeding range. Currently, observer-based audio-visual surveys are conducted at inland forest sites during the breeding season primarily to determine nesting distribution and breeding status and are being used to estimate temporal or spatial trends in murrelet detections. Our goal was to assess the feasibility of using audio-visual survey data for such monitoring. We used an intensive field-based survey effort to record daily murrelet detections at seven survey stations in the Oregon Coast Range. We then used computer-aided resampling techniques to assess the effectiveness of twelve survey strategies with varying scheduling and a sampling intensity of 4-14 surveys per breeding season to estimate known means and SDs of murrelet detections. Most survey strategies we tested failed to provide estimates of detection means and SDs that were within ±20% of actual means and SDs. Daily detections were, however, frequently estimated to within ±50% of field data with sampling efforts of 14 days/breeding season. Additional resampling analyses with statistically generated detection data indicated that the temporal variability in detection data had a great effect on the reliability of the mean and SD estimates calculated from the twelve survey strategies, while the value of the mean had little effect. Effectiveness at estimating multi-year trends in detection data was similarly poor, indicating that audio-visual surveys might be reliably used to estimate only annual declines in murrelet detections on the order of 50% per year.
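The resampling logic described above (repeatedly drawing a limited number of survey days from a full season of daily detection counts and asking how often the subsample mean lands within ±20% of the known season mean) can be sketched as follows; the detection counts are simulated, not the Oregon field data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated 90-day breeding season of daily murrelet detection counts;
# negative-binomial counts mimic the high temporal variability of the
# field data (these are not the Oregon counts).
season = rng.negative_binomial(n=2, p=0.1, size=90)

def reliability(season, surveys_per_season, tolerance=0.20, n_resamples=1000):
    """Fraction of resampled survey schedules whose mean detection count
    falls within +/- tolerance of the known full-season mean."""
    true_mean = season.mean()
    hits = 0
    for _ in range(n_resamples):
        sample = rng.choice(season, size=surveys_per_season, replace=False)
        if abs(sample.mean() - true_mean) <= tolerance * true_mean:
            hits += 1
    return hits / n_resamples

results = {k: reliability(season, k) for k in (4, 8, 14)}
print(results)  # reliability grows with surveys per season
```

With highly variable daily counts, even 14 surveys per season leave a sizable chance of missing the true mean by more than 20%, which is the paper's central caution about trend monitoring from audio-visual surveys.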

  13. Head Tracking of Auditory, Visual, and Audio-Visual Targets.

    PubMed

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2015-01-01

The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets. PMID:26778952

  14. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts have shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener a change in talker has occurred. PMID:25076919

  15. Head Tracking of Auditory, Visual, and Audio-Visual Targets

    PubMed Central

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2016-01-01

The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual “bisensory” stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets. PMID:26778952

  16. Performance and competence models for audiovisual data fusion

    NASA Astrophysics Data System (ADS)

    Kabre, Harouna

    1995-09-01

We describe two Artificial Neural Network (ANN) models for audio-visual data fusion. For the first model, we start an ANN training with an a priori chosen static architecture together with a set of weighting parameters for the visual and for the auditory paths. Those weighting parameters, called attentional parameters, are tuned to achieve best performance even if the acoustic environment changes. This model is called the Performance Model (PM). For the second model, we start without any unit in the hidden layer of the ANN. Then we incrementally add new units which are partially connected to either the visual path or to the auditory one, and we reiterate this procedure until the global error cannot be reduced anymore. This model is called the Competence Model (CM). CM and PM are trained and tested with acoustic data and their corresponding visual parameters (defined as the vertical and the horizontal lip widths and as the lip-opening area parameters) for the audio-visual speech recognition of the 10 French vowels in adverse conditions. In both cases, we note the recognition rate and analyze the complementarity between the visual and the auditory information in terms of the number of hidden units connected either to the visual or to the auditory inputs versus signal-to-noise ratio (SNR), and in terms of the tuning of the attentional parameters versus SNR.
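The Performance Model's attentional weighting amounts to scaling the auditory path against the visual one before fusion, and retuning that weight as the acoustic SNR degrades. A toy sketch of that idea follows; the two scalar features, the sign classifier, and the noise levels are all assumptions for illustration, not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, audio_noise):
    """Two-class toy data with one auditory and one visual feature per item.
    Auditory reliability degrades with acoustic noise; visual does not."""
    labels = rng.integers(0, 2, size=n) * 2 - 1          # -1 or +1
    audio = labels + audio_noise * rng.standard_normal(n)
    visual = labels + 0.5 * rng.standard_normal(n)
    return audio, visual, labels

def accuracy(alpha, audio, visual, labels):
    """Classify by the sign of the attentionally weighted fusion."""
    fused = alpha * audio + (1.0 - alpha) * visual
    return np.mean(np.sign(fused) == labels)

best_alpha = {}
for audio_noise in (0.3, 3.0):          # clean vs. adverse acoustics
    audio, visual, labels = make_data(5000, audio_noise)
    grid = np.linspace(0.0, 1.0, 21)
    best_alpha[audio_noise] = max(
        grid, key=lambda a: accuracy(a, audio, visual, labels))
print(best_alpha)  # attention shifts away from audition as noise grows
```

The tuned weight moving toward the visual path at low SNR mirrors the complementarity the abstract describes, with a grid search standing in for the gradient-based tuning an ANN would use.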

  17. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    PubMed

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events. PMID:24595014

  18. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin’Ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  19. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis

    PubMed Central

    Altieri, Nicholas; Wenger, Michael J.

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of −12 dB, and S/N ratio of −18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity. PMID:24058358
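The capacity coefficient of Townsend and Nozawa (1995) compares integrated hazard functions of the response-time distributions: C(t) = H_AV(t) / (H_A(t) + H_V(t)), with H(t) = -log S(t) where S is the survivor function; C(t) > 1 indicates efficient (super-capacity) integration. A minimal empirical estimate on simulated Gaussian RTs (the distributions below are invented for illustration, not the study's data):

```python
import numpy as np

def integrated_hazard(rts, t):
    """H(t) = -log S(t), with S the empirical survivor function of the RTs."""
    survivor = np.mean(rts > t)
    return -np.log(survivor)  # valid only while S(t) > 0

def capacity(av_rts, a_rts, v_rts, t):
    """Capacity coefficient C(t); values above 1 indicate efficient integration."""
    return integrated_hazard(av_rts, t) / (
        integrated_hazard(a_rts, t) + integrated_hazard(v_rts, t))

rng = np.random.default_rng(7)
n = 2000
a_rts = rng.normal(600, 80, n)   # auditory-only RTs in ms (invented)
v_rts = rng.normal(620, 80, n)   # visual-only RTs
av_rts = rng.normal(500, 80, n)  # audiovisual RTs, faster than either

print(capacity(av_rts, a_rts, v_rts, 500.0))  # > 1: efficient integration
```

Because the audiovisual RTs here are much faster than either unisensory distribution, C(t) exceeds 1, the pattern the study reports for low auditory S/N ratios.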

  20. Temporal Processing of Audiovisual Stimuli Is Enhanced in Musicians: Evidence from Magnetoencephalography (MEG)

    PubMed Central

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events. PMID:24595014

  1. Separation of Audio-Visual Speech Sources: A New Approach Exploiting the Audio-Visual Coherence of Speech Stimuli

    NASA Astrophysics Data System (ADS)

    Sodoyer, David; Schwartz, Jean-Luc; Girin, Laurent; Klinkisch, Jacob; Jutten, Christian

    2002-12-01

We present a new approach to the source separation problem in the case of multiple speech signals. The method is based on the use of automatic lipreading: the objective is to extract an acoustic speech signal from other acoustic signals by exploiting its coherence with the speaker's lip movements. We consider the case of an additive stationary mixture of decorrelated sources, with no further assumptions on independence or non-Gaussian character. Firstly, we present a theoretical framework showing that it is indeed possible to separate a source when some of its spectral characteristics are provided to the system. Then we address the case of audio-visual sources. We show how, if a statistical model of the joint probability of visual and spectral audio input is learnt to quantify the audio-visual coherence, separation can be achieved by maximizing this probability. Finally, we present a number of separation results on a corpus of vowel-plosive-vowel sequences uttered by a single speaker, embedded in a mixture of other voices. We show that separation can be quite good for mixtures of 2, 3, and 5 sources. These results, while very preliminary, are encouraging, and are discussed with respect to their potential complementarity with traditional pure audio separation or enhancement techniques.
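A heavily simplified sketch of the idea: for a two-microphone stationary mixture, search for the unmixing coefficient whose extracted signal coheres best with the visual lip parameter. Here audio-visual coherence is proxied by plain correlation between the extracted amplitude envelope and a synthetic lip signal, not by the paper's learned joint probability model, and all signals and mixing coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40000

# Target voice: noise carrier under a slow amplitude envelope; the visual
# "lip" parameter is assumed to cohere with that envelope.  A competing
# voice has its own, independent envelope.
env1 = 0.1 + np.sin(np.linspace(0, 8 * np.pi, n)) ** 2
env2 = 0.1 + np.cos(np.linspace(0, 10 * np.pi, n)) ** 2
s1 = env1 * rng.standard_normal(n)
s2 = env2 * rng.standard_normal(n)
lip = env1 + 0.05 * rng.standard_normal(n)  # lip-opening parameter

# Additive stationary two-source mixture with assumed coefficients.
x1 = 1.0 * s1 + 0.8 * s2
x2 = 0.5 * s1 + 1.0 * s2

def envelope(a, w=200):
    """Crude amplitude envelope: moving average of the rectified signal."""
    return np.convolve(np.abs(a), np.ones(w) / w, mode="same")

def av_coherence(c):
    """Score an unmixing coefficient by correlation between the extracted
    signal's envelope and the lip signal (a stand-in for the learned
    audio-visual joint probability of the paper)."""
    return np.corrcoef(envelope(x1 - c * x2), lip)[0, 1]

grid = np.linspace(0.0, 1.5, 31)
best_c = max(grid, key=av_coherence)
print(best_c)  # near 0.8, the coefficient cancelling the competing voice
```

Since x1 - c*x2 = (1 - 0.5c)*s1 + (0.8 - c)*s2, the competing voice vanishes at c = 0.8, which is where the audio-visual coherence score peaks.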

  2. Neural substrate for higher-order learning in an insect: Mushroom bodies are necessary for configural discriminations.

    PubMed

    Devaud, Jean-Marc; Papouin, Thomas; Carcaud, Julie; Sandoz, Jean-Christophe; Grünewald, Bernd; Giurfa, Martin

    2015-10-27

    Learning theories distinguish elemental from configural learning based on their different complexity. Although the former relies on simple and unambiguous links between the learned events, the latter deals with ambiguous discriminations in which conjunctive representations of events are learned as being different from their elements. In mammals, configural learning is mediated by brain areas that are either dispensable or partially involved in elemental learning. We studied whether the insect brain follows the same principles and addressed this question in the honey bee, the only insect in which configural learning has been demonstrated. We used a combination of conditioning protocols, disruption of neural activity, and optophysiological recording of olfactory circuits in the bee brain to determine whether mushroom bodies (MBs), brain structures that are essential for memory storage and retrieval, are equally necessary for configural and elemental olfactory learning. We show that bees with anesthetized MBs distinguish odors and learn elemental olfactory discriminations but not configural ones, such as positive and negative patterning. Inhibition of GABAergic signaling in the MB calyces, but not in the lobes, impairs patterning discrimination, thus suggesting a requirement of GABAergic feedback neurons from the lobes to the calyces for nonelemental learning. These results uncover a previously unidentified role for MBs besides memory storage and retrieval: namely, their implication in the acquisition of ambiguous discrimination problems. Thus, in insects as in mammals, specific brain regions are recruited when the ambiguity of learning tasks increases, a fact that reveals similarities in the neural processes underlying the elucidation of ambiguous tasks across species. PMID:26460021

  3. Neural substrate for higher-order learning in an insect: Mushroom bodies are necessary for configural discriminations

    PubMed Central

    Devaud, Jean-Marc; Papouin, Thomas; Carcaud, Julie; Sandoz, Jean-Christophe; Grünewald, Bernd; Giurfa, Martin

    2015-01-01

    Learning theories distinguish elemental from configural learning based on their different complexity. Although the former relies on simple and unambiguous links between the learned events, the latter deals with ambiguous discriminations in which conjunctive representations of events are learned as being different from their elements. In mammals, configural learning is mediated by brain areas that are either dispensable or partially involved in elemental learning. We studied whether the insect brain follows the same principles and addressed this question in the honey bee, the only insect in which configural learning has been demonstrated. We used a combination of conditioning protocols, disruption of neural activity, and optophysiological recording of olfactory circuits in the bee brain to determine whether mushroom bodies (MBs), brain structures that are essential for memory storage and retrieval, are equally necessary for configural and elemental olfactory learning. We show that bees with anesthetized MBs distinguish odors and learn elemental olfactory discriminations but not configural ones, such as positive and negative patterning. Inhibition of GABAergic signaling in the MB calyces, but not in the lobes, impairs patterning discrimination, thus suggesting a requirement of GABAergic feedback neurons from the lobes to the calyces for nonelemental learning. These results uncover a previously unidentified role for MBs besides memory storage and retrieval: namely, their implication in the acquisition of ambiguous discrimination problems. Thus, in insects as in mammals, specific brain regions are recruited when the ambiguity of learning tasks increases, a fact that reveals similarities in the neural processes underlying the elucidation of ambiguous tasks across species. PMID:26460021

  4. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation.

    PubMed

    Lusk, Laina G; Mitchel, Aaron D

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  5. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  6. Mismatch Negativity with Visual-only and Audiovisual Speech

    PubMed Central

    Ponton, Curtis W.; Bernstein, Lynne E.; Auer, Edward T.

    2009-01-01

    The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82-87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing. PMID:19404730

  7. Increasing observer objectivity with audio-visual technology: the Sphygmocorder.

    PubMed

    Atkins; O'Brien; Wesseling; Guelen

    1997-10-01

    The most fallible component of blood pressure measurement is the human observer. The traditional technique of measuring blood pressure does not allow the result of the measurement to be checked by independent observers, thereby leaving the method open to bias. In the Sphygmocorder, several components used to measure blood pressure have been combined innovatively with audio-visual recording technology to produce a system consisting of a mercury sphygmomanometer, an occluding cuff, an automatic inflation-deflation source, a stethoscope, a microphone capable of detecting Korotkoff sounds, a camcorder and a display screen. The accuracy of the Sphygmocorder against the trained human observer has been confirmed previously using the protocol of the British Hypertension Society and in this article the updated system incorporating a number of innovations is described. PMID:10234128

  8. Effects of audio-visual stimulation on the incidence of restraint ulcers on the Wistar rat

    NASA Technical Reports Server (NTRS)

    Martin, M. S.; Martin, F.; Lambert, R.

    1979-01-01

The role of sensory stimulation in restrained rats was investigated. Both mixed audio-visual and pure sound stimuli, ineffective in themselves, were found to cause a significant increase in the incidence of restraint ulcers in the Wistar rat.

  9. The development of sensorimotor influences in the audiovisual speech domain: some critical questions

    PubMed Central

    Guellaï, Bahia; Streri, Arlette; Yeung, H. Henny

    2014-01-01

    Speech researchers have long been interested in how auditory and visual speech signals are integrated, and the recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood. PMID:25147528

  10. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Copyrights COPYRIGHT OFFICE, LIBRARY OF CONGRESS COPYRIGHT OFFICE AND PROCEDURES PREREGISTRATION AND... and copies of unpublished audio and audiovisual transmission programs by the Library of Congress under... transmission programs. (1) Library of Congress employees, including Library of Congress contractors,...

  11. Audiovisual emotional processing and neurocognitive functioning in patients with depression.

    PubMed

    Doose-Grünefeld, Sophie; Eickhoff, Simon B; Müller, Veronika I

    2015-01-01

Alterations in the processing of emotional stimuli (e.g., facial expressions, prosody, music) have repeatedly been reported in patients with major depression. Such impairments may result from the likewise prevalent executive deficits in these patients. However, studies investigating this relationship are rare. Moreover, most studies to date have only assessed impairments in unimodal emotional processing, whereas in real life, emotions are primarily conveyed through more than just one sensory channel. The current study therefore aimed to investigate multi-modal emotional processing in patients with depression and to assess the relationship between emotional and neurocognitive impairments. Forty-one patients suffering from major depression and 41 never-depressed healthy controls participated in an audiovisual (faces-sounds) emotional integration paradigm as well as a neurocognitive test battery. Our results showed that depressed patients were specifically impaired in the processing of positive auditory stimuli, as they rated faces significantly more fearful when presented with happy than with neutral sounds. Such an effect was absent in controls. Findings in emotional processing in patients did not correlate with Beck's Depression Inventory score. Furthermore, neurocognitive findings revealed significant group differences for two of the tests. The effects found in audiovisual emotional processing, however, did not correlate with performance in the neurocognitive tests. In summary, our results underline the diversity of impairments accompanying depression and indicate that deficits found for unimodal emotional processing cannot trivially be generalized to deficits in a multi-modal setting. The mechanisms of impairment therefore might be far more complex than previously thought. Our findings furthermore contradict the assumption that emotional processing deficits in major depression are associated with impaired attention or inhibitory functioning. PMID:25688188

  12. Putative mechanisms mediating tolerance for audiovisual stimulus onset asynchrony.

    PubMed

    Bhat, Jyoti; Miller, Lee M; Pitt, Mark A; Shahin, Antoine J

    2015-03-01

    Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance. PMID:25505102

  13. Audiovisual emotional processing and neurocognitive functioning in patients with depression

    PubMed Central

    Doose-Grünefeld, Sophie; Eickhoff, Simon B.; Müller, Veronika I.

    2015-01-01

Alterations in the processing of emotional stimuli (e.g., facial expressions, prosody, music) have repeatedly been reported in patients with major depression. Such impairments may result from the likewise prevalent executive deficits in these patients. However, studies investigating this relationship are rare. Moreover, most studies to date have only assessed impairments in unimodal emotional processing, whereas in real life, emotions are primarily conveyed through more than just one sensory channel. The current study therefore aimed to investigate multi-modal emotional processing in patients with depression and to assess the relationship between emotional and neurocognitive impairments. Forty-one patients suffering from major depression and 41 never-depressed healthy controls participated in an audiovisual (faces-sounds) emotional integration paradigm as well as a neurocognitive test battery. Our results showed that depressed patients were specifically impaired in the processing of positive auditory stimuli, as they rated faces significantly more fearful when presented with happy than with neutral sounds. Such an effect was absent in controls. Findings in emotional processing in patients did not correlate with Beck's Depression Inventory score. Furthermore, neurocognitive findings revealed significant group differences for two of the tests. The effects found in audiovisual emotional processing, however, did not correlate with performance in the neurocognitive tests. In summary, our results underline the diversity of impairments accompanying depression and indicate that deficits found for unimodal emotional processing cannot trivially be generalized to deficits in a multi-modal setting. The mechanisms of impairment therefore might be far more complex than previously thought. Our findings furthermore contradict the assumption that emotional processing deficits in major depression are associated with impaired attention or inhibitory functioning. PMID

  14. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics

    PubMed Central

    Carver, Frederick W.; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-01-01

In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships. PMID:25599264
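The amplitude-based versus phase-based distinction drawn in this abstract can be made concrete with two toy metrics computed from epoched two-channel data: spectral coherence (phase-sensitive) and the correlation of band power across epochs (amplitude-based). The following is a minimal NumPy sketch under simplifying assumptions (rectangular-window FFT, no tapering or source modeling); it is not the authors' pipeline, and the function names are hypothetical.

```python
import numpy as np

def band_coherence(x_epochs, y_epochs, band, fs):
    """Phase-sensitive coherence between two channels, averaged over a band.

    x_epochs, y_epochs: arrays of shape (n_epochs, n_samples).
    band: (low_hz, high_hz); fs: sampling rate in Hz.
    """
    freqs = np.fft.rfftfreq(x_epochs.shape[1], d=1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    X = np.fft.rfft(x_epochs, axis=1)[:, sel]
    Y = np.fft.rfft(y_epochs, axis=1)[:, sel]
    sxy = np.mean(X * np.conj(Y), axis=0)      # cross-spectrum averaged over epochs
    sxx = np.mean(np.abs(X) ** 2, axis=0)      # auto-spectra
    syy = np.mean(np.abs(Y) ** 2, axis=0)
    return float(np.mean(np.abs(sxy) / np.sqrt(sxx * syy)))

def band_power_correlation(x_epochs, y_epochs, band, fs):
    """Amplitude-based metric: Pearson correlation of band power across epochs."""
    freqs = np.fft.rfftfreq(x_epochs.shape[1], d=1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    px = np.sum(np.abs(np.fft.rfft(x_epochs, axis=1)[:, sel]) ** 2, axis=1)
    py = np.sum(np.abs(np.fft.rfft(y_epochs, axis=1)[:, sel]) ** 2, axis=1)
    return float(np.corrcoef(px, py)[0, 1])
```

Because coherence retains cross-spectral phase while the power correlation discards it, the two metrics can dissociate, which is the kind of difference the seven-metric comparison probes.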

  15. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. PMID:23978654

  16. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    PubMed

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships. PMID:25599264

  17. Improving Individualized Educational Program (IEP) Mathematics Learning Goals for Conceptual Understanding of Order and Equivalence of Fractions

    ERIC Educational Resources Information Center

    Scanlon, Regina M.

    2013-01-01

    The purpose of this Executive Position Paper project was to develop resources for improving Individual Educational Program (IEP) mathematics learning goals for conceptual understanding of fractions for middle school special education students. The investigation surveyed how IEP mathematics learning goals are currently determined and proposed a new…

  18. Implicit Sequence Learning in Dyslexia: A Within-Sequence Comparison of First- and Higher-Order Information

    ERIC Educational Resources Information Center

    Du, Wenchong; Kelly, Steve W.

    2013-01-01

    The present study examines implicit sequence learning in adult dyslexics with a focus on comparing sequence transitions with different statistical complexities. Learning of a 12-item deterministic sequence was assessed in 12 dyslexic and 12 non-dyslexic university students. Both groups showed equivalent standard reaction time increments when the…

  19. Presentation Factors in the Learning of Chinese Characters: The Order and Position of Hanyu Pinyin and English Translations

    ERIC Educational Resources Information Center

    Chung, Kevin K. H.

    2007-01-01

    The influence of different instructional presentations upon meaning and pronunciation acquisition in character learning was examined. High school students learned to identify a series of characters in terms of their associated pinyin and English translation prompts. Acquisition was shown to proceed more rapidly when the Chinese character was…

  20. A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word-Order Universal

    ERIC Educational Resources Information Center

    Culbertson, Jennifer; Smolensky, Paul

    2012-01-01

    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized…

  1. How I Came to Understand that My Students Would Need Training Wings in Order to Learn to Fly

    ERIC Educational Resources Information Center

    Corrigan, Paul T.

    2011-01-01

    The author began his first year teaching at an open-enrollment university with the belief that "most students can learn to do intellectual work, if they are only given the opportunity." This belief is inspired by the research on teaching and learning and is rooted in the characteristic idealism of teachers. He had seen the principle borne out in…

  2. Theory and Practice: How Filming "Learning in the Real World" Helps Students Make the Connection

    ERIC Educational Resources Information Center

    Commander, Nannette Evans; Ward, Teresa E.; Zabrucky, Karen M.

    2012-01-01

    This article describes an assignment, titled "Learning in the Real World," designed for graduate students in a learning theory course. Students work in small groups to create high quality audio-visual films that present "real learning" through interviews and/or observations of learners. Students select topics relevant to theories we are discussing…

  3. Social Studies: K-9 Supplementary Learning Resources.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg. Curriculum Development Branch.

    This annotated bibliography contains approximately 350 citations of learning resources for the series of K-9 guides designed for the social studies curriculum in Manitoba, Canada (SO 014 225-231). Intended for teachers and students, the bibliography includes listings of guides, manuals, books, booklets, filmstrips, audiovisual kits, cassettes,…

  4. The Learning A-V Awards 1982.

    ERIC Educational Resources Information Center

    Abraham, Lisanne

    1982-01-01

    Teachers, audiovisual librarians, and curriculum specialists evaluated educational films and filmstrips for "Learning" magazine and selected 39 outstanding examples. Their choices are categorized under: (1) language arts/reading; (2) social studies; (3) science; (4) music; (5) values and guidance; (6) health and safety; and (7) computer science.…

  5. Forecasting Urban Water Demand via Machine Learning Methods Coupled with a Bootstrap Rank-Ordered Conditional Mutual Information Input Variable Selection Method

    NASA Astrophysics Data System (ADS)

    Adamowski, J. F.; Quilty, J.; Khalil, B.; Rathinasamy, M.

    2014-12-01

This paper explores forecasting short-term urban water demand (UWD) (using only historical records) through a variety of machine learning techniques coupled with a novel input variable selection (IVS) procedure. The proposed IVS technique, termed bootstrap rank-ordered conditional mutual information for real-valued signals (brCMIr), is multivariate, nonlinear, nonparametric, and probabilistic. The brCMIr method was tested in a case study using water demand time series for two urban water supply system pressure zones in Ottawa, Canada, to select the most important historical records for use with each machine learning technique in order to generate forecasts of average and peak UWD for the respective pressure zones at lead times of 1, 3, and 7 days ahead. All lead time forecasts are computed using Artificial Neural Networks (ANN) as the base model, and are compared with Least Squares Support Vector Regression (LSSVR), as well as a novel machine learning method for UWD forecasting: the Extreme Learning Machine (ELM). Results from one-way analysis of variance (ANOVA) and Tukey Honest Significant Difference (HSD) tests indicate that the LSSVR and ELM models are the best machine learning techniques to pair with brCMIr. However, ELM has significant computational advantages over LSSVR (and ANN) and provides a new and promising technique to explore in UWD forecasting.
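The rank-ordered mutual-information idea behind brCMIr can be illustrated in a much-simplified form: estimate the mutual information between the forecast target and each lagged copy of an input series, then rank the lags. The sketch below uses a plain histogram MI estimate and omits the bootstrap and conditional (redundancy-aware) parts of the actual brCMIr procedure; all function names are hypothetical.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of mutual information (in nats) between two series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def rank_lagged_inputs(series, target, max_lag=7, bins=8):
    """Rank candidate lags of `series` by MI with `target`, highest first."""
    scores = {lag: mutual_information(series[:-lag], target[lag:], bins)
              for lag in range(1, max_lag + 1)}
    return sorted(scores, key=scores.get, reverse=True)
```

For example, if the demand series depends strongly on the input three days back, lag 3 should surface at the head of the ranking, and only the top-ranked lags would be fed to the downstream ANN/LSSVR/ELM models.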

  6. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    PubMed

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200ms over a wide central area, the second at 280-320ms over the fronto-central area, and a third at 380-440ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing. PMID:27392755

  7. The duration of uncertain times: audiovisual information about intervals is integrated in a statistically optimal fashion.

    PubMed

    Hartcher-O'Brien, Jess; Di Luca, Massimiliano; Ernst, Marc O

    2014-01-01

Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies - i.e. considering the time of onset/offset of signals - might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates. PMID:24594578
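The "statistically optimal" weighting this abstract refers to is the standard maximum-likelihood cue-combination rule: each cue is weighted by its inverse variance, and the fused estimate's variance equals the harmonic combination of the cue variances, so it is never worse than the better cue alone. A minimal sketch (the function name is hypothetical):

```python
def fuse_duration(est_a, var_a, est_v, var_v):
    """Reliability-weighted (maximum-likelihood) fusion of two duration cues.

    est_a, est_v: auditory and visual duration estimates (e.g., in ms).
    var_a, var_v: their variances; weights are proportional to 1/variance.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    fused = w_a * est_a + w_v * est_v
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_v)   # always <= min(var_a, var_v)
    return fused, fused_var
```

For instance, fusing a 500 ms auditory estimate and a 520 ms visual estimate of equal variance 100 yields a 510 ms estimate with variance 50, i.e., half that of either cue alone.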

  8. Can personality traits predict pathological responses to audiovisual stimulation?

    PubMed

    Yambe, Tomoyuki; Yoshizawa, Makoto; Fukudo, Shin; Fukuda, Hiroshi; Kawashima, Ryuta; Shizuka, Kazuhiko; Nanka, Shunsuke; Tanaka, Akira; Abe, Ken-ichi; Shouji, Tomonori; Hongo, Michio; Tabayashi, Kouichi; Nitta, Shin-ichi

    2003-10-01

pathophysiological reaction to the audiovisual stimulations. As for photosensitive epilepsy, it was reported in only 5-10% of all patients; in 90% or more of the patients who showed a morbid response, therefore, the cause could not be determined. The results in this study suggest that autonomic function was connected to the mental tendencies of the subjects. By examining such directivity, it is expected that subjects who show a morbid reaction to audiovisual stimulation can be screened beforehand. PMID:14572681

  9. The Audio-Visual Services in Fifteen African Countries. Comparative Study on the Administration of Audio-Visual Services in Advanced and Developing Countries. Part Four. First Edition.

    ERIC Educational Resources Information Center

    Jongbloed, Harry J. L.

    As the fourth part of a comparative study on the administration of audiovisual services in advanced and developing countries, this UNESCO-funded study reports on the African countries of Cameroun, Republic of Central Africa, Dahomey, Gabon, Ghana, Kenya, Libya, Mali, Nigeria, Rwanda, Senegal, Swaziland, Tunisia, Upper Volta and Zambia. Information…

  10. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    SciTech Connect

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J. . E-mail: pjkeall@vcu.edu

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.

  11. Higher-order conditioning of taste-odor learning in rats: Evidence for the association between emotional aspects of gustatory information and olfactory information.

    PubMed

    Onuma, Takuya; Sakai, Nobuyuki

    2016-10-01

Previous studies have shown that rats prefer an odor paired with saccharin solution to an odor paired with quinine solution (taste-odor learning). However, it remains unclear whether the odors are associated with the emotional (i.e., positive and/or negative hedonics) or qualitative (i.e., sweetness and/or bitterness) aspects of gustatory information. This study aimed to examine this question using higher-order conditioning paradigms: second-order conditioning (SOC) and sensory preconditioning (SPC). Adult Wistar rats were divided into SOC and SPC groups. Food flavors, purchased from a Japanese market, such as melon (0.05%), lemon (0.1%), vanilla (0.1%), and almond (0.1%), were randomly used as odors A, B, C, and D for each rat. The SOC group was exposed to 0.005 M saccharin solutions with odor A and 0.02 M quinine solutions with odor C in the first 5 days of learning. Additionally, they were exposed to water with a mixture of odors A and B, and water with a mixture of odors C and D, in the next 5 days of learning. The order of these two learning sessions was reversed in the SPC group. We hypothesized that if odor was associated with the emotional, or qualitative, aspects of gustatory information, the SOC, or SPC groups, respectively, would prefer odor B to odor D. Our results showed that the SOC group preferred odor B to odor D, whereas the SPC group did not show any such preference. This suggests that odors may be primarily associated with emotion evoked by gustation in taste-odor learning. PMID:27342429

  12. Audiovisual temporal recalibration occurs independently at two different time scales.

    PubMed

    Van der Burg, Erik; Alais, David; Cass, John

    2015-01-01

    Combining signals across the senses improves precision and speed of perception, although this multisensory benefit declines for asynchronous signals. Multisensory events may produce synchronized stimuli at source but asynchronies inevitably arise due to distance, intensity, attention and neural latencies. Temporal recalibration is an adaptive phenomenon that serves to perceptually realign physically asynchronous signals. Recently, it was discovered that temporal recalibration occurs far more rapidly than previously thought and does not require minutes of adaptation. Using a classical audiovisual simultaneity task and a series of brief flashes and tones varying in onset asynchrony, perceived simultaneity on a given trial was found to shift in the direction of the preceding trial's asynchrony. Here we examine whether this inter-trial recalibration reflects the same process as prolonged adaptation by combining both paradigms: participants adapted to a fixed temporal lag for several minutes followed by a rapid series of test trials requiring a synchrony judgment. Interestingly, we find evidence of recalibration from prolonged adaptation and inter-trial recalibration within a single experiment. We show a dissociation in which sustained adaptation produces a large but decaying recalibration effect whilst inter-trial recalibration produces large transient effects whose sign matches that of the previous trial. PMID:26455577

  13. Impact of language on functional connectivity for audiovisual speech integration.

    PubMed

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-Aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and Heschl's gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  14. Impact of language on functional connectivity for audiovisual speech integration

    PubMed Central

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and Heschl’s gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  15. Audiovisual speech perception development at varying levels of perceptual processing.

    PubMed

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  16. Audio-visual perception system for a humanoid robotic head.

    PubMed

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, for interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have rarely been evaluated in real scenarios: most of the tests conducted have been within controlled environments, at short distances, and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared against unimodal systems, taking their technical limitations into account. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
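
    The paper's Bayes-inference fusion is not specified in the abstract, but the general idea of combining a noisy audio direction-of-arrival with a visual detection can be sketched as a posterior over candidate bearings. This is a minimal illustration with invented Gaussian noise models, not the authors' implementation:

```python
import math

def bayes_fuse(directions, prior, audio_obs, audio_sigma, visual_obs, visual_sigma):
    """Posterior over candidate talker directions given audio and visual cues.

    Each modality contributes a Gaussian likelihood centred on its own noisy
    observation; the posterior is prior * likelihoods, renormalised.
    """
    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    post = [p * gauss(d, audio_obs, audio_sigma) * gauss(d, visual_obs, visual_sigma)
            for d, p in zip(directions, prior)]
    total = sum(post)
    return [p / total for p in post]

# 5-degree grid of bearings with a flat prior: a noisy audio DOA at 20 deg
# (sigma 8 deg) and a sharper visual detection at 10 deg (sigma 2 deg).
grid = list(range(-90, 95, 5))
prior = [1.0 / len(grid)] * len(grid)
posterior = bayes_fuse(grid, prior, 20.0, 8.0, 10.0, 2.0)
best = grid[posterior.index(max(posterior))]  # fused estimate stays near vision
```

    Because the posterior multiplies the two likelihoods, the sharper modality dominates the fused estimate; with comparable noise levels the estimate falls between the two cues.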

  17. The influence of task on gaze during audiovisual speech perception

    NASA Astrophysics Data System (ADS)

    Buchan, Julie; Paré, Martin; Yurick, Micheal; Munhall, Kevin

    2001-05-01

    In natural conversation, visual and auditory information about speech provide not only linguistic information but also information about the identity and the emotional state of the speaker. Thus, listeners must process a wide range of information in parallel to understand the full meaning of a message. In this series of studies, we examined how different types of visual information conveyed by a speaker's face are processed by measuring the gaze patterns exhibited by subjects watching audiovisual recordings of spoken sentences. In three experiments, subjects were asked to judge the emotion and the identity of the speaker, and to report the words that they heard under different auditory conditions. As in previous studies, the eye and mouth regions dominated the distribution of gaze fixations. It was hypothesized that the eyes would attract more fixations in social judgment tasks than in tasks relying more on verbal comprehension. Our results support this hypothesis. In addition, the location of gaze on the face did not influence the accuracy of the perception of speech in noise.

  18. Audio-Visual Perception System for a Humanoid Robotic Head

    PubMed Central

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, for interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have rarely been evaluated in real scenarios: most of the tests conducted have been within controlled environments, at short distances, and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared against unimodal systems, taking their technical limitations into account. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593

  19. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  20. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    PubMed

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc. PMID:26402725

  1. Bibliographic control of audiovisuals: analysis of a cataloging project using OCLC.

    PubMed

    Curtis, J A; Davison, F M

    1985-04-01

    The staff of the Quillen-Dishner College of Medicine Library cataloged 702 audiovisual titles between July 1, 1982, and June 30, 1983, using the OCLC database. This paper discusses the library's audiovisual collection and describes the method and scope of a study conducted during this project, the cataloging standards and conventions adopted, the assignment and use of NLM classification, the provision of summaries for programs, and the amount of staff time expended in cataloging typical items. An analysis of the use of OCLC for this project resulted in the following findings: the rate of successful searches for audiovisual copy was 82.4%; the error rate for records used was 41.9%; modifications were required in every record used; the Library of Congress and seven member institutions provided 62.8% of the records used. It was concluded that the effort to establish bibliographic control of audiovisuals is not widespread and that expanded and improved audiovisual cataloging by the Library of Congress and the National Library of Medicine would substantially contribute to that goal. PMID:2581645

  2. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around this is to use as much information as possible from the scene. Since most video sequences have associated audio, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining low in complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. Results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the gain in compression efficiency is analyzed.
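
    The abstract describes an FoA algorithm based on the correlation of dynamics between the audio and video components. A minimal sketch of that idea, assuming a per-region motion-energy signal and an audio envelope sampled at the video frame rate (the function and the toy signals are illustrative, not the paper's algorithm):

```python
import numpy as np

def focus_of_attention(audio_env, motion_energy):
    """Index of the region whose motion energy best tracks the audio envelope.

    audio_env     : shape (T,)  audio energy per video frame
    motion_energy : shape (T, R) motion energy per frame for R candidate regions
    """
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-12)
    m = (motion_energy - motion_energy.mean(axis=0)) / (motion_energy.std(axis=0) + 1e-12)
    corr = (m * a[:, None]).mean(axis=0)  # Pearson correlation per region
    return int(np.argmax(np.abs(corr)))

# Toy scene with three candidate regions; only region 1's motion
# follows the audio envelope, so it is selected as the focus.
t = np.linspace(0.0, 2.0 * np.pi, 100)
audio = np.sin(t) + 1.5
motion = np.stack([np.random.default_rng(0).random(100),  # unrelated motion
                   np.sin(t) + 1.0,                       # tracks the audio
                   np.cos(t) + 1.0], axis=1)              # out of phase
foa = focus_of_attention(audio, motion)
```

    The selected region would then be encoded at high quality while the periphery is compressed more aggressively.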

  3. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection.

    PubMed

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  4. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection

    PubMed Central

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  5. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    PubMed Central

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

    In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for “reading” texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability, among other brain regions, significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the “bottleneck” for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object

  6. The Use of System Thinking Concepts in Order to Assure Continuous Improvement of Project Based Learning Courses

    ERIC Educational Resources Information Center

    Arantes do Amaral, Joao Alberto; Gonçalves, Paulo

    2015-01-01

    This case study describes a continuous improvement experience, conducted from 2002 to 2014 in Sao Paulo, Brazil, within 47 Project-Based Learning MBA courses, involving approximately 1,400 students. The experience report will focus on four themes: (1) understanding the main dynamics present in MBA courses; (2) planning a systemic intervention in…

  7. Mainland China -- An Abacus and the Hoes. Learning Activity Package, Social Studies, Grade 8. [And] Teacher's Guide.

    ERIC Educational Resources Information Center

    Myers, Amy; Kiracofe, Rolland

    Developed for the Carroll County Public Schools, this Learning Activity Package (LAP) for grade 8 offers a way to provide individualized learning about China before the Communists came to power. Learning activities are based on curriculum and audiovisual materials available in the Carroll County Schools. The focus of the unit is on the life styles…

  8. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    PubMed Central

    Alm, Magnus; Behne, Dawn

    2015-01-01

    Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20–30 years) and middle-aged adults (50–60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood, recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females’ AV perceptual strategy toward more visually dominated responses. PMID:26236274

  9. Omnidirectional Audio-Visual Talker Localization Based on Dynamic Fusion of Audio-Visual Features Using Validity and Reliability Criteria

    NASA Astrophysics Data System (ADS)

    Denda, Yuki; Nishiura, Takanobu; Yamashita, Yoichi

    This paper proposes a robust omnidirectional audio-visual (AV) talker localizer for AV applications. The proposed localizer consists of two innovations. The first is robust omnidirectional audio and visual features: direction-of-arrival (DOA) estimation using an equilateral triangular microphone array and human position estimation using an omnidirectional video camera extract the AV features. The second is a dynamic fusion of the AV features. The validity criterion, called the audio- or visual-localization counter, validates each audio or visual feature. The reliability criterion, called the speech arriving evaluator, acts as a dynamic weight to eliminate any prior statistical properties from the fusion procedure. The proposed localizer can achieve both talker localization during speech activity and user localization during non-speech activity under the identical fusion rule. Talker localization experiments were conducted in an actual room to evaluate the effectiveness of the proposed localizer. The results confirmed that the talker localization performance of the proposed AV localizer using the validity and reliability criteria is superior to that of conventional localizers.
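
    The validity and reliability criteria described above can be caricatured as per-modality gating flags plus a dynamic weight. A hedged sketch, in which the function and its arguments are hypothetical stand-ins for the paper's localization counters and speech-arriving evaluator:

```python
def fuse_talker_direction(audio_deg, visual_deg,
                          audio_valid, visual_valid, speech_weight):
    """Dynamically fuse audio and visual direction estimates.

    audio_valid / visual_valid : validity flags (stand-ins for the paper's
        audio- or visual-localization counters) gating each modality.
    speech_weight : reliability in [0, 1]; near 1 during speech activity
        (favour the audio DOA), near 0 otherwise (favour vision).
    Returns the fused direction in degrees, or None if neither is valid.
    """
    if audio_valid and visual_valid:
        return speech_weight * audio_deg + (1.0 - speech_weight) * visual_deg
    if audio_valid:
        return audio_deg
    if visual_valid:
        return visual_deg
    return None
```

    During speech activity the fused bearing leans toward the audio DOA; outside speech activity, or when a validity counter rejects a modality, the remaining cue is used alone.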

  10. Tones and numbers: a combined EEG-MEG study on the effects of musical expertise in magnitude comparisons of audiovisual stimuli.

    PubMed

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Foroglou, Nikolaos; Bamidis, Panagiotis; Pantev, Christo

    2014-11-01

    This study investigated the cortical responses underlying magnitude comparisons of multisensory stimuli and examined the effect that musical expertise has on this process. The comparative judgments were based on a newly learned rule binding the auditory and visual stimuli within the context of magnitude comparisons: "the higher the pitch of the tone, the larger the number presented." The cortical responses were measured by simultaneous MEG/EEG recordings and a combined source analysis with individualized realistic head models was performed. Musical expertise effects were investigated by comparing musicians to non-musicians. Congruent audiovisual stimuli, corresponding to the newly learned rule, elicited activity in frontotemporal and occipital areas. In contrast, incongruent stimuli activated temporal and parietal regions. Musicians, when compared with non-musicians, showed increased differences between congruent and incongruent stimuli in a prefrontal region, thereby indicating that music expertise may affect multisensory comparative judgments within a generalized representation of analog magnitude. PMID:24916460

  11. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2003-01-01

    Digitalization of audiovisual resources, combined with the performance of networks, offers many possibilities which are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast, efficient retrieval from digital archives or filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding and gives access to audiovisual resources in streaming mode. PMID:14664072

  12. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2005-03-01

    Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant to MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video-platform which enables encoding and gives access to audiovisual resources in streaming mode. PMID:15694622
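
    As an illustration of the kind of record such an indexing system might hold, here is a minimal, hypothetical Dublin Core description of an audiovisual medical resource, with the subject taken from MeSH to support conceptual navigation (all field values are invented):

```python
# A minimal, hypothetical Dublin Core record for an audiovisual medical
# resource. Field names follow the Dublin Core element set; the subject
# is a MeSH heading so that retrieval can exploit conceptual navigation.
record = {
    "dc:title": "Example teaching video",      # invented title
    "dc:creator": "Example medical library",   # invented creator
    "dc:subject": "Myocardial Infarction",     # MeSH heading
    "dc:type": "MovingImage",                  # DCMI Type Vocabulary term
    "dc:format": "video/mp4",
    "dc:language": "en",
}
```

    Mapping each `dc:subject` to a MeSH heading (and, via the UMLS, to related concepts) is what allows a query for one concept to retrieve videos indexed under narrower or broader terms.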

  13. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated early despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256

  14. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    PubMed

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated early despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256

  15. Audiovisual correspondence between musical timbre and visual shapes

    PubMed Central

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, features such as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, most studies have utilized simple stimuli (e.g., simple tones). In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale), and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes, as well as some of the previous findings, for more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 self-described professional musicians, 47 self-described amateur musicians, and 36 self-described non-musicians. Thirty-one subjects also claimed to have synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green or light gray rounded shapes; harsh timbres with red, yellow or dark gray sharp angular shapes; and timbres having elements of softness and harshness together with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows designing substitution systems which might help the blind to perceive shapes through timbre. PMID:24910604

  16. The Development of Audio-Visual Integration for Temporal Judgements

    PubMed Central

    Adams, Wendy J.

    2016-01-01

    Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age. Younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4–11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps and vice versa. Furthermore, for all ages, the relative strength of these interactions was predicted by the relative reliabilities of the two modalities, in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality—they fully combined vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions. In contrast, switching between modalities reduces reliability. Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate, but followed a sub-optimal switching strategy, responding according to either visual or

  17. The Development of Audio-Visual Integration for Temporal Judgements.

    PubMed

    Adams, Wendy J

    2016-04-01

    Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age. Younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4-11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps and vice versa. Furthermore, for all ages, the relative strength of these interactions was predicted by the relative reliabilities of the two modalities, in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality-they fully combined vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions. In contrast, switching between modalities reduces reliability. 
Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate, but followed a sub-optimal switching strategy, responding according to either visual or auditory…
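The "optimal integration" benchmark in this abstract is the standard reliability-weighted (maximum-likelihood) cue-combination rule: each cue is weighted by its inverse variance, and the combined variance falls below both unimodal variances. A minimal sketch of that rule; the function name and all numbers are illustrative, not taken from the paper:

```python
import math

def combine(est_v, sigma_v, est_a, sigma_a):
    """Reliability-weighted (maximum-likelihood) combination of two cues.

    Reliability is inverse variance: the more reliable cue gets the
    larger weight, and the combined variance is below both unimodal
    variances -- the two signatures of optimal integration.
    """
    r_v, r_a = 1.0 / sigma_v ** 2, 1.0 / sigma_a ** 2
    w_v = r_v / (r_v + r_a)
    combined = w_v * est_v + (1.0 - w_v) * est_a
    sigma_c = math.sqrt(1.0 / (r_v + r_a))
    return combined, sigma_c

# Vision reports "3 flashes" precisely; audition reports "4 beeps" less so.
est, sigma = combine(3.0, 0.5, 4.0, 1.0)
print(round(est, 2), round(sigma, 2))
```

Here the combined estimate (3.2) is pulled toward the more reliable visual cue and its standard deviation (about 0.45) is below both unimodal values; the model-comparison step asks whether observers' bimodal precision shows this predicted improvement or instead matches a single modality, as a switching strategy would predict.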

  18. Retinotopic effects during spatial audio-visual integration.

    PubMed

    Meienbrock, A; Naumer, M J; Doehrmann, O; Singer, W; Muckli, L

    2007-02-01

    The successful integration of visual and auditory stimuli requires information about whether visual and auditory signals originate from corresponding places in the external world. Here we report crossmodal effects of spatially congruent and incongruent audio-visual (AV) stimulation. Visual and auditory stimuli were presented from one of four horizontal locations in external space. Seven healthy human subjects had to assess the spatial fit of a visual stimulus (i.e. a gray-scaled picture of a cartoon dog) and a simultaneously presented auditory stimulus (i.e. a barking sound). Functional magnetic resonance imaging (fMRI) revealed two distinct networks of cortical regions that processed preferentially either spatially congruent or spatially incongruent AV stimuli. Whereas earlier visual areas responded preferentially to incongruent AV stimulation, higher visual areas of the temporal and parietal cortex (left inferior temporal gyrus [ITG], right posterior superior temporal gyrus/sulcus [pSTG/STS], left intra-parietal sulcus [IPS]) and frontal regions (left pre-central gyrus [PreCG], left dorsolateral pre-frontal cortex [DLPFC]) responded preferentially to congruent AV stimulation. A position-resolved analysis revealed three robust cortical representations for each of the four visual stimulus locations in retinotopic visual regions corresponding to the representation of the horizontal meridian in area V1 and at the dorsal and ventral borders between areas V2 and V3. While these regions of interest (ROIs) did not show any significant effect of spatial congruency, we found subregions within ROIs in the right hemisphere that showed an incongruency effect (i.e. an increased fMRI signal during spatially incongruent compared to congruent AV stimulation). 
We interpret this finding as a correlate of spatially distributed recurrent feedback during mismatch processing: whenever a spatial mismatch is detected in multisensory regions (such as the IPS), processing resources are re…

  19. AUDIOVISUAL RESOURCES ON THE TEACHING PROCESS IN SURGICAL TECHNIQUE

    PubMed Central

    PUPULIM, Guilherme Luiz Lenzi; IORIS, Rafael Augusto; GAMA, Ricardo Ribeiro; RIBAS, Carmen Australia Paredes Marcondes; MALAFAIA, Osvaldo; GAMA, Mirnaluci

    2015-01-01

    Background: The development of didactic means to create opportunities for complete and repetitive viewing of surgical procedures is of great importance nowadays due to the increasing difficulty of in vivo training. Thus, audiovisual resources maximize the living resources used in education and minimize the problems that arise from verbalism alone. Aim: To evaluate the use of digital video as a pedagogical strategy in surgical technique teaching in medical education. Methods: Cross-sectional study with 48 third-year medical students enrolled in the surgical technique discipline. They were divided into two groups of 12 pairs, both subject to the conventional method of teaching, one of which was also exposed to an alternative method (video) showing the technical details. All students performed phlebotomy in the experimental laboratory, with evaluation and assistance from the teacher/monitor during execution. Finally, they answered a self-administered questionnaire about the teaching method after performing the operation. Results: Most of those who did not watch the video took longer to execute the procedure, asked more questions, and needed more faculty assistance. All of those exposed to the video followed the chronology of implementation and approved the new method; 95.83% felt able to repeat the procedure by themselves, whereas 62.5% of the students who had only the conventional method reported a merely regular capacity for assimilating the technique. Both groups reported regular difficulty, but those who had not seen the video had more difficulty performing the technique. Conclusion: The traditional method of teaching associated with the video favored understanding and conveyed safety, particularly because the activity requires technical skill. The technique with video visualization motivated and aroused interest, and facilitated the understanding and memorization of the steps of the procedure, benefiting the…

  20. Effect of acoustic fine structure cues on the recognition of auditory-only and audiovisual speech.

    PubMed

    Meister, Hartmut; Fuersen, Katrin; Schreitmueller, Stefan; Walger, Martin

    2016-06-01

    This study addressed the hypothesis that an improvement in speech recognition due to combined envelope and fine structure cues is greater in the audiovisual than the auditory modality. Normal hearing listeners were presented with envelope vocoded speech in combination with low-pass filtered speech. The benefit of adding acoustic low-frequency fine structure to acoustic envelope cues was significantly greater for audiovisual than for auditory-only speech. It is suggested that this is due to complementary information of the different acoustic and visual cues. The results have potential implications for the assessment of bimodal cochlear implant fittings or electroacoustic stimulation. PMID:27369134

  1. Do we need to overcome barriers to learning in the workplace for foundation trainees rotating in neurosurgery in order to improve training satisfaction?

    PubMed

    Phan, Pho Nh; Patel, Keyur; Bhavsar, Amar; Acharya, Vikas

    2016-01-01

    Junior doctors go through a challenging transition upon qualification; this repeats every time they start a rotation in a new department. Foundation level doctors (first 2 years postqualification) in neurosurgery are often new to the specialty and face various challenges that may result in significant workplace dissatisfaction. The neurosurgical environment is a clinically demanding area with a high volume of unwell patients and frequent emergencies - this poses various barriers to learning in the workplace for junior doctors. We identify a number of key barriers and review ideas that can be trialed in the department to overcome them. Through an evaluation of current suggestions in the literature, we propose that learning opportunities need to be made explicit to junior doctors in order to encourage them to participate as a member of the team. We consider ideas for adjustments to the induction program and the postgraduate medical curriculum to shift the focus from medical knowledge to improving confidence and clinical skills in newly qualified doctors. Despite being a powerful window for opportunistic learning, the daily ward round is unfortunately not maximized and needs to be more learner focused while maintaining efficiency and time consumption. Finally, we put forward the idea of an open forum where trainees can talk about their learning experiences, identify subjective barriers, and suggest solutions to senior doctors. This would be achieved through departmental faculty development. These interventions are presented within the context of the neurosurgical ward; however, they are transferable and can be adapted in other specialties and departments. PMID:27099543

  2. Do we need to overcome barriers to learning in the workplace for foundation trainees rotating in neurosurgery in order to improve training satisfaction?

    PubMed Central

    Phan, Pho NH; Patel, Keyur; Bhavsar, Amar; Acharya, Vikas

    2016-01-01

    Junior doctors go through a challenging transition upon qualification; this repeats every time they start a rotation in a new department. Foundation level doctors (first 2 years postqualification) in neurosurgery are often new to the specialty and face various challenges that may result in significant workplace dissatisfaction. The neurosurgical environment is a clinically demanding area with a high volume of unwell patients and frequent emergencies – this poses various barriers to learning in the workplace for junior doctors. We identify a number of key barriers and review ideas that can be trialed in the department to overcome them. Through an evaluation of current suggestions in the literature, we propose that learning opportunities need to be made explicit to junior doctors in order to encourage them to participate as a member of the team. We consider ideas for adjustments to the induction program and the postgraduate medical curriculum to shift the focus from medical knowledge to improving confidence and clinical skills in newly qualified doctors. Despite being a powerful window for opportunistic learning, the daily ward round is unfortunately not maximized and needs to be more learner focused while maintaining efficiency and time consumption. Finally, we put forward the idea of an open forum where trainees can talk about their learning experiences, identify subjective barriers, and suggest solutions to senior doctors. This would be achieved through departmental faculty development. These interventions are presented within the context of the neurosurgical ward; however, they are transferable and can be adapted in other specialties and departments. PMID:27099543

  3. Beyond textbook illustrations: Hand-held models of ordered DNA and protein structures as 3D supplements to enhance student learning of helical biopolymers.

    PubMed

    Jittivadhna, Karnyupha; Ruenwongsa, Pintip; Panijpan, Bhinyo

    2010-11-01

    Textbook illustrations of 3D biopolymers on printed paper, regardless of how detailed and colorful, suffer from their two-dimensionality. For beginners, on-screen display of skeletal models of biopolymers and their animation usually does not provide the at-a-glance 3D perception and detail that good hand-held models can. Here, we report a study of how our students learned more from using our ordered DNA and protein models, assembled from colored computer printouts on transparency film, which carry useful structural details. Our models (reported in BAMBED 2009), having certain distinguishing features, helped our students grasp various aspects of these biopolymers that they usually find difficult. Quantitative and qualitative learning data from this study are reported. PMID:21567863

  4. Audiotactile temporal order judgments.

    PubMed

    Zampini, Massimiliano; Brown, Timothy; Shore, David I; Maravita, Angelo; Röder, Brigitte; Spence, Charles

    2005-03-01

    We report a series of three experiments in which participants made unspeeded 'Which modality came first?' temporal order judgments (TOJs) to pairs of auditory and tactile stimuli presented at varying stimulus onset asynchronies (SOAs) using the method of constant stimuli. The stimuli were presented from either the same or different locations in order to explore the potential effect of redundant spatial information on audiotactile temporal perception. In Experiment 1, the auditory and tactile stimuli had to be separated by nearly 80 ms for inexperienced participants to be able to judge their temporal order accurately (i.e., for the just noticeable difference (JND) to be achieved), no matter whether the stimuli were presented from the same or different spatial positions. More experienced psychophysical observers (Experiment 2) also failed to show any effect of relative spatial position on audiotactile TOJ performance, despite having much lower JNDs (40 ms) overall. A similar pattern of results was found in Experiment 3 when silent electrocutaneous stimulation was used rather than vibrotactile stimulation. Thus, relative spatial position seems to be a less important factor in determining performance for audiotactile TOJ than for other modality pairings (e.g., audiovisual and visuotactile). PMID:15698825
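With the method of constant stimuli, the JNDs reported above come from fitting a psychometric function to the proportion of "auditory first" responses across SOAs. A stdlib-only sketch with invented data (the response proportions, grid ranges, and the 75%-point convention are illustrative, not the paper's):

```python
import math

def cum_gauss(x, mu, sigma):
    # Cumulative Gaussian psychometric function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Illustrative data: proportion of "auditory first" responses at each SOA
# (ms; negative = touch led). These numbers are invented, not the paper's.
soas = [-120, -80, -40, 0, 40, 80, 120]
p_auditory_first = [0.05, 0.15, 0.35, 0.50, 0.65, 0.85, 0.95]

# Crude grid-search least-squares fit of (PSE, sigma).
best = None
for mu in range(-30, 31, 2):
    for sigma_tenths in range(200, 1500, 10):       # sigma: 20.0-149.0 ms
        sigma = sigma_tenths / 10.0
        err = sum((cum_gauss(x, mu, sigma) - p) ** 2
                  for x, p in zip(soas, p_auditory_first))
        if best is None or err < best[0]:
            best = (err, mu, sigma)

_, pse, sigma = best
jnd = 0.6745 * sigma   # SOA between the 50% and 75% points of the fit
print(pse, round(jnd, 1))
```

The PSE (point of subjective simultaneity) is the 50% point of the fitted curve; the JND is the extra SOA needed to move from 50% to 75% correct ordering, which for a cumulative Gaussian is 0.6745 times the fitted sigma.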

  5. A Methodological Approach to Support Collaborative Media Creation in an E-Learning Higher Education Context

    ERIC Educational Resources Information Center

    Ornellas, Adriana; Muñoz Carril, Pablo César

    2014-01-01

    This article outlines a methodological approach to the creation, production and dissemination of online collaborative audio-visual projects, using new social learning technologies and open-source video tools, which can be applied to any e-learning environment in higher education. The methodology was developed and used to design a course in the…

  6. Individually-Paced Learning in Civil Engineering Technology: An Approach to Mastery.

    ERIC Educational Resources Information Center

    Sharples, D. Kent; And Others

    An individually-paced, open-entry/open-ended mastery learning approach for a state-wide civil engineering technology curriculum was developed, field-tested, and evaluated. Learning modules relying heavily on audiovisuals and hands-on experience, and based on 163 identified competencies, were developed for 11 courses in the curriculum. Written…

  7. Looking Back--A Lesson Learned: From Videotape to Digital Media

    ERIC Educational Resources Information Center

    Lys, Franziska

    2010-01-01

    This paper chronicles the development of Drehort Neubrandenburg Online, an interactive, content-rich audiovisual language learning environment based on documentary film material shot on location in Neubrandenburg, Germany, in 1991 and 2002 and aimed at making language learning more interactive and more real. The paper starts with the description…

  8. Homebound Learning Opportunities: Reaching Out to Older Shut-ins and Their Caregivers.

    ERIC Educational Resources Information Center

    Penning, Margaret; Wasyliw, Douglas

    1992-01-01

    Describes Homebound Learning Opportunities, innovative health promotion and educational outreach service for homebound older adults and their caregivers. Notes that program provides over 125 topics for individualized learning programs delivered to participants in homes, audiovisual lending library, educational television programming, and peer…

  9. Student-Centred Learning: Toolkit for Students, Staff and Higher Education Institutions

    ERIC Educational Resources Information Center

    Attard, Angele; Di Iorio, Emma; Geven, Koen; Santa, Robert

    2010-01-01

    This Toolkit forms part of the project entitled "Time for a New Paradigm in Education: Student-Centred Learning" (T4SCL), jointly led by the European Students' Union (ESU) and Education International (EI). This is an EU-funded project under the Lifelong Learning Programme (LLP) administered by the Education, Audiovisual and Culture Executive…

  10. A possible neurophysiological correlate of audiovisual binding and unbinding in speech perception

    PubMed Central

    Ganesh, Attigodu C.; Berthommier, Frédéric; Vilain, Coriandre; Sato, Marc; Schwartz, Jean-Luc

    2014-01-01

    Audiovisual (AV) speech integration of auditory and visual streams generally ends up in a fusion into a single percept. One classical example is the McGurk effect in which incongruent auditory and visual speech signals may lead to a fused percept different from either visual or auditory inputs. In a previous set of experiments, we showed that if a McGurk stimulus is preceded by an incongruent AV context (composed of incongruent auditory and visual speech materials) the amount of McGurk fusion is largely decreased. We interpreted this result in the framework of a two-stage “binding and fusion” model of AV speech perception, with an early AV binding stage controlling the fusion/decision process and likely to produce “unbinding” with less fusion if the context is incoherent. In order to provide further electrophysiological evidence for this binding/unbinding stage, early auditory evoked N1/P2 responses were here compared during auditory, congruent and incongruent AV speech perception, according to either prior coherent or incoherent AV contexts. Following the coherent context, in line with previous electroencephalographic/magnetoencephalographic studies, visual information in the congruent AV condition was found to modify auditory evoked potentials, with a latency decrease of P2 responses compared to the auditory condition. Importantly, both P2 amplitude and latency in the congruent AV condition increased from the coherent to the incoherent context. Although potential contamination by visual responses from the visual cortex cannot be discarded, our results might provide a possible neurophysiological correlate of early binding/unbinding process applied on AV interactions. PMID:25505438
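Measuring "P2 amplitude and latency" as in this study typically means averaging the EEG epochs time-locked to sound onset and taking the positive peak within a post-stimulus window. A simulated sketch (the sampling rate, window bounds, and synthetic waveform are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

fs = 500                                   # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.5, 1.0 / fs)         # epoch: -100 ms to 500 ms
rng = np.random.default_rng(0)

# Simulated trials: a P2-like positivity peaking near 200 ms plus noise.
p2_wave = 5.0 * np.exp(-((t - 0.200) ** 2) / (2 * 0.025 ** 2))
epochs = p2_wave + rng.normal(0.0, 4.0, size=(60, t.size))

erp = epochs.mean(axis=0)                  # averaging cancels single-trial noise
win = (t >= 0.150) & (t <= 0.280)          # P2 search window, assumed bounds
peak_idx = np.flatnonzero(win)[np.argmax(erp[win])]
latency_ms = t[peak_idx] * 1000.0
amplitude = erp[peak_idx]
print(round(latency_ms), round(amplitude, 1))
```

The comparisons in the abstract (auditory vs. congruent AV, coherent vs. incoherent context) amount to computing this latency and amplitude separately per condition and testing the differences.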

  11. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    PubMed

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials. PMID:25914939

  12. SURVEY OF AUDIO-VISUAL EDUCATION IN HAWAII--ITS STATUS AND NEEDS.

    ERIC Educational Resources Information Center

    SCHULLER, CHARLES F.; AND OTHERS

    THE PURPOSES OF THE SURVEY WERE (1) TO MAKE AN OBJECTIVE ANALYSIS OF THE AUDIOVISUAL INSTRUCTION NEEDS OF THE PUBLIC EDUCATIONAL SYSTEMS OF THE STATE OF HAWAII, AND (2) TO MAKE SPECIFIC RECOMMENDATIONS AND SUGGESTIONS FOR SHORT AND LONG RANGE IMPROVEMENTS WHERE NEEDED. TOP PRIORITY RECOMMENDATIONS ARE RECORDED, INCLUDING SUGGESTED ALLOCATIONS OF…

  13. Audio-Visual Materials in Adult Consumer Education: An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Forgue, Raymond E.; And Others

    Designed to provide a quick but thorough reference for consumer educators of adults to use when choosing audio-visual materials, this annotated bibliography includes eighty-five titles from the currently available 1,500 films, slidesets, cassettes, records, and transparencies. (Materials were rejected because they were out-of-date; not relevant to…

  14. Talker and Lexical Effects on Audiovisual Word Recognition by Adults with Cochlear Implants.

    ERIC Educational Resources Information Center

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2003-01-01

    A study examined how 20 adults with postlingual deafness with cochlear implants combined visual information from lip reading with auditory cues in an open-set word recognition task. Word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation, and for single-talker…

  15. Psychophysics of the McGurk and Other Audiovisual Speech Integration Effects

    ERIC Educational Resources Information Center

    Jiang, Jintao; Bernstein, Lynne E.

    2011-01-01

    When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses, auditory correct, visual correct, fusion (the so-called "McGurk effect"), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for…

  16. The Use of Audiovisual Resources for Scholarly Research: A Jazz Archive as a Multidiscipline Resource.

    ERIC Educational Resources Information Center

    Griffin, Marie P.

    1985-01-01

    Examination of the jazz archive as a primary resource emphasizes research potential of jazz sound recordings as an example of use of audiovisual materials for scholarly research. Discussion covers field recordings, commercial recordings, noncommercial recordings, archival collections, musicological research, visual resources, audiovisual…

  17. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions

    ERIC Educational Resources Information Center

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo

    2013-01-01

    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…

  18. Audiovisuals for Nutrition Education. Nutrition Education Resource Series No. 9. Revised Edition.

    ERIC Educational Resources Information Center

    National Nutrition Education Clearing House, Berkeley, CA.

    This bibliography contains reviews of more than 250 audiovisual materials in eight subject areas related to nutrition: (1) general nutrition; (2) life cycle; (3) diet/health and disease; (4) health and athletics; (5) food - general; (6) food preparation and service; (7) food habits and preferences; and (8) food economics and concerns. Materials…

  19. Audiovisuals for Nutrition Education; Selected Evaluative Reviews from the Journal of Nutrition Education.

    ERIC Educational Resources Information Center

    Rowe, Sue Ellen, Comp.

    Audiovisual materials suitable for the teaching of nutrition are listed. Materials include coloring books, flannelboard stories, games, kits, audiotapes, records, charts, posters, study prints, films, videotapes, filmstrips, slides, and transparencies. Each entry contains bibliographic data, educational level, price and evaluation. Material is…

  20. Audiovisual Records in the National Archives Relating to Black History. Preliminary Draft.

    ERIC Educational Resources Information Center

    Waffen, Leslie; And Others

    A representative selection of the National Archives and Records Services' audiovisual collection relating to black history is presented. The intention is not to provide an exhaustive survey, but rather to indicate the breadth and scope of materials available for study and to suggest areas for concentrated research. The materials include sound…

  1. An Audio-Visual Tutorial Laboratory Program for Introductory Geology. Final Report.

    ERIC Educational Resources Information Center

    Sweet, Walter C.; Bates, Robert L.

    Described is an audio-visual tutorial laboratory program designed to provide a uniform, regularly reinforced, programmatic introduction to a limited set of geologic concepts and features; to provide a sequence of problem-solving exercises on which the student can work as an individual and in which he is required repeatedly to use all elements of…

  2. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-17

    ..., and are the parties upon which the amended complaint is to be served: Funai Electric Company, Ltd., 7... COMMISSION Certain Audiovisual Components and Products Containing the Same; Institution of Investigation... products containing the same by reason of infringement of certain claims of U.S. Patent No....

  3. Source Catalog for Audio-Visual Materials in Journalism and Communications Education.

    ERIC Educational Resources Information Center

    Colldeweih, Jack H.; Murray, Michael

    This catalog lists audiovisual materials that some members of the Association for Education in Journalism (AEJ) thought would be useful resources for other AEJ members. The catalog resulted from a questionnaire to which 36 of 190 (19%) AEJ member institutions responded. Part one covers films, filmstrips, slides, transparencies, and visual…

  4. A Portable Presentation Package for Audio-Visual Instruction. Technical Documentary Report.

    ERIC Educational Resources Information Center

    Smith, Edgar A.; And Others

    The Portable Presentation Package is a prototype of an audiovisual equipment package designed to facilitate technical training in remote areas, situations in which written communications are difficult, or in situations requiring rapid presentation of instructional material. The major criteria employed in developing the package were (1) that the…

  5. Audiovisual Education in Primary Schools: A Curriculum Project in the Netherlands.

    ERIC Educational Resources Information Center

    Ketzer, Jan W.

    Audiovisual, or mass media, education can play a significant role in children's social, emotional, cognitive, sensory, motor, and creative development. The field includes all school activities which teach children to interact with and visualize ideas. Students can be involved in…

  6. Selective Review of the Results of Research on the Use of Audiovisual Media to Teach Adults.

    ERIC Educational Resources Information Center

    Campeau, Peggie L.

    The purpose of this literature review was to summarize results of experimental studies on the instructional effectiveness of audiovisual media in post-secondary education. Studies which met seven major screening criteria were used. A study was generally accepted if it compared performance of experimental and control groups on objective measures of…

  7. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  8. Hotel and Restaurant Management; A Bibliography of Books and Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Malkames, James P.; And Others

    This bibliography represents a collection of 1,300 book volumes and audiovisual materials collected by the Luzerne County Community College Library in support of the college's Hotel and Restaurant Management curriculum. It covers such diverse topics as advertising, business practices, decoration, nutrition, hotel law, insurance, landscaping, health…

  9. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space. PMID:26226930
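"Quantified in the spectral domain" refers to the frequency-tagging readout used in steady-state paradigms: each stimulus pulses at its own rate, and its processing is indexed by the FFT amplitude at exactly that frequency. A minimal sketch on synthetic data (the component amplitudes, noise level, and recording length are illustrative assumptions):

```python
import numpy as np

fs, dur = 500.0, 100.0                    # 100 s of "EEG" -> 0.01-Hz resolution
t = np.arange(0, dur, 1.0 / fs)
f1, f2 = 3.14, 3.63                       # the two tagging rates (Hz)
rng = np.random.default_rng(1)

# Synthetic signal: one steady-state component per stimulus, plus noise.
signal = (1.0 * np.sin(2 * np.pi * f1 * t)
          + 0.4 * np.sin(2 * np.pi * f2 * t)
          + 0.2 * rng.standard_normal(t.size))

# Amplitude spectrum; with dur = 100 s, bin k sits at k * 0.01 Hz, so both
# tagging frequencies fall exactly on a bin (no spectral leakage).
spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / t.size
for f in (f1, f2):
    print(f, round(spectrum[int(round(f * dur))], 2))  # amplitude at the tag
```

Attention and audio-visual synchrony effects like those above would show up as larger amplitudes at a stimulus's own tagging frequency in the attended or in-sync conditions.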

  10. Comparisons of Audio and Audiovisual Measures of Stuttering Frequency and Severity in Preschool-Age Children

    ERIC Educational Resources Information Center

    Rousseau, Isabelle; Onslow, Mark; Packman, Ann; Jones, Mark

    2008-01-01

    Purpose: To determine whether measures of stuttering frequency and measures of overall stuttering severity in preschoolers differ when made from audio-only recordings compared with audiovisual recordings. Method: Four blinded speech-language pathologists who had extensive experience with preschoolers who stutter measured stuttering frequency and…

  11. Project S.E.E. (Sex Equity in Education): Audiovisual Resources.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    Intended for use by students, parents, and educators of students of all ages, this resource list provides an annotated guide to over 120 audiovisual resources that promote sex equity in education. Resources are listed alphabetically and by subject matter. A list of over 50 film distributors and their addresses is also included. Resources include…

  12. Teacher's Guide to Aviation Education Resources. Including: Career Information, Audiovisuals, Publications, Periodicals.

    ERIC Educational Resources Information Center

    Federal Aviation Administration (DOT), Washington, DC. Office of Public Affairs.

    Currently available aviation education resource materials are listed alphabetically by title under four headings: (1) career information; (2) audiovisual materials; (3) publications; and (4) periodicals. Each entry includes: title; format (16mm film, slides, slide/tape presentation, VHS/Beta videotape, book, booklet, newsletter, pamphlet, poster,…

  13. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false What are special... unaltered copy of each version for record purposes. (d) Link audiovisual records with their finding aids... elements (e.g., photographic prints and negatives, or original edited masters and dubbing for video...

  14. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    PubMed

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation. PMID:26283946

  15. The Use of Video as an Audio-visual Material in Foreign Language Teaching Classroom

    ERIC Educational Resources Information Center

    Cakir, Ismail

    2006-01-01

    In recent years, the tendency to use technology and integrate it into the curriculum has gained great importance. In particular, the use of video as an audio-visual material in foreign language teaching classrooms has grown rapidly because of the increasing emphasis on communicative techniques, and it is obvious that the use of…

  16. The Use of Video as an Audio-Visual Material in Foreign Language Teaching Classroom

    ERIC Educational Resources Information Center

    Çakir, Ismail

    2006-01-01

    In recent years, the tendency to use technology and integrate it into the curriculum has gained great importance. In particular, the use of video as an audio-visual material in foreign language teaching classrooms has grown rapidly because of the increasing emphasis on communicative techniques, and it is obvious that the use of…

  17. Technical Considerations in the Delivery of Audio-Visual Course Content.

    ERIC Educational Resources Information Center

    Lightfoot, Jay M.

    2002-01-01

    In an attempt to provide students with the benefit of the latest technology, some instructors include multimedia content on their class Web sites. This article introduces the basic terms and concepts needed to understand the multimedia domain. Provides a brief tutorial designed to help instructors create good, consistent audio-visual content. (AEF)

  18. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    ERIC Educational Resources Information Center

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  19. Catalogo de peliculas educativas y otros materiales audiovisuales (Catalogue of Educational Films and other Audiovisual Materials).

    ERIC Educational Resources Information Center

    Encyclopaedia Britannica, Inc., Chicago, IL.

    This catalogue of educational films and other audiovisual materials consists predominantly of films in Spanish and English which are intended for use in elementary and secondary schools. A wide variety of topics including films for social studies, language arts, humanities, physical and natural sciences, safety and health, agriculture, physical…

  20. Terminological Control of "Anonymous Groups" for Catalogues of Audiovisual Television Documents

    ERIC Educational Resources Information Center

    Caldera-Serrano, Jorge

    2006-01-01

    This article discusses the exceptional nature of the description of moving images for television archives, deriving from their audiovisual nature, and of the specifications in the queries of journalists as users of the Document Information System. It is suggested that there is a need to control completely "Anonymous Groups"--groups without any…

  1. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... in a manner that is clearly audible. If an advertisement has both a visual and an audio component... and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and...

  2. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or... records (e.g., for digital files, use file naming conventions), that clarify connections between related... audio recordings), and that associate records with the relevant creating, sponsoring, or...

  3. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or... records (e.g., for digital files, use file naming conventions), that clarify connections between related... audio recordings), and that associate records with the relevant creating, sponsoring, or...

  4. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or... records (e.g., for digital files, use file naming conventions), that clarify connections between related... audio recordings), and that associate records with the relevant creating, sponsoring, or...

  5. Effects of Audio-Visual Information on the Intelligibility of Alaryngeal Speech

    ERIC Educational Resources Information Center

    Evitts, Paul M.; Portugal, Lindsay; Van Dine, Ami; Holler, Aline

    2010-01-01

    Background: There is minimal research on the contribution of visual information on speech intelligibility for individuals with a laryngectomy (IWL). Aims: The purpose of this project was to determine the effects of mode of presentation (audio-only, audio-visual) on alaryngeal speech intelligibility. Method: Twenty-three naive listeners were…

  6. Modular Audio-Visual Multimedia Programming Concept; Electronic Blueprint Reading. Study Report 1.

    ERIC Educational Resources Information Center

    Suchesk, Arthur M.

    The concept of Modular Audiovisual Multimedia Programming, which is generally applicable to meeting the need for automated mass training, has been implemented in an electronic blueprint reading course for industrial employees. A preliminary study revealed that the average prospective student was 25 to 35 years old, limited to a high school…

  7. Strategies for Media Literacy: Audiovisual Skills and the Citizenship in Andalusia

    ERIC Educational Resources Information Center

    Aguaded-Gomez, Ignacio; Perez-Rodriguez, M. Amor

    2012-01-01

    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (society-network), where information and communication…

  8. Media: An Annotated Catalogue of Law-Related Audio-Visual Materials. Working Notes No. 8.

    ERIC Educational Resources Information Center

    Davison, Susan E., Ed.

    Over 400 films, filmstrips, audio cassettes, video tapes, and mixed media kits are described in this annotated catalogue of law-related materials for elementary and secondary education. The catalogue is divided into seven major content areas. Part one lists the audiovisual materials that deal with the history of the U.S. system of law, as well as…

  9. Storage, Handling and Preservation of Audiovisual Materials. AV in Action 3.

    ERIC Educational Resources Information Center

    Thompson, Anthony Hugh

    Designed to provide the librarian with suggestions and guidelines for storing and preserving audiovisual materials, this pamphlet is divided into four major chapters: (1) Normal Use Storage Conditions; (2) Natural Lifetime, Working Lifetime and Long-Term Storage; (3) Handling; and (4) Shelving of Normal Use Materials. Topics addressed include:…

  10. Anglo-American Cataloging Rules. Chapter Twelve, Revised. Audiovisual Media and Special Instructional Materials.

    ERIC Educational Resources Information Center

    American Library Association, Chicago, IL.

    Chapter 12 of the Anglo-American Cataloging Rules has been revised to provide rules for works in the principal audiovisual media (motion pictures, filmstrips, videorecordings, slides, and transparencies) as well as instructional aids (charts, dioramas, flash cards, games, kits, microscope slides, models, and realia). The rules for main and added…

  11. The Education, Audiovisual and Culture Executive Agency: Helping You Grow Your Project

    ERIC Educational Resources Information Center

    Education, Audiovisual and Culture Executive Agency, European Commission, 2011

    2011-01-01

    The Education, Audiovisual and Culture Executive Agency (EACEA) is a public body created by a Decision of the European Commission and operates under its supervision. It is located in Brussels and has been operational since January 2006. Its role is to manage European funding opportunities and networks in the fields of education and training,…

  12. A Working Bibliography of Commercially Available Audio-Visual Materials for the Teaching of Library Science.

    ERIC Educational Resources Information Center

    Lieberman, Irving

    Commercially available audiovisual materials to be used in conjunction with the teaching of library science is the subject of this selective, non-evaluative bibliography. Unless otherwise specified, the entries are films. The citations were found in various general bibliographies. Annotations are presented as given in the various bibliographic…

  13. Evaluation of an audiovisual-FM system: speechreading performance as a function of distance.

    PubMed

    Gagné, Jean-Pierre; Charest, Monique; Le Monday, K; Desbiens, C

    2006-05-01

    A research program was undertaken to evaluate the efficacy of an audiovisual-FM system as a speechreading aid. The present study investigated the effects of the distance between the talker and the speechreader on a visual-speech perception task. Sentences were recorded simultaneously with a conventional Hi8 mm video camera, and with the microcamera of an audiovisual-FM system. The recordings were obtained from two talkers at three different distances: 1.83 m, 3.66 m, and 7.32 m. Sixteen subjects completed a visual-keyword recognition task. The main results of the investigation were as follows: For the recordings obtained with the conventional video camera, there was a significant decrease in speechreading performance as the distance between the talker and the camera increased. For the recordings obtained with the microcamera of the audiovisual-FM system, there were no differences in speechreading as a function of the test distances. The findings of the investigation confirm that in a classroom setting the use of an audiovisual-FM system may constitute an effective way of overcoming the deleterious effects of distance on speechreading performance. PMID:16717020

  14. The Audio-Visual Equipment Directory. 1973-74, Nineteenth Edition.

    ERIC Educational Resources Information Center

    Herickes, Sally, Ed.

    The directory lists 1,715 products of both members and non-members of the National Audio-Visual Association alphabetically by company name under product type. Each description shows a picture of the product and gives specifications, other models, and a price. Appended are an audience capacity chart, an index of 16mm projector models, a list of…

  15. A Study to Formulate Quantitative Guidelines for the Audio-Visual Communications Field. Final Report.

    ERIC Educational Resources Information Center

    Faris, Gene; Sherman, Mendel

    Quantitative guidelines for use in determining the audiovisual (AV) needs of educational institutions were developed at the October 14-16, 1965 seminar of the NDEA (National Defense Education Act) Faris-Sherman study. The guidelines that emerged were based in part on a review of past efforts and existing standards but primarily reflected the…

  16. QUANTITATIVE STANDARDS FOR AUDIOVISUAL PERSONNEL, EQUIPMENT AND MATERIALS (IN ELEMENTARY, SECONDARY, AND HIGHER EDUCATION).

    ERIC Educational Resources Information Center

    COBUN, TED; AND OTHERS

    THIS DOCUMENT IS A STAGE IN A STUDY TO FORMULATE QUANTITATIVE GUIDELINES FOR THE AUDIO-VISUAL COMMUNICATIONS FIELD, BEING CONDUCTED BY DOCTORS GENE FARIS AND MENDEL SHERMAN UNDER A NATIONAL DEFENSE EDUCATION ACT CONTRACT. THE STANDARDS LISTED HERE HAVE BEEN OFFICIALLY APPROVED AND ADOPTED BY SEVERAL AGENCIES, INCLUDING THE DEPARTMENT OF…

  17. Temporal Interval Discrimination Thresholds Depend on Perceived Synchrony for Audio-Visual Stimulus Pairs

    ERIC Educational Resources Information Center

    van Eijk, Rob L. J.; Kohlrausch, Armin; Juola, James F.; van de Par, Steven

    2009-01-01

    Audio-visual stimulus pairs presented at various relative delays, are commonly judged as being "synchronous" over a range of delays from about -50 ms (audio leading) to +150 ms (video leading). The center of this range is an estimate of the point of subjective simultaneity (PSS). The judgment boundaries, where "synchronous" judgments yield to a…
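The point-of-subjective-simultaneity (PSS) estimate this abstract describes, the center of the range of delays judged "synchronous", reduces to simple arithmetic on the judgment boundaries. The sketch below uses the abstract's round boundary values (-50 ms audio leading, +150 ms video leading) purely for illustration; they are not per-subject data from the study.

```python
# Illustrative PSS computation from the "synchronous" judgment boundaries
# quoted in the abstract. Negative delays = audio leading, positive =
# video leading. Values are the abstract's round numbers, assumed here.
audio_lead_ms = -50.0   # boundary where audio-leading pairs stop seeming synchronous
video_lead_ms = 150.0   # boundary where video-leading pairs stop seeming synchronous

pss_ms = (audio_lead_ms + video_lead_ms) / 2   # point of subjective simultaneity
window_ms = video_lead_ms - audio_lead_ms      # width of the "synchronous" range

print(pss_ms, window_ms)
```

The positive PSS reflects the well-known asymmetry of the synchrony window: observers tolerate video-leading delays far better than audio-leading ones.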

  18. AUDIOVISUAL MEDIA IN THE PUBLIC SCHOOLS, 1961-64--A PROFILE OF CHANGE.

    ERIC Educational Resources Information Center

    GODFREY, ELEANOR P.; AND OTHERS

    A FOLLOWUP SURVEY WAS MADE OF 238 SCHOOL DISTRICTS OF VARIOUS SIZES ACROSS THE UNITED STATES TO SURVEY CHANGES IN THE USE OF AUDIOVISUAL RESOURCES OVER AN INTERVENING 3-YEAR PERIOD. THE SURVEY EXAMINED NOT ONLY THE EXTENT AND DURATION OF CHANGE, BUT ALSO THE IMPACT OF VARIOUS SCHOOL DISTRICT CHARACTERISTICS ON CHANGE. FROM QUESTIONNAIRES ADDRESSED…

  19. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation

    PubMed Central

    Banks, Briony; Gowen, Emma; Munro, Kevin J.; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker’s facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants’ eye gaze was recorded to verify that they looked at the speaker’s face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation. PMID:26283946

  20. Nutrition Education Materials and Audiovisuals for Grades Preschool through 6. Special Reference Briefs Series.

    ERIC Educational Resources Information Center

    Evans, Shirley King, Comp.

    This bibliography was prepared for educators interested in nutrition education materials, audiovisuals, and resources for classroom use. Items listed cover a range of topics including general nutrition, food preparation, food science, and dietary management. Teaching materials listed include food models, games, kits, videocassettes, and lesson…

  1. AUDIOVISUAL MATERIALS, THEIR NATURE AND USE. FOURTH EDITION. EXPLORATION SERIES IN EDUCATION.

    ERIC Educational Resources Information Center

    SCHULLER, CHARLES FRANCIS; WITTICH, WALTER ARNO

    THIS BOOK IS DESIGNED TO ACQUAINT PRESENT AND PROSPECTIVE TEACHERS WITH AUDIOVISUAL CLASSROOM TEACHING MATERIALS. DETAILS OF EQUIPMENT SELECTION AND UTILIZATION ARE GIVEN. SPECIFIC CHAPTERS ARE DEVOTED TO THE CHALKBOARD, FLAT PICTURES, GRAPHICS, 3-DIMENSIONAL TEACHING MATERIALS, STUDY DISPLAY, COMMUNITY STUDY, MAPS AND GLOBES, AUDIOLEARNING, TAPE…

  2. Automated Apprenticeship Training (AAT). A Systematized Audio-Visual Approach to Self-Paced Job Training.

    ERIC Educational Resources Information Center

    Pieper, William J.; And Others

    Two Automated Apprenticeship Training (AAT) courses were developed for Air Force Security Police Law Enforcement and Security specialists. The AAT was a systematized audio-visual approach to self-paced job training employing an easily operated teaching device. AAT courses were job specific and based on a behavioral task analysis of the two…

  3. Audiovisual Material as Educational Innovation Strategy to Reduce Anxiety Response in Students of Human Anatomy

    ERIC Educational Resources Information Center

    Casado, Maria Isabel; Castano, Gloria; Arraez-Aybar, Luis Alfonso

    2012-01-01

    This study presents the design, effect and utility of using audiovisual material containing real images of dissected human cadavers as an innovative educational strategy (IES) in the teaching of Human Anatomy. The goal is to familiarize students with the practice of dissection and to transmit the importance and necessity of this discipline, while…

  4. Non-Commercial Audiovisual Instructional Materials in Japan. AVE in Japan No. 24.

    ERIC Educational Resources Information Center

    Takakuwa, Yasuo

    This report outlines the history of non-commercial and local production of audiovisual instructional materials in Japan since World War II, discusses current trends in instructional materials usage, and presents four case studies of materials production at the prefectural level. Topics addressed include: (1) materials production prior to the…

  5. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music.

    PubMed

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech. PMID:25147539

  6. Audiovisual Translation and Assistive Technology: Towards a Universal Design Approach for Online Education

    ERIC Educational Resources Information Center

    Patiniotaki, Emmanouela

    2016-01-01

    Audiovisual Translation (AVT) and Assistive Technology (AST) are two fields that share common grounds within accessibility-related research, yet they are rarely studied in combination. The reason most often lies in the fact that they have emerged from different disciplines, i.e. Translation Studies and Computer Science, making a possible combined…

  7. MushyPeek: A Framework for Online Investigation of Audiovisual Dialogue Phenomena

    ERIC Educational Resources Information Center

    Edlund, Jens; Beskow, Jonas

    2009-01-01

    Evaluation of methods and techniques for conversational and multimodal spoken dialogue systems is complex, as is gathering data for the modeling and tuning of such techniques. This article describes MushyPeek, an experiment framework that allows us to manipulate the audiovisual behavior of interlocutors in a setting similar to face-to-face…

  8. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    ERIC Educational Resources Information Center

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  9. Learning about the Unfairgrounds: A 4th-Grade Teacher Introduces Her Students to Executive Order 9066

    ERIC Educational Resources Information Center

    Baydo-Reed, Katie

    2010-01-01

    Following the bombing of Pearl Harbor on Dec. 7, 1941, U.S. officials issued a series of proclamations that violated the civil and human rights of the vast majority of Japanese Americans in the United States--ostensibly to protect the nation from further Japanese aggression. The proclamations culminated in Executive Order 9066, which gave the…

  10. Learning Center Guide; Helene Fuld School of Nursing.

    ERIC Educational Resources Information Center

    Rabkin, Frieda H.

    For students at the Helene Fuld School of Nursing, Brooklyn, New York, a guide is provided to services of the school Learning Center. Noncirculating materials are listed and described, including reference books, reserve materials, magazines, the vertical file, and audiovisuals. Borrowing rules and fines are discussed. A guide is provided to the…

  11. Teaching and Learning with Hypervideo in Vocational Education and Training

    ERIC Educational Resources Information Center

    Cattaneo, Alberto A. P.; Nguyen, Anh Thu; Aprea, Carmela

    2016-01-01

    Audiovisuals offer increasing opportunities as teaching-and-learning materials while also confronting educators with significant challenges. Hypervideo provides one means of overcoming these challenges, offering new possibilities for interaction and support for reflective processes. However, few studies have investigated the instructional…

  12. Resource Based Learning: An Experience in Planning and Production.

    ERIC Educational Resources Information Center

    McAleese, Ray; Scobbie, John

    A 2-year project at the University of Aberdeen focused on the production of learning materials and the planning of audiovisual based instruction. Background information on the project examines its origins, the nature of course teams, and the evaluation of the five text-tape programs produced. The report specifies three project aims: (1) to produce…

  13. Learning Disabilities and the Auditory and Visual Matching Computer Program

    ERIC Educational Resources Information Center

    Tormanen, Minna R. K.; Takala, Marjatta; Sajaniemi, Nina

    2008-01-01

    This study examined whether audiovisual computer training without linguistic material had a remedial effect on different learning disabilities, like dyslexia and ADD (Attention Deficit Disorder). This study applied a pre-test-intervention-post-test design with students (N = 62) between the ages of 7 and 19. The computer training lasted eight weeks…

  14. Project Report ECLIPSE: European Citizenship Learning Program for Secondary Education

    ERIC Educational Resources Information Center

    Bombardelli, Olga

    2014-01-01

    This paper reports on a European project, the Comenius ECLIPSE project (European Citizenship Learning in a Programme for Secondary Education) developed by six European partners coordinated by the University of Trento in the years 2011-2014. ECLIPSE (co-financed by the EACEA--Education, Audiovisual and Culture Executive Agency) aims at developing,…

  15. Federal Audiovisual Policy Act. Hearing before a Subcommittee of the Committee on Government Operations, House of Representatives, Ninety-Eighth Congress, Second Session on H.R. 3325 to Establish in the Office of Management and Budget an Office to Be Known as the Office of Federal Audiovisual Policy, and for Other Purposes.

    ERIC Educational Resources Information Center

    Congress of the U. S., Washington, DC. House Committee on Government Operations.

    The views of private industry and government are offered in this report of a hearing on the Federal Audiovisual Policy Act, which would establish an office to coordinate federal audiovisual activity and require most audiovisual material produced for federal agencies to be acquired under contract from private producers. Testimony is included from…

  16. Audio-visual stimulation improves oculomotor patterns in patients with hemianopia.

    PubMed

    Passamonti, Claudia; Bertini, Caterina; Làdavas, Elisabetta

    2009-01-01

    Patients with visual field disorders often exhibit impairments in visual exploration and a typical defective oculomotor scanning behaviour. Recent evidence [Bolognini, N., Rasi, F., Coccia, M., & Làdavas, E. (2005b). Visual search improvement in hemianopic patients after audio-visual stimulation. Brain, 128, 2830-2842] suggests that systematic audio-visual stimulation of the blind hemifield can improve accuracy and search times in visual exploration, probably due to the stimulation of Superior Colliculus (SC), an important multisensory structure involved in both the initiation and execution of saccades. The aim of the present study is to verify this hypothesis by studying the effects of multisensory training on oculomotor scanning behaviour. Oculomotor responses during a visual search task and a reading task were studied before and after visual (control) or audio-visual (experimental) training, in a group of 12 patients with chronic visual field defects and 12 controls subjects. Eye movements were recorded using an infra-red technique which measured a range of spatial and temporal variables. Prior to treatment, patients' performance was significantly different from that of controls in relation to fixations and saccade parameters; after Audio-Visual Training, all patients reported an improvement in ocular exploration characterized by fewer fixations and refixations, quicker and larger saccades, and reduced scanpath length. Overall, these improvements led to a reduction of total exploration time. Similarly, reading parameters were significantly affected by the training, with respect to specific impairments observed in both left- and right-hemianopia readers. Our findings provide evidence that Audio-Visual Training, by stimulating the SC, may induce a more organized pattern of visual exploration due to an implementation of efficient oculomotor strategies. Interestingly, the improvement was found to be stable at a 1 year follow-up control session, indicating a long

  17. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory

    PubMed Central

    Hwang, Jaewon; Romanski, Lizabeth M.

    2015-01-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates has shown that ventral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC, with reversible cortical cooling, and examined performance when faces, vocalizations or both faces and vocalization had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. SIGNIFICANCE STATEMENT The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone. Our work highlights the importance of the primate VLPFC in the processing of faces and

  18. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    PubMed

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. PMID:26302946
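The redundant signals effect described above is, in its simplest mean-RT form, a comparison of the cross-modal condition against the faster of the two unimodal conditions. A minimal sketch of that comparison; the function name and all response-time data below are hypothetical, not from the study:

```python
import numpy as np

def redundant_signals_effect(rt_a, rt_v, rt_av):
    """Mean-RT form of the redundant signals effect (RSE): the
    cross-modal condition should be faster than the faster of the
    two unimodal conditions. Returns the gain in the same units."""
    fastest_unimodal = min(np.mean(rt_a), np.mean(rt_v))
    return fastest_unimodal - np.mean(rt_av)  # positive => RSE present

# Hypothetical response times in milliseconds
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 30, 200)   # auditory-only trials
rt_v = rng.normal(450, 30, 200)   # visual-only trials
rt_av = rng.normal(380, 30, 200)  # audio-visual trials

rse = redundant_signals_effect(rt_a, rt_v, rt_av)
```

Stricter evidence for integration is usually sought with the race-model inequality on the full RT distributions; the mean-based check here is only a first-pass screen.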

  19. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-01

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1). PMID:24274986
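Kriging is essentially Gaussian process regression: the prediction at a new geometry is a covariance-weighted combination of the training energies. The sketch below is a generic, simplified stand-in (an RBF covariance over synthetic descriptors and energies; nothing here reproduces the paper's actual multipole-moment models), ending with the same error summaries the abstract reports, the mean unsigned error and the fraction of predictions within 1 kJ mol(-1):

```python
import numpy as np

def kriging_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-6):
    """Kriging-style prediction via Gaussian process regression with an
    RBF covariance. A small 'noise' jitter keeps the Gram matrix
    numerically invertible."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length_scale ** 2))
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    k_star = rbf(X_test, X_train)
    return k_star @ np.linalg.solve(K, y_train)

# Hypothetical geometric descriptors -> electrostatic energies (kJ/mol)
rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, (50, 3))
y_train = np.sin(X_train).sum(axis=1)        # smooth stand-in energy surface
X_test = rng.uniform(-1, 1, (20, 3))
y_true = np.sin(X_test).sum(axis=1)

y_pred = kriging_predict(X_train, y_train, X_test, length_scale=0.8)
mue = np.abs(y_pred - y_true).mean()               # mean unsigned error
within_1 = np.mean(np.abs(y_pred - y_true) < 1.0)  # fraction within 1 kJ/mol
```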

  20. A computerized cataloging management system for health science audiovisuals.

    PubMed

    Metz, K S; Calhoun, J G; Hull, A L

    1981-10-01

    This paper describes the implementation of the Stanford Public Information and Retrieval System (SPIRES) by the University of Michigan Medical School Learning Resources Center. SPIRES is a bibliographic database management system that offers on-line search capabilities and retrieval of data in programmable formats. The Learning Resources Center utilizes SPIRES for the interactive retrieval of cataloging data, bibliographic compilations, and book catalog production. PMID:6170373

  1. Audio-Visual Stimulation in Conjunction with Functional Electrical Stimulation to Address Upper Limb and Lower Limb Movement Disorder.

    PubMed

    Kumar, Deepesh; Verma, Sunny; Bhattacharya, Sutapa; Lahiri, Uttama

    2016-06-13

    Neurological disorders often manifest themselves in the form of movement deficits on the part of the patient. Conventional rehabilitation approaches used to address these deficits, though powerful, are often monotonous in nature. Adequate audio-visual stimulation can prove to be motivational. In the research presented here we indicate the applicability of audio-visual stimulation to rehabilitation exercises to address at least some of the movement deficits of the upper and lower limbs. In addition to the audio-visual stimulation, we also use Functional Electrical Stimulation (FES). We further show the applicability of FES in conjunction with audio-visual stimulation, delivered through a VR-based platform, to the grasping skills of patients with movement disorders. PMID:27478568

  2. Audio-Visual Stimulation in Conjunction with Functional Electrical Stimulation to Address Upper Limb and Lower Limb Movement Disorder

    PubMed Central

    Kumar, Deepesh; Verma, Sunny; Bhattacharya, Sutapa; Lahiri, Uttama

    2016-01-01

    Neurological disorders often manifest themselves in the form of movement deficits on the part of the patient. Conventional rehabilitation approaches used to address these deficits, though powerful, are often monotonous in nature. Adequate audio-visual stimulation can prove to be motivational. In the research presented here we indicate the applicability of audio-visual stimulation to rehabilitation exercises to address at least some of the movement deficits of the upper and lower limbs. In addition to the audio-visual stimulation, we also use Functional Electrical Stimulation (FES). We further show the applicability of FES in conjunction with audio-visual stimulation, delivered through a VR-based platform, to the grasping skills of patients with movement disorders. PMID:27478568

  3. Use of Audiovisual Information in Speech Perception by Prelingually Deaf Children with Cochlear Implants: A First Report

    PubMed Central

    Lachs, Lorin; Pisoni, David B.; Kirk, Karen Iler

    2012-01-01

    Objective Although there has been a great deal of recent empirical work and new theoretical interest in audiovisual speech perception in both normal-hearing and hearing-impaired adults, relatively little is known about the development of these abilities and skills in deaf children with cochlear implants. This study examined how prelingually deafened children combine visual information available in the talker’s face with auditory speech cues provided by their cochlear implants to enhance spoken language comprehension. Design Twenty-seven hearing-impaired children who use cochlear implants identified spoken sentences presented under auditory-alone and audiovisual conditions. Five additional measures of spoken word recognition performance were used to assess auditory-alone speech perception skills. A measure of speech intelligibility was also obtained to assess the speech production abilities of these children. Results A measure of audiovisual gain, “Ra,” was computed using sentence recognition scores in auditory-alone and audiovisual conditions. Another measure of audiovisual gain, “Rv,” was computed using scores in visual-alone and audiovisual conditions. The results indicated that children who were better at recognizing isolated spoken words through listening alone were also better at combining the complementary sensory information about speech articulation available under audiovisual stimulation. In addition, we found that children who received more benefit from audiovisual presentation also produced more intelligible speech, suggesting a close link between speech perception and production and a common underlying linguistic basis for audiovisual enhancement effects. Finally, an examination of the distribution of children enrolled in Oral Communication (OC) and Total Communication (TC) indicated that OC children tended to score higher on measures of audiovisual gain, spoken word recognition, and speech intelligibility. Conclusions The relationships
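Gain scores like "Ra" and "Rv" above are commonly normalized by the room left for improvement over the unimodal baseline (the Sumby-and-Pollack-style formula). The abstract does not spell out the paper's exact definition, so the formula and scores below are a hedged sketch:

```python
def relative_gain(unimodal_pct, av_pct):
    """Normalized audiovisual gain: the improvement from adding the
    second modality, scaled by the room left for improvement over the
    unimodal score. One common formulation; the paper's exact
    definition of Ra/Rv may differ."""
    return (av_pct - unimodal_pct) / (100.0 - unimodal_pct)

# Hypothetical sentence-recognition scores (percent correct)
Ra = relative_gain(unimodal_pct=40.0, av_pct=70.0)  # gain over auditory-alone -> 0.5
Rv = relative_gain(unimodal_pct=20.0, av_pct=70.0)  # gain over visual-alone -> 0.625
```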

  4. An analysis of factors influencing complex water maze learning in rats: effects of task complexity, path order and escape assistance on performance following prenatal exposure to phenytoin.

    PubMed

    Vorhees, C V; Weisenburger, W P; Acuff-Smith, K D; Minck, D R

    1991-01-01

    Three hypotheses on factors determining performance in a complex water maze were tested in rats prenatally exposed to phenytoin. The hypotheses were: 1) that increasing maze complexity would better differentiate experimental effects; in particular, that an expanded version of a maze originally described by Biel would better differentiate groups than Biel's original design; 2) that path order is an important factor determining performance; specifically, that path sequence AB would better differentiate experimental groups from controls than the opposite order (sequence BA); and 3) that repeated trial failures interfere with learning, a problem putatively prevented by employing assisted (i.e., guided) escape. The specific prediction was that rats tested with assisted escape would learn faster and produce better group differentiation than rats tested with unassisted escape. Pregnant female Sprague-Dawley CD rats were gavaged on days 7-18 of gestation with propylene glycol alone (Control) or containing 100 or 200 mg/kg of phenytoin. Straight channel swimming trials followed by maze trials were begun on separate male/female offspring pairs from each litter on postnatal days 50, 70, or 90. The results confirmed hypothesis 1, i.e., the more complex maze better differentiated phenytoin-related group differences. This was true regardless of whether the phenytoin rats exhibiting circling were included in the analyses or not. The results disconfirmed hypothesis 2, i.e., that path order AB would better differentiate the groups than path order BA. Rather, the data supported the alternate hypothesis, that path order was not a significant determinant of prenatal drug-related maze deficits. This was unchanged regardless of whether phenytoin offspring exhibiting circling were or were not included in the analyses. The implication is that path B alone was sufficient to detect phenytoin's effects on maze performance. Finally, the overall results disconfirmed hypothesis 3, i.e., assisted escape

  5. Audio-Visual Aid in Teaching "Fatty Liver"

    ERIC Educational Resources Information Center

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-01-01

    Use of audio visual tools to aid in medical education is ever on a rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…

  6. AUDIOVISUAL LEADERSHIP CONFERENCE (9TH, LAKE OKOBOJI, 1963) SUMMARY REPORT.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

    A report of a conference held to analyze learning theory as it relates to new media and the learner is presented. The invitational conference was attended by representatives from public schools, universities, state departments of education, educational agencies, and industry. The knowledge explosion coupled with the current revolution in…

  7. The temporal binding window for audiovisual speech: Children are like little adults.

    PubMed

    Hillock-Dunn, Andrea; Grantham, D Wesley; Wallace, Mark T

    2016-07-29

    During a typical communication exchange, both auditory and visual cues contribute to speech comprehension. The influence of vision on speech perception can be measured behaviorally using a task where incongruent auditory and visual speech stimuli are paired to induce perception of a novel token reflective of multisensory integration (i.e., the McGurk effect). This effect is temporally constrained in adults, with illusion perception decreasing as the temporal offset between the auditory and visual stimuli increases. Here, we used the McGurk effect to investigate the development of the temporal characteristics of audiovisual speech binding in 7-24 year-olds. Surprisingly, results indicated that although older participants perceived the McGurk illusion more frequently, no age-dependent change in the temporal boundaries of audiovisual speech binding was observed. PMID:26920938

  8. Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy

    PubMed Central

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2013-01-01

    This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation point (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli, but presented in an audiovisual instead of an auditory-only manner. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy), but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimuli identification in both silence and noise. The implication of the results is discussed in terms of models for speech understanding. PMID:23801980
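In a gating paradigm, the isolation point is the shortest gate duration from which the listener's identification is correct and remains correct at every longer gate. A small illustrative implementation; the function name, gate durations, and responses are made up:

```python
def isolation_point(gate_durations_ms, responses, target):
    """Isolation point (IP): the shortest gate duration from which the
    response is correct and stays correct for all longer gates.
    Returns None if no such point exists."""
    ip = None
    for dur, resp in zip(gate_durations_ms, responses):
        if resp == target:
            if ip is None:
                ip = dur      # candidate IP: first correct gate in a run
        else:
            ip = None         # a later error invalidates the candidate
    return ip

gates = [100, 150, 200, 250, 300]
# Hypothetical responses across successively longer gates of the word "ga"
ip = isolation_point(gates, ["ba", "da", "ga", "ga", "ga"], "ga")  # 200 ms
```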

  9. A comparison between audio and audiovisual distraction techniques in managing anxious pediatric dental patients.

    PubMed

    Prabhakar, A R; Marwah, N; Raju, O S

    2007-01-01

    Pain is not the sole reason for fear of dentistry. Anxiety or the fear of the unknown during dental treatment is a major factor, and it has been a major concern for dentists for a long time. Therefore, the main aim of this study was to evaluate and compare two distraction techniques, viz, audio distraction and audiovisual distraction, in the management of anxious pediatric dental patients. Sixty children aged 4-8 years were divided into three groups. Each child had four dental visits--screening visit, prophylaxis visit, cavity preparation and restoration visit, and extraction visit. Each child's anxiety level at each visit was assessed using a combination of four measures: Venham's picture test, Venham's rating of clinical anxiety, pulse rate, and oxygen saturation. The values obtained were tabulated and subjected to statistical analysis. It was concluded that the audiovisual distraction technique was more effective in managing anxious pediatric dental patients than the audio distraction technique. PMID:18007104

  10. Audio-visual relaxation training for anxiety, sleep, and relaxation among Chinese adults with cardiac disease.

    PubMed

    Tsai, Sing-Ling

    2004-12-01

    The long-term effect of an audio-visual relaxation training (RT) treatment involving deep breathing, exercise, muscle relaxation, guided imagery, and meditation was compared with routine nursing care for reducing anxiety, improving sleep, and promoting relaxation in Chinese adults with cardiac disease. This research was a quasi-experimental, two-group, pretest-posttest study. A convenience sample of 100 cardiology patients (41 treatment, 59 control) admitted to one large medical center hospital in the Republic of China (ROC) was studied for 1 year. The hypothesized relationships were supported. RT significantly (p <.05) improved anxiety, sleep, and relaxation in the treatment group as compared to the control group. It appears audio-visual RT might be a beneficial adjunctive therapy for adult cardiac patients. However, considerable further work using stronger research designs is needed to determine the most appropriate instructional methods and the factors that contribute to long-term consistent practice of RT with Chinese populations. PMID:15514963

  11. Musicians have enhanced subcortical auditory and audiovisual processing of speech and music

    PubMed Central

    Musacchia, Gabriella; Sams, Mikko; Skoe, Erika; Kraus, Nina

    2007-01-01

    Musical training is known to modify cortical organization. Here, we show that such modifications extend to subcortical sensory structures and generalize to processing of speech. Musicians had earlier and larger brainstem responses than nonmusician controls to both speech and music stimuli presented in auditory and audiovisual conditions, evident as early as 10 ms after acoustic onset. Phase-locking to stimulus periodicity, which likely underlies perception of pitch, was enhanced in musicians and strongly correlated with length of musical practice. In addition, viewing videos of speech (lip-reading) and music (instrument being played) enhanced temporal and frequency encoding in the auditory brainstem, particularly in musicians. These findings demonstrate practice-related changes in the early sensory encoding of auditory and audiovisual information. PMID:17898180

  12. European Union RACE program contributions to digital audiovisual communications and services

    NASA Astrophysics Data System (ADS)

    de Albuquerque, Augusto; van Noorden, Leon; Badiqué, Eric

    1995-02-01

    The European Union RACE (R&D in advanced communications technologies in Europe) and the future ACTS (advanced communications technologies and services) programs have been contributing and continue to contribute to world-wide developments in audio-visual services. The paper focuses on research progress in: (1) Image data compression. Several methods of image analysis leading to the use of encoders based on improved hybrid DCT-DPCM (MPEG or not), object oriented, hybrid region/waveform or knowledge-based coding methods are discussed. (2) Program production in the aspects of 3D imaging, data acquisition, virtual scene construction, pre-processing and sequence generation. (3) Interoperability and multimedia access systems. The diversity of material available and the introduction of interactive or near- interactive audio-visual services led to the development of prestandards for video-on-demand (VoD) and interworking of multimedia services storage systems and customer premises equipment.

  13. Development of an audiovisual speech perception app for children with autism spectrum disorders

    PubMed Central

    Irwin, Julia; Preston, Jonathan; Brancazio, Lawrence; D'Angelo, Michael; Turcios, Jacqueline

    2015-01-01

    Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program delivered via an iPad app presents natural speech in the context of increasing noise, but supported with a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD ages 8–10 are presented showing that the children improved their performance on an untrained auditory speech-in-noise task. PMID:25313714

  14. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults

    PubMed Central

    Smayda, Kirsten E.; Van Engen, Kristin J.; Maddox, W. Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18–35) and thirty-three older adults (ages 60–90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when

  15. Anticipating action effects recruits audiovisual movement representations in the ventral premotor cortex.

    PubMed

    Bischoff, Matthias; Zentgraf, Karen; Pilgramm, Sebastian; Stark, Rudolf; Krüger, Britta; Munzert, Jörn

    2014-10-29

    When table tennis players anticipate the course of the ball while preparing their motor responses, they not only observe their opponents striking the ball but also listen to events such as the sound of racket-ball contact. Because visual stimuli can be detected more easily when accompanied by a sound, we assumed that complementary sensory audiovisual information would influence the anticipation of biological motion, especially when the racket-ball contact is not presented visually, but has to be inferred from continuous movement kinematics and an abrupt sound. Twenty-six observers were examined with fMRI while watching point-light displays (PLDs) of an opposing table tennis player. Their task was to anticipate the resultant ball flight. The sound was presented complementary to the veracious event or at a deviant time point in its kinematics. Results showed that participants performed best in the complementary condition. Using a region-of-interest approach, fMRI data showed that complementary audiovisual stimulation elicited higher activation in the left temporo-occipital middle temporal gyrus (MTGto), the left primary motor cortex, and the right anterior intraparietal sulcus (aIPS). Both hemispheres also revealed higher activation in the ventral premotor cortex (vPMC) and the pars opercularis of the inferior frontal gyrus (BA 44). Ranking the behavioral effect of complementary versus conflicting audiovisual information over participants revealed an association between the complementary information and higher activation in the right vPMC. We conclude that the recruitment of movement representations in the auditory and visual modalities in the vPMC can be influenced by task-relevant cross-modal audiovisual interaction. PMID:25463138

  16. Top-down attention regulates the neural expression of audiovisual integration.

    PubMed

    Morís Fernández, Luis; Visser, Maya; Ventura-Campos, Noelia; Ávila, César; Soto-Faraco, Salvador

    2015-10-01

    The interplay between attention and multisensory integration has proven to be a difficult question to tackle. There are almost as many studies showing that multisensory integration occurs independently from the focus of attention as studies implying that attention has a profound effect on integration. Addressing the neural expression of multisensory integration for attended vs. unattended stimuli can help disentangle this apparent contradiction. In the present study, we examine if selective attention to sound pitch influences the expression of audiovisual integration in both behavior and neural activity. Participants were asked to attend to one of two auditory speech streams while watching a pair of talking lips that could be congruent or incongruent with the attended speech stream. We measured behavioral and neural responses (fMRI) to multisensory stimuli under attended and unattended conditions while physical stimulation was kept constant. Our results indicate that participants recognized words more accurately from an auditory stream that was both attended and audiovisually (AV) congruent, thus reflecting a benefit due to AV integration. On the other hand, no enhancement was found for AV congruency when it was unattended. Furthermore, the fMRI results indicated that activity in the superior temporal sulcus (an area known to be related to multisensory integration) was contingent on attention as well as on audiovisual congruency. This attentional modulation extended beyond heteromodal areas to affect processing in areas classically recognized as unisensory, such as the superior temporal gyrus or the extrastriate cortex, and to non-sensory areas such as the motor cortex. Interestingly, attention to audiovisual incongruence triggered responses in brain areas related to conflict processing (i.e., the anterior cingulate cortex and the anterior insula). Based on these results, we hypothesize that AV speech integration can take place automatically only when both

  17. SU-E-J-235: Audiovisual Biofeedback Improves the Correlation Between Internal and External Respiratory Motion

    SciTech Connect

    Lee, D; Pollock, S; Keall, P; Greer, P; Ludbrook, J; Paganelli, C; Kim, T

    2015-06-15

    Purpose: External respiratory surrogates are often used to predict internal lung tumor motion for beam gating, but the assumed correlation between external and internal surrogates is not always verified, resulting in amplitude mismatches and time shifts. The purpose of this work was to test the hypothesis that audiovisual (AV) biofeedback improves the correlation between internal and external respiratory motion, in order to improve the accuracy of respiratory-gated treatments for lung cancer radiotherapy. Methods: In nine lung cancer patients, 2D coronal and sagittal cine-MR images were acquired across two MRI sessions (pre- and mid-treatment) with (1) free breathing (FB) and (2) AV biofeedback. External anterior-posterior (AP) respiratory motions of (a) the chest and (b) the abdomen were simultaneously acquired with a physiological measurement unit (PMU; 3T Skyra, Siemens Healthcare, Erlangen, Germany) and a real-time position management (RPM) system (Varian, Palo Alto, USA), respectively. Internal superior-inferior (SI) respiratory motions of (c) the lung tumor (i.e., the centroid of the auto-segmented lung tumor) and (d) the diaphragm (i.e., the upper liver dome) were measured from the individual cine-MR images across 32 datasets. The four respiratory motions were then synchronized with the cine-MR image acquisition times. Correlation coefficients were calculated between the time-varying traces of each nominated pair of respiratory motions: (1) chest-abdomen, (2) abdomen-diaphragm and (3) diaphragm-lung tumor. The three combinations were compared between FB and AV biofeedback. Results: Compared to FB, AV biofeedback improved the chest-abdomen correlation by 17% (p=0.005), from 0.75±0.23 to 0.90±0.05, and the abdomen-diaphragm correlation by 4% (p=0.058), from 0.91±0.11 to 0.95±0.05. Compared to FB, AV biofeedback improved the diaphragm-lung tumor correlation by 12% (p=0.023), from 0.65±0.21 to 0.74±0.16. Conclusions: Our results demonstrated that AV biofeedback significantly improved the correlation of internal and external respiratory motion, thus
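The correlations reported above are, at bottom, Pearson coefficients between pairs of synchronized motion traces; a time shift or amplitude mismatch between an external surrogate and the internal target pulls the coefficient below 1. A toy version with synthetic sinusoids standing in for the measured signals:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two synchronized motion traces."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

# Hypothetical traces sampled at the cine-MR frame times
t = np.linspace(0, 30, 300)                       # 30 s of breathing
chest = np.sin(2 * np.pi * 0.25 * t)              # external AP surrogate
tumor = 0.9 * np.sin(2 * np.pi * 0.25 * t - 0.3)  # internal SI motion, phase-shifted

r = pearson_r(chest, tumor)  # below 1 because of the time shift
```

Note that a pure amplitude mismatch alone would not lower r (Pearson correlation is scale-invariant); it is the time shift that degrades it.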

  18. Plasma membrane ordering agent pluronic F-68 (PF-68) reduces neurotransmitter uptake and release and produces learning and memory deficits in rats

    NASA Technical Reports Server (NTRS)

    Clarke, M. S.; Prendergast, M. A.; Terry, A. V. Jr

    1999-01-01

    A substantial body of evidence indicates that age-related changes in the fluidity and lipid composition of the plasma membrane contribute to cellular dysfunction in humans and other mammalian species. In the CNS, reductions in neuronal plasma membrane order (PMO) (i.e., increased plasma membrane fluidity) have been attributed to age as well as the presence of the beta-amyloid peptide-25-35, known to play an important role in the neuropathology of Alzheimer's disease (AD). These PMO increases may influence neurotransmitter synthesis, receptor binding, and second messenger systems as well as signal transduction pathways. The effects of neuronal PMO on learning and memory processes have not been adequately investigated, however. Based on the hypothesis that an increase in PMO may alter a number of aspects of synaptic transmission, we investigated several neurochemical and behavioral effects of the membrane ordering agent, PF-68. In cell culture, PF-68 (nmoles/mg SDS extractable protein) reduced [3H]norepinephrine (NE) uptake into differentiated PC-12 cells as well as reduced nicotine stimulated [3H]NE release. The compound (800-2400 microg/kg, i.p., resulting in nmoles/mg SDS extractable protein in the brain) decreased step-through latencies and increased the frequencies of crossing into the unsafe side of the chamber in inhibitory avoidance training. In the Morris water maze, PF-68 increased the latencies and swim distances required to locate a hidden platform and reduced the time spent and distance swam in the previous target quadrant during transfer (probe) trials. PF-68 did not impair performance of a well-learned working memory task, the rat delayed stimulus discrimination task (DSDT), however. Studies with 14C-labeled PF-68 indicated that significant (pmoles/mg wet tissue) levels of the compound entered the brain from peripheral (i.p.) injection. No PF-68-related changes were observed in swim speeds or in visual acuity tests in water maze experiments, rotorod

  19. Plasma Membrane Ordering Agent Pluronic F-68 (PF-68) Reduces Neurotransmitter Uptake and Release and Produces Learning and Memory Deficits in Rats

    PubMed Central

    Clarke, Mark S.F.; Prendergast, Mark A.; Terry, Alvin V.

    1999-01-01

    A substantial body of evidence indicates that age-related changes in the fluidity and lipid composition of the plasma membrane contribute to cellular dysfunction in humans and other mammalian species. In the CNS, reductions in neuronal plasma membrane order (PMO) (i.e., increased plasma membrane fluidity) have been attributed to age as well as the presence of the β-amyloid peptide-25-35, known to play an important role in the neuropathology of Alzheimer's disease (AD). These PMO increases may influence neurotransmitter synthesis, receptor binding, and second messenger systems as well as signal transduction pathways. The effects of neuronal PMO on learning and memory processes have not been adequately investigated, however. Based on the hypothesis that an increase in PMO may alter a number of aspects of synaptic transmission, we investigated several neurochemical and behavioral effects of the membrane ordering agent, PF-68. In cell culture, PF-68 (nmoles/mg SDS extractable protein) reduced [3H]norepinephrine (NE) uptake into differentiated PC-12 cells as well as reduced nicotine stimulated [3H]NE release. The compound (800–2400 μg/kg, i.p., resulting in nmoles/mg SDS extractable protein in the brain) decreased step-through latencies and increased the frequencies of crossing into the unsafe side of the chamber in inhibitory avoidance training. In the Morris water maze, PF-68 increased the latencies and swim distances required to locate a hidden platform and reduced the time spent and distance swam in the previous target quadrant during transfer (probe) trials. PF-68 did not impair performance of a well-learned working memory task, the rat delayed stimulus discrimination task (DSDT), however. Studies with 14C-labeled PF-68 indicated that significant (pmoles/mg wet tissue) levels of the compound entered the brain from peripheral (i.p.) injection. No PF-68-related changes were observed in swim speeds or in visual acuity tests in water maze experiments, rotorod

  20. Brain mechanisms that underlie the effects of motivational audiovisual stimuli on psychophysiological responses during exercise.

    PubMed

    Bigliassi, Marcelo; Silva, Vinícius B; Karageorghis, Costas I; Bird, Jonathan M; Santos, Priscila C; Altimari, Leandro R

    2016-05-01

    Motivational audiovisual stimuli such as music and video have been widely used in the realm of exercise and sport as a means by which to increase situational motivation and enhance performance. The present study addressed the mechanisms that underlie the effects of motivational stimuli on psychophysiological responses and exercise performance. Twenty-two participants completed fatiguing isometric handgrip-squeezing tasks under two experimental conditions (motivational audiovisual condition and neutral audiovisual condition) and a control condition. Electrical activity in the brain and working muscles was analyzed by use of electroencephalography and electromyography, respectively. Participants were asked to squeeze the dynamometer maximally for 30 s. A single-item motivation scale was administered after each squeeze. Results indicated that task performance and situational motivation were superior under the influence of motivational stimuli when compared to the other two conditions (~20% and ~25%, respectively). The motivational stimulus downregulated the predominance of low-frequency waves (theta) in the right frontal regions of the cortex (F8), and upregulated high-frequency waves (beta) in the central areas (C3 and C4). It is suggested that motivational sensory cues serve to readjust electrical activity in the brain; a mechanism by which the detrimental effects of fatigue on the efferent control of working muscles are ameliorated. PMID:26948160
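The up- and down-regulation of theta and beta activity reported above comes down to comparing spectral power within frequency bands across conditions and electrodes. A minimal, generic band-power computation on a synthetic signal; the sampling rate, bands, and data are illustrative, and real EEG pipelines would typically use Welch's method on epoched, artifact-rejected recordings:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of a single channel within [f_lo, f_hi) Hz,
    from a plain periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

fs = 250  # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
# Synthetic channel dominated by a 6 Hz (theta-band) oscillation plus noise
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(len(t))

theta = band_power(eeg, fs, 4, 8)    # theta band (4-8 Hz)
beta = band_power(eeg, fs, 13, 30)   # beta band (13-30 Hz)
```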