Science.gov

Sample records for order audiovisual learning

  1. Audiovisuals.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents information on a variety of audiovisual materials from government and nongovernment sources. Topics include aerodynamics and conditions of flight, airports, navigation, careers, history, medical factors, weather, films for classroom use, and others. (Author/SA)

  2. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193
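
    To make the paradigm concrete, here is a minimal Python sketch (not taken from the paper's materials) that builds one eight-item audiovisual trial at 8 Hz: the last four visual items either replicate the first four or are drawn anew, and the tone frequencies are either correlated with the luminances (Congruent) or shuffled (Incongruent). The luminance range and the luminance-to-frequency mapping are illustrative assumptions.

        # Illustrative sketch of the 8 Hz audiovisual sequence paradigm described above.
        # Luminance range and the luminance-to-frequency mapping are assumptions.
        import random

        def make_trial(congruent=True, repeated=True, rate_hz=8):
            # First half: four random luminance levels (arbitrary 0-1 units).
            first = [random.uniform(0.2, 0.9) for _ in range(4)]
            # Second half either replicates the first half or is drawn anew.
            second = first[:] if repeated else [random.uniform(0.2, 0.9) for _ in range(4)]
            luminances = first + second
            # Congruent: tone frequency tracks luminance; Incongruent: unrelated order.
            frequencies = [440.0 + 880.0 * lum for lum in luminances]   # assumed mapping, Hz
            if not congruent:
                random.shuffle(frequencies)
            onsets_s = [i / rate_hz for i in range(8)]                  # 8 Hz presentation
            return list(zip(onsets_s, luminances, frequencies))

        if __name__ == "__main__":
            for onset, lum, freq in make_trial(congruent=False, repeated=True):
                print(f"{onset:.3f} s  luminance={lum:.2f}  tone={freq:.0f} Hz")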

  4. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    PubMed Central

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning. PMID:26778999

  5. The Role of Audiovisual Mass Media News in Language Learning

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  6. Bayesian Calibration of Simultaneity in Audiovisual Temporal Order Judgments

    PubMed Central

    Yamamoto, Shinya; Miyazaki, Makoto; Iwano, Takayuki; Kitazawa, Shigeru

    2012-01-01

    After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when the lag adaptation was fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to “sound-first” for the pitch associated with sound-first stimuli, and to “light-first” for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to “light-first” for the pitch associated with sound-first stimuli, and to “sound-first” for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli. PMID:22792297
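
    The Bayesian-calibration account summarized above can be illustrated with the standard Gaussian-fusion computation: a noisy measurement of the audiovisual lag is combined with a prior centered on recently experienced lags, so the perceived lag is pulled toward the prior mean and the point of subjective simultaneity shifts in the opposite direction to classic lag adaptation. The Python snippet below is a minimal sketch of that textbook computation with made-up noise parameters, not the authors' model or fitted values.

        # Minimal Gaussian Bayesian-fusion sketch (illustrative parameter values only).
        def perceived_lag(measured_lag_ms, prior_mean_ms, sigma_meas=40.0, sigma_prior=60.0):
            """Posterior mean of the audiovisual lag under a Gaussian prior and likelihood."""
            w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)   # weight on the measurement
            return w * measured_lag_ms + (1.0 - w) * prior_mean_ms

        # After repeated sound-first exposure the prior mean sits at, say, -100 ms
        # (sound leading). A physically simultaneous pair (0 ms) is then perceived as
        # slightly sound-first, so subjective simultaneity shifts toward "light-first".
        print(perceived_lag(0.0, prior_mean_ms=-100.0))   # about -30.8 ms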

  7. Audiovisual Resources.

    ERIC Educational Resources Information Center

    Beasley, Augie E.; And Others

    1986-01-01

    Six articles on the use of audiovisual materials in the school library media center cover how to develop an audiovisual production center; audiovisual forms; a checklist for effective video/16mm use in the classroom; slides in learning; hazards of videotaping in the library; and putting audiovisuals on the shelf. (EJS)

  8. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    ERIC Educational Resources Information Center

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  9. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    ERIC Educational Resources Information Center

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four-time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  10. Effect on Intended and Incidental Learning from the Use of Learning Objectives with an Audiovisual Presentation.

    ERIC Educational Resources Information Center

    Main, Robert

    This paper reports a controlled field experiment conducted to determine the effects and interaction of five independent variables with an audiovisual slide-tape program: presence of learning objectives, location of learning objectives, type of knowledge, sex of learner, and retention of learning. Participants were university students in a general…

  11. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared with unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased relative to a fixed action-outcome delay. This suggests that participants learn action-based predictions of audiovisual outcomes and adapt their temporal perception of outcome events based on such predictions. PMID:27131076
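
    The "window of simultaneity" used here is typically estimated by fitting the proportion of "simultaneous" responses as a function of audiovisual onset asynchrony and taking the width of the fitted curve. The sketch below shows one common way to do this with a Gaussian fit in SciPy; the response data and the Gaussian form are assumptions for illustration, not the authors' data or analysis.

        # Illustrative fit of a simultaneity window (Gaussian over SOA); data are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(soa, pss, sigma, amp):
            return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

        soa_ms = np.array([-300, -200, -100, 0, 100, 200, 300], float)
        p_simultaneous = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.15])

        (pss, sigma, amp), _ = curve_fit(gauss, soa_ms, p_simultaneous, p0=[0.0, 100.0, 1.0])
        print(f"PSS = {pss:.1f} ms, window width (sigma) = {sigma:.1f} ms")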

  12. Something for Everyone? An Evaluation of the Use of Audio-Visual Resources in Geographical Learning in the UK.

    ERIC Educational Resources Information Center

    McKendrick, John H.; Bowden, Annabel

    1999-01-01

    Reports from a survey of geographers that canvassed experiences using audio-visual resources to support teaching. Suggests that geographical learning has embraced audio-visual resources and that they are employed effectively. Concludes that integration of audio-visual resources into mainstream curriculum is essential to ensure effective and…

  13. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    ERIC Educational Resources Information Center

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  14. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    ERIC Educational Resources Information Center

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response on the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  15. Audiovisuals and non-print learning resources in a health sciences library.

    PubMed

    Robinow, B H

    1979-03-01

    The MD undergraduate program at McMaster University, based entirely on self-instruction, requires the provision of all kinds of learning resources. How these are assembled and made available is described. Emphasis is placed on the practical library problems of cataloging, shelving, maintenance, and distribution of audiovisual materials including pathology specimens and 'problem boxes' as well as the more usual films, videotapes and slide/tape sets. Evaluation is discussed briefly. PMID:85624

  16. Influence of Audio-Visual Presentations on Learning Abstract Concepts.

    ERIC Educational Resources Information Center

    Lai, Shu-Ling

    2000-01-01

    Describes a study of college students that investigated whether various types of visual illustrations influenced abstract concept learning when combined with audio instruction. Discusses results of analysis of variance and pretest posttest scores in relation to learning performance, attitudes toward the computer-based program, and differences in…

  17. Isotropic sequence order learning.

    PubMed

    Porr, Bernd; Wörgötter, Florentin

    2003-04-01

    In this article, we present an isotropic unsupervised algorithm for temporal sequence learning. No special reward signal is used, so all inputs are completely isotropic. All input signals are bandpass filtered before converging onto a linear output neuron. All synaptic weights change according to the correlation of the bandpass-filtered inputs with the derivative of the output. We investigate the algorithm in an open- and a closed-loop condition, the latter being defined by embedding the learning system into a behavioral feedback loop. In the open-loop condition, we find that the linear structure of the algorithm allows analytically calculating the shape of the weight change, which is strictly heterosynaptic and follows the shape of the weight change curves found in spike-time-dependent plasticity. Furthermore, we show that synaptic weights stabilize automatically when no more temporal differences exist between the inputs, without additional normalizing measures. In the second part of this study, the algorithm is placed in an environment that leads to a closed sensor-motor loop. To this end, a robot is programmed with a prewired retraction reflex in response to collisions. Through isotropic sequence order (ISO) learning, the robot achieves collision avoidance by learning the correlation between its early range-finder signals and the later occurring collision signal. Synaptic weights stabilize at the end of learning, as theoretically predicted. Finally, we discuss the relation of ISO learning to other drive reinforcement models and to the commonly used temporal difference learning algorithm. This study is followed up by a mathematical analysis of the closed-loop situation in the companion article in this issue, "ISO Learning Approximates a Solution to the Inverse-Controller Problem in an Unsupervised Behavioral Paradigm" (pp. 865-884). PMID:12689389
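
    The learning rule described in this abstract can be stated compactly: each synaptic weight changes in proportion to the correlation between its bandpass-filtered input and the temporal derivative of the output, dw_i/dt = mu * u_i * v'. The Python sketch below implements that rule in discrete time for one reflex input and one earlier, predictive input; the filter settings, learning rate, and 20-step lead are illustrative assumptions, not the parameters used in the article.

        # Minimal discrete-time sketch of ISO learning: dw_i = mu * u_i * dv.
        import numpy as np
        from scipy.signal import butter, lfilter

        def bandpass(x, low=0.01, high=0.2):
            b, a = butter(2, [low, high], btype="band")   # normalized band edges (assumed)
            return lfilter(b, a, x)

        rng = np.random.default_rng(0)
        T = 2000
        x0 = rng.standard_normal(T)       # "late" signal, e.g. the collision reflex input
        x1 = np.roll(x0, -20)             # "early" signal: a copy that precedes x0 by 20 steps

        u = np.vstack([bandpass(x0), bandpass(x1)])   # bandpass-filtered inputs
        w = np.array([1.0, 0.0])                      # the reflex input keeps a fixed weight
        mu = 0.001

        v_prev = 0.0
        for t in range(T):
            v = w @ u[:, t]               # linear output neuron
            dv = v - v_prev               # derivative of the output
            w[1] += mu * u[1, t] * dv     # ISO rule: only the predictive input learns
            v_prev = v

        print("learned weight for the earlier input:", round(w[1], 4))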

  18. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    PubMed

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes. PMID:23893940

  19. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  20. Use of High-Definition Audiovisual Technology in a Gross Anatomy Laboratory: Effect on Dental Students' Learning Outcomes and Satisfaction.

    PubMed

    Ahmad, Maha; Sleiman, Naama H; Thomas, Maureen; Kashani, Nahid; Ditmyer, Marcia M

    2016-02-01

    Laboratory cadaver dissection is essential for three-dimensional understanding of anatomical structures and variability, but there are many challenges to teaching gross anatomy in medical and dental schools, including a lack of available space and qualified anatomy faculty. The aim of this study was to determine the efficacy of high-definition audiovisual educational technology in the gross anatomy laboratory in improving dental students' learning outcomes and satisfaction. Exam scores were compared for two classes of first-year students at one U.S. dental school: 2012-13 (no audiovisual technology) and 2013-14 (audiovisual technology), and section exams were used to compare differences between semesters. Additionally, an online survey was used to assess the satisfaction of students who used the technology. All 284 first-year students in the two years (2012-13 N=144; 2013-14 N=140) participated in the exams. Of the 140 students in the 2013-14 class, 63 completed the survey (45% response rate). The results showed that those students who used the technology had higher scores on the laboratory exams than those who did not use it, and students in the winter semester scored higher (90.17±0.56) than in the fall semester (82.10±0.68). More than 87% of those surveyed strongly agreed or agreed that the audiovisual devices represented anatomical structures clearly in the gross anatomy laboratory. These students reported an improved experience in learning and understanding anatomical structures, found the laboratory to be less overwhelming, and said they were better able to follow dissection instructions and understand details of anatomical structures with the new technology. Based on these results, the study concluded that the ability to provide the students a clear view of anatomical structures and high-quality imaging had improved their learning experience. PMID:26834129

  1. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    PubMed

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI, is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices, in LLI children after training. PMID:26671710

  2. Problem Order Implications for Learning

    ERIC Educational Resources Information Center

    Li, Nan; Cohen, William W.; Koedinger, Kenneth R.

    2013-01-01

    The order of problems presented to students is an important variable that affects learning effectiveness. Previous studies have shown that solving problems in a blocked order, in which all problems of one type are completed before the student is switched to the next problem type, results in less effective performance than does solving the problems…

  3. Arousal and Reminiscence in Learning From Color and Black/White Audio-Visual Presentations.

    ERIC Educational Resources Information Center

    Farley, Frank H.; Grant, Alfred D.

    Reminiscence, or an increase in retention scores from a short-term to a long-term retention test, has been shown in some previous work to be a significant function of arousal. Previous studies of the effects of color versus black-and-white audiovisual presentations have generally used film or television and have found no facilitating effect of color on…

  4. Audiovisual Script Writing.

    ERIC Educational Resources Information Center

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  5. Lecture Hall and Learning Design: A Survey of Variables, Parameters, Criteria and Interrelationships for Audio-Visual Presentation Systems and Audience Reception.

    ERIC Educational Resources Information Center

    Justin, J. Karl

    Variables and parameters affecting architectural planning and audiovisual systems selection for lecture halls and other learning spaces are surveyed. Interrelationships of factors are discussed, including--(1) design requirements for modern educational techniques as differentiated from cinema, theater or auditorium design, (2) general hall…

  6. Adult Learning Strategies and Approaches (ALSA). Resources for Teachers of Adults. A Handbook of Practical Advice on Audio-Visual Aids and Educational Technology for Tutors and Organisers.

    ERIC Educational Resources Information Center

    Cummins, John; And Others

    This handbook is part of a British series of publications written for part-time tutors, volunteers, organizers, and trainers in the adult continuing education and training sectors. It offers practical advice on audiovisual aids and educational technology for tutors and organizers. The first chapter discusses how one learns. Chapter 2 addresses how…

  7. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  8. Learning one-to-many mapping functions for audio-visual integrated perception

    NASA Astrophysics Data System (ADS)

    Lim, Jung-Hui; Oh, Do-Kwan; Lee, Soo-Young

    2010-04-01

    In noisy environments, human speech perception utilizes visual lip-reading as well as auditory phonetic classification. This audio-visual integration may be done by combining the two sensory features at an early stage. Top-down attention may also integrate the two modalities. For the sensory feature fusion we introduce mapping functions between the audio and visual manifolds. In particular, we present an algorithm that provides a one-to-many mapping function for the video-to-audio mapping. The top-down attention is also presented to integrate both the sensory features and the classification results of both modalities, which is able to explain the McGurk effect. Each classifier is separately implemented by a Hidden Markov Model (HMM), but the two classifiers are combined at the top level and interact through the top-down attention.
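
    As a rough, generic illustration of what a one-to-many video-to-audio mapping can mean in practice (a nearest-neighbour stand-in, not the algorithm proposed in the paper), the sketch below returns the k audio feature vectors whose paired visual features lie closest to a visual query, so a single lip shape maps to several candidate acoustic features. The feature dimensions and the random data are assumptions.

        # Generic one-to-many retrieval stand-in (not the paper's algorithm).
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(1)
        visual_feats = rng.standard_normal((500, 10))   # e.g. lip-shape features (assumed)
        audio_feats = rng.standard_normal((500, 12))    # paired acoustic features (assumed)

        nn = NearestNeighbors(n_neighbors=5).fit(visual_feats)

        def video_to_audio_candidates(visual_query, k=5):
            """Return k candidate audio feature vectors for one visual observation."""
            _, idx = nn.kneighbors(visual_query.reshape(1, -1), n_neighbors=k)
            return audio_feats[idx[0]]                  # one visual input -> many audio outputs

        print(video_to_audio_candidates(visual_feats[0]).shape)   # (5, 12)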

  9. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  10. Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds.

    PubMed

    Jesse, Alexandra; Johnson, Elizabeth K

    2016-05-01

    Analyses of caregiver-child communication suggest that an adult tends to highlight objects in a child's visual scene by moving them in a manner that is temporally aligned with the adult's speech productions. Here, we used the looking-while-listening paradigm to examine whether 25-month-olds use audiovisual temporal alignment to disambiguate and learn novel word-referent mappings in a difficult word-learning task. Videos of two equally interesting and animated novel objects were simultaneously presented to children, but the movement of only one of the objects was aligned with an accompanying object-labeling audio track. No social cues (e.g., pointing, eye gaze, touch) were available to the children because the speaker was edited out of the videos. Immediately afterward, toddlers were presented with still images of the two objects and asked to look at one or the other. Toddlers looked reliably longer to the labeled object, demonstrating their acquisition of the novel word-referent mapping. A control condition showed that children's performance was not solely due to the single unambiguous labeling that had occurred at experiment onset. We conclude that the temporal link between a speaker's utterances and the motion they imposed on the referent object helps toddlers to deduce a speaker's intended reference in a difficult word-learning scenario. In combination with our previous work, these findings suggest that intersensory redundancy is a source of information used by language users of all ages. That is, intersensory redundancy is not just a word-learning tool used by young infants. PMID:26765249

  11. Enhanced Multisensory Integration and Motor Reactivation after Active Motor Learning of Audiovisual Associations

    ERIC Educational Resources Information Center

    Butler, Andrew J.; James, Thomas W.; James, Karin Harman

    2011-01-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent…

  12. Audiovisual Interaction

    NASA Astrophysics Data System (ADS)

    Möttönen, Riikka; Sams, Mikko

    Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.

  13. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  14. Learning with Hyperlinked Videos--Design Criteria and Efficient Strategies for Using Audiovisual Hypermedia

    ERIC Educational Resources Information Center

    Zahn, Carmen; Barquero, Beatriz; Schwan, Stephan

    2004-01-01

    In this article, we discuss the results of an experiment in which we studied two apparently conflicting classes of design principles for instructional hypervideos: (1) those principles derived from work on multimedia learning that emphasize spatio-temporal contiguity and (2) those originating from work on hypermedia learning that favour…

  15. Effects of Audiovisual Stimuli on Learning through Microcomputer-Based Class Presentation.

    ERIC Educational Resources Information Center

    Hativa, Nira; Reingold, Aliza

    1987-01-01

    Effectiveness of two versions of computer software used as an electronic blackboard to present geometric concepts to ninth grade students was compared. The experimental version incorporated color, animation, and nonverbal sounds as stimuli; the no-stimulus version was monochrome. Both immediate and delayed learning were significantly better for…

  16. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. PMID:27003546

  17. Virtual Attendance: Analysis of an Audiovisual over IP System for Distance Learning in the Spanish Open University (UNED)

    ERIC Educational Resources Information Center

    Vazquez-Cano, Esteban; Fombona, Javier; Fernandez, Alberto

    2013-01-01

    This article analyzes a system of virtual attendance, called "AVIP" (AudioVisual over Internet Protocol), at the Spanish Open University (UNED) in Spain. UNED, the largest open university in Europe, is the pioneer in distance education in Spain. It currently has more than 300,000 students, 1,300 teachers, and 6,000 tutors all over the…

  18. Manifold Learning by Preserving Distance Orders

    PubMed Central

    Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-01-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis. PMID:25045195
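
    The objective described here is closely related to non-metric multidimensional scaling, which likewise preserves the rank order of pairwise distances rather than their exact values. As a stand-in for the idea (not the authors' RBF-constrained algorithm), the sketch below runs non-metric MDS from scikit-learn on toy data and reports the percentage of violated distance orders, the metric mentioned in the abstract.

        # Order preservation illustrated with non-metric MDS (a stand-in, not the
        # RBF-constrained method proposed in the paper). Data are synthetic.
        import numpy as np
        from itertools import combinations
        from scipy.spatial.distance import pdist
        from sklearn.manifold import MDS

        rng = np.random.default_rng(2)
        X = rng.standard_normal((40, 5))                  # toy high-dimensional data

        embedding = MDS(n_components=2, metric=False,     # non-metric: preserve distance orders
                        dissimilarity="euclidean", random_state=0).fit_transform(X)

        d_high, d_low = pdist(X), pdist(embedding)

        # A distance order is violated when the rank order of two pairwise distances
        # flips between the original space and the low-dimensional embedding.
        violated = sum((d_high[i] - d_high[j]) * (d_low[i] - d_low[j]) < 0
                       for i, j in combinations(range(len(d_high)), 2))
        total = len(d_high) * (len(d_high) - 1) // 2
        print(f"violated distance orders: {100.0 * violated / total:.1f}%")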

  19. The effect of task order predictability in audio-visual dual task performance: Just a central capacity limitation?

    PubMed Central

    Töllner, Thomas; Strobach, Tilo; Schubert, Torsten; Müller, Hermann J.

    2012-01-01

    In classic Psychological-Refractory-Period (PRP) dual-task paradigms, decreasing stimulus onset asynchronies (SOA) between the two tasks typically lead to increasing reaction times (RT) to the second task and, when task order is non-predictable, to prolonged RTs to the first task. Traditionally, both RT effects have been advocated to originate exclusively from the dynamics of a central bottleneck. By focusing on two specific electroencephalographic brain responses directly linkable to perceptual or motor processing stages, respectively, the present study aimed to provide a more detailed picture as to the origin(s) of these behavioral PRP effects. In particular, we employed 2-alternative forced-choice (2AFC) tasks requiring participants to identify the pitch of a tone (high versus low) in the auditory, and the orientation of a target object (vertical versus horizontal) in the visual, task, with task order being either predictable or non-predictable. Our findings show that task order predictability (TOP) and inter-task SOA interactively determine the speed of (visual) perceptual processes (as indexed by the PCN timing) for both the first and the second task. By contrast, motor response execution times (as indexed by the LRP timing) are influenced independently by TOP for the first, and SOA for the second, task. Overall, this set of findings complements classical as well as advanced versions of the central bottleneck model by providing electrophysiological evidence for modulations of both perceptual and motor processing dynamics that, in summation with central capacity limitations, give rise to the behavioral PRP outcome. PMID:22973208

  20. Evaluating audio-visual and computer programs for classroom use.

    PubMed

    Van Ort, S

    1989-01-01

    Appropriate faculty decisions regarding adoption of audiovisual and computer programs are critical to the classroom use of these learning materials. The author describes the decision-making process in one college of nursing and the adaptation of an evaluation tool for use by faculty in reviewing audiovisual and computer programs. PMID:2467237

  1. Use of Audiovisual Texts in University Education Process

    ERIC Educational Resources Information Center

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  2. Principles of Managing Audiovisual Materials and Equipment. Second Revised Edition.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Biomedical Library.

    This manual offers information on a wide variety of health-related audiovisual materials (AVs) in many formats: video, motion picture, slide, filmstrip, audiocassette, transparencies, microfilm, and computer assisted instruction. Intended for individuals who are just learning about audiovisual materials and equipment management, the manual covers…

  3. Application and Operation of Audiovisual Equipment in Education.

    ERIC Educational Resources Information Center

    Pula, Fred John

    Interest in audiovisual aids in education has been increased by the shortage of classrooms and good teachers and by the modern predisposition toward learning by visual concepts. Effective utilization of audiovisual materials and equipment depends, most importantly, on adequate preparation of the teacher in operating equipment and in coordinating…

  4. Audiovisual Mass Media and Education. TTW 27/28.

    ERIC Educational Resources Information Center

    van Stapele, Peter, Ed.; Sutton, Clifford C., Ed.

    1989-01-01

    The 15 articles in this special issue focus on learning about the audiovisual mass media and education, especially television and film, in relation to various pedagogical and didactical questions. Individual articles are: (1) "Audiovisual Mass Media for Education in Pakistan: Problems and Prospects" (Ahmed Noor Kahn); (2) "The Role of the…

  5. THE COST OF AUDIOVISUAL INSTRUCTION.

    ERIC Educational Resources Information Center

    1964

    A report of a survey on the cost of audiovisual instruction in the nation's public elementary and secondary schools during 1962-63 and 1963-64 was presented. Included were the total expenditures for audiovisual instruction and specific expenditures for audiovisual salaries, audiovisual equipment, and film rentals. Medians were computed for (1) the…

  6. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    ERIC Educational Resources Information Center

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  7. Audio/Visual Ratios in Commercial Filmstrips.

    ERIC Educational Resources Information Center

    Gulliford, Nancy L.

    Developed by the Westinghouse Electric Corporation, Video Audio Compressed (VIDAC) is a compressed time, variable rate, still picture television system. This technology made it possible for a centralized library of audiovisual materials to be transmitted over a television channel in very short periods of time. In order to establish specifications…

  8. Audiovisual integration facilitates monkeys' short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans. PMID:27010716

  9. Utilizing New Audiovisual Resources

    ERIC Educational Resources Information Center

    Miller, Glen

    1975-01-01

    The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids for high school classroom instruction in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)

  10. AUDIOVISUAL SERVICES CATALOG.

    ERIC Educational Resources Information Center

    Stockton Unified School District, CA.

    A catalog has been prepared to help teachers select audiovisual materials which might be helpful in elementary classrooms. Included are filmstrips, slides, records, study prints, films, tape recordings, and science equipment. Teachers are reminded that they are not limited to use of the suggested materials. Appropriate grade levels have been…

  11. Audiovisual Techniques Handbook.

    ERIC Educational Resources Information Center

    Hess, Darrel

    This handbook focuses on the use of 35mm slides for audiovisual presentations, particularly as an alternative to the more expensive and harder to produce medium of video. Its point of reference is creating slide shows about experiences in the Peace Corps; however, recommendations offered about both basic production procedures and enhancements are…

  12. Audiovisual Materials in Mathematics.

    ERIC Educational Resources Information Center

    Raab, Joseph A.

    This pamphlet lists five thousand current, readily available audiovisual materials in mathematics. These are grouped under eighteen subject areas: Advanced Calculus, Algebra, Arithmetic, Business, Calculus, Charts, Computers, Geometry, Limits, Logarithms, Logic, Number Theory, Probability, Solid Geometry, Slide Rule, Statistics, Topology, and…

  13. Selected Mental Health Audiovisuals.

    ERIC Educational Resources Information Center

    National Inst. of Mental Health (DHEW), Rockville, MD.

    Presented are approximately 2,300 abstracts on audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…

  14. Promoting Higher Order Thinking Skills Using Inquiry-Based Learning

    ERIC Educational Resources Information Center

    Madhuri, G. V.; Kantamreddi, V. S. S. N; Prakash Goteti, L. N. S.

    2012-01-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in…

  15. Rapid, generalized adaptation to asynchronous audiovisual speech

    PubMed Central

    Van der Burg, Erik; Goodbourn, Patrick T.

    2015-01-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790
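
    The inter-trial analysis described above can be illustrated in a few lines: trials are split by the modality order of the preceding trial, and a point of subjective simultaneity (PSS) is estimated separately for each subset, so rapid recalibration shows up as a difference between the two PSS estimates. The sketch below uses synthetic responses and a simple cumulative-Gaussian, temporal-order-style fit purely for illustration; it is not the authors' data, task, or fitting procedure.

        # Generic inter-trial PSS analysis sketch (synthetic data, not the study's).
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def cum_gauss(soa, pss, sigma):
            return norm.cdf(soa, loc=pss, scale=sigma)

        rng = np.random.default_rng(3)
        n = 2000
        soa = rng.choice([-240, -120, -60, 0, 60, 120, 240], size=n)   # ms, vision minus audio
        prev_vis_led = rng.random(n) < 0.5                             # modality order on trial t-1

        # Simulate a small PSS shift toward the preceding trial's asynchrony direction.
        true_pss = np.where(prev_vis_led, 20.0, -20.0)
        resp_vision_first = rng.random(n) < cum_gauss(soa, true_pss, 80.0)

        for label, mask in [("after vision-led trial", prev_vis_led),
                            ("after audio-led trial", ~prev_vis_led)]:
            (pss, sigma), _ = curve_fit(cum_gauss, soa[mask],
                                        resp_vision_first[mask].astype(float),
                                        p0=[0.0, 100.0])
            print(f"{label}: PSS = {pss:.1f} ms")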

  17. Learned audio-visual cross-modal associations in observed piano playing activate the left planum temporale. An fMRI study.

    PubMed

    Hasegawa, Takehiro; Matsuki, Ken-Ichi; Ueno, Takashi; Maeda, Yasuhiro; Matsue, Yoshihiko; Konishi, Yukuo; Sadato, Norihiro

    2004-08-01

    Lip reading is known to activate the planum temporale (PT), a brain region which may integrate visual and auditory information. To find out whether other types of learned audio-visual integration occur in the PT, we investigated "key-touch reading" using functional magnetic resonance imaging (fMRI). As well-trained pianists are able to identify pieces of music by watching the key-touching movements of the hands, we hypothesised that the visual information of observed sequential finger movements is transformed into the auditory modality during "key-touch reading" as is the case during lip reading. We therefore predicted activation of the PT during key-touch reading. Twenty-six healthy right-handed volunteers were recruited for fMRI. Of these, 7 subjects had never experienced piano training (naïve group), 10 had a little experience of piano playing (less trained group), and the remaining 9 had been trained for more than 8 years (well trained group). During task periods, subjects were required to view the bimanual hand movements of a piano player making key presses. During control periods, subjects viewed the same hands sliding from side to side without tapping movements of the fingers. No sound was provided. Sequences of key presses during task periods consisted of pieces of familiar music, unfamiliar music, or random sequences. Well-trained subjects were able to identify the familiar music, whereas less-trained subjects were not. The left PT of the well-trained subjects was equally activated by observation of familiar music, unfamiliar music, and random sequences. The naïve and less trained groups did not show activation of the left PT during any of the tasks. These results suggest that PT activation reflects a learned process. As the activation was elicited by viewing key pressing actions regardless of whether they constituted a piece of music, the PT may be involved in processes that occur prior to the identification of a piece of music, that is, mapping the

  18. Time and Order Effects on Causal Learning

    ERIC Educational Resources Information Center

    Alvarado, Angelica; Jara, Elvia; Vila, Javier; Rosas, Juan M.

    2006-01-01

    Five experiments were conducted to explore trial order and retention interval effects upon causal predictive judgments. Experiment 1 found that participants show a strong effect of trial order when a stimulus was sequentially paired with two different outcomes compared to a condition where both outcomes were presented intermixed. Experiment 2…

  19. Variable Affix Order: Grammar and Learning

    ERIC Educational Resources Information Center

    Ryan, Kevin M.

    2010-01-01

    While affix ordering often reflects general syntactic or semantic principles, it can also be arbitrary or variable. This article develops a theory of morpheme ordering based on local morphotactic restrictions encoded as weighted bigram constraints. I examine the formal properties of morphotactic systems, including arbitrariness, nontransitivity,…

  20. Researching Embodied Learning by Using Videographic Participation for Data Collection and Audiovisual Narratives for Dissemination--Illustrated by the Encounter between Two Acrobats

    ERIC Educational Resources Information Center

    Degerbøl, Stine; Nielsen, Charlotte Svendler

    2015-01-01

    The article concerns doing ethnography in education and it reflects upon using "videographic participation" for data collection and the concept of "audiovisual narratives" for dissemination, which is inspired by the idea of developing academic video. The article takes a narrative approach to qualitative research and presents a…

  1. Evaluating an Experimental Audio-Visual Module Programmed to Teach a Basic Anatomical and Physiological System.

    ERIC Educational Resources Information Center

    Federico, Pat-Anthony

    The learning efficiency and effectiveness of teaching an anatomical and physiological system to Air Force enlisted trainees utilizing an experimental audiovisual programed module was compared to that of a commercial linear programed text. It was demonstrated that the audiovisual programed approach to training was more efficient than and equally as…

  2. The World of Audiovisual Education: Its Impact on Libraries and Librarians.

    ERIC Educational Resources Information Center

    Ely, Donald P.

    As the field of educational technology developed, the field of library science became increasingly concerned about audiovisual media. School libraries have made significant developments in integrating audiovisual media into traditional programs, and are becoming learning resource centers with a variety of media; academic and public libraries are…

  3. Rapid temporal recalibration is unique to audiovisual stimuli.

    PubMed

    Van der Burg, Erik; Orchard-Mills, Emily; Alais, David

    2015-01-01

    Following prolonged exposure to asynchronous multisensory signals, the brain adapts to reduce the perceived asynchrony. Here, in three separate experiments, participants performed a synchrony judgment task on audiovisual, audiotactile or visuotactile stimuli and we used inter-trial analyses to examine whether temporal recalibration occurs rapidly on the basis of a single asynchronous trial. Even though all combinations used the same subjects, task and design, temporal recalibration occurred for audiovisual stimuli (i.e., the point of subjective simultaneity depended on the preceding trial's modality order), but none occurred when the same auditory or visual event was combined with a tactile event. Contrary to findings from prolonged adaptation studies showing recalibration for all three combinations, we show that rapid, inter-trial recalibration is unique to audiovisual stimuli. We conclude that recalibration occurs at two different timescales for audiovisual stimuli (fast and slow), but only on a slow timescale for audiotactile and visuotactile stimuli. PMID:25200176
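
    The inter-trial analysis described above can be illustrated with a minimal sketch (column names such as soa_ms and judged_sync are assumptions, not the authors' actual variable names): synchrony-judgment trials are split by the modality order of the preceding trial, and the point of subjective simultaneity (PSS) is estimated separately for each split.

        # Minimal sketch of an inter-trial recalibration analysis (assumed column names):
        # trials are split by the modality order of the PRECEDING trial, and the PSS is
        # estimated as the response-weighted mean SOA of "synchronous" judgments.
        import numpy as np
        import pandas as pd

        def pss_by_previous_order(trials: pd.DataFrame) -> pd.Series:
            """trials needs 'soa_ms' (negative = auditory lead) and 'judged_sync' (0/1)."""
            t = trials.copy()
            t["prev_order"] = np.where(t["soa_ms"].shift(1) < 0, "auditory_lead", "visual_lead")
            t = t.iloc[1:]  # the first trial has no predecessor
            weighted = t["soa_ms"] * t["judged_sync"]
            return weighted.groupby(t["prev_order"]).sum() / t.groupby("prev_order")["judged_sync"].sum()

        # A PSS that moves toward the preceding trial's modality order (more negative after an
        # auditory-lead trial, more positive after a visual-lead trial) indicates rapid recalibration.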

  4. [Cultural heritage and audiovisual creation in the Arab world].

    PubMed

    Aziza, M

    1979-01-01

    Audiovisual creation in Arab countries faces problems arising from the use of imported techniques to reconstitute or transform local reality. Arab audiovisual producers see these techniques as an easy and efficient way to reproduce reality or to construct an artificial universe by convention. Sometimes audiovisuals carry an absolute power of suggestion; at other times these techniques meet total incredulity. From the point of view of diffusion, audiovisuals in the Arab world have a very specific status. The effects of television, studied by Western researchers in their own cultural environment, are not reproduced in the same fashion in the Arab cultural world. In the Arab world, the word still very often competes successfully with the picture, even after the appearance and adoption of mass media. Finally, a noteworthy situation results from a linguistic phenomenon specific to the Arab world: the existence of two communication languages, one noble but little used, the other dialectal but popular. In all Arab countries the news, the most political of programs, is broadcast in the classical language, despite the risk of distorted meaning among the least educated viewers; the reason is probably that classical Arabic enjoys a sacred status. Arab audiovisual production faces several obstacles to its full and autonomous realization. The contribution of Arab audiovisual producers is relatively modest compared with some other areas of cultural creation. Arab film-making increasingly seeks the cooperation of contemporary writers, and contemporary literature is a considerable source for the renewal of Arab audiovisual expression. A relationship between film and popular cultural heritage could usefully be established in both directions; audiovisuals should treat popular cultural manifestations as a global social fact on several significant levels. PMID:12261391

  5. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it promises to contribute to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and to advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is the process that enables humans to assess affective states robustly and flexibly. To capture the richness and subtlety of human emotional behavior, the computer should likewise be able to integrate information from multiple sensors. In this paper we present our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays, and describe promising methods for integrating information from the audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  6. School Building Design and Audio-Visual Resources.

    ERIC Educational Resources Information Center

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  7. Improving physician practice efficiency by learning lab test ordering pattern.

    PubMed

    Cai, Peng; Cao, Feng; Ni, Yuan; Shen, Weijia; Zheng, Tao

    2013-01-01

    Electronic medical record (EMR) systems are widely used in physician practice. In China, physicians are under time pressure to provide care to many patients in a short period, and improving practice efficiency is a promising way to mitigate this predicament. During an encounter, ordering lab tests is one of the most frequent actions in the EMR system. In this paper, our motivation is to save physicians' time by suggesting a lab test ordering list that facilitates their practice. To this end, we developed a weight-based multi-label classification framework that learns which lab tests to order for the current encounter from historical EMR data. In particular, we propose to learn physician-specific lab test ordering patterns, as different physicians may behave differently on the same patient population. Experimental results on a real data set demonstrate that physician-specific models can outperform the baseline. PMID:23920762
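
    A minimal sketch of the physician-specific multi-label idea is given below; it is not the authors' exact weighting scheme, and the feature matrix, label matrix, and physician identifiers are hypothetical inputs assumed to be prepared elsewhere.

        # Sketch of physician-specific multi-label lab-test suggestion (not the authors'
        # exact weight-based framework). X holds encounter features, Y is a binary matrix
        # of which lab tests were ordered, and one model is fit per physician. Assumes each
        # physician's subset contains both ordered and not-ordered cases for every test.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier

        def fit_physician_models(X: np.ndarray, Y: np.ndarray, physician_ids: np.ndarray) -> dict:
            """Return one multi-label classifier per physician id."""
            models = {}
            for pid in np.unique(physician_ids):
                mask = physician_ids == pid
                clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
                clf.fit(X[mask], Y[mask])
                models[pid] = clf
            return models

        def suggest_tests(models: dict, pid, x_new: np.ndarray, top_k: int = 5) -> np.ndarray:
            """Rank lab tests for one encounter by predicted probability; return top-k indices."""
            scores = models[pid].predict_proba(x_new.reshape(1, -1))[0]
            return np.argsort(scores)[::-1][:top_k]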

  8. Spatial orienting in complex audiovisual environments.

    PubMed

    Nardo, Davide; Santangelo, Valerio; Macaluso, Emiliano

    2014-04-01

    Previous studies on crossmodal spatial orienting typically used simple and stereotyped stimuli in the absence of any meaningful context. This study combined computational models, behavioural measures and functional magnetic resonance imaging to investigate audiovisual spatial interactions in naturalistic settings. We created short videos portraying everyday life situations that included a lateralised visual event and a co-occurring sound, either on the same or on the opposite side of space. Subjects viewed the videos with or without eye-movements allowed (overt or covert orienting). For each video, visual and auditory saliency maps were used to index the strength of stimulus-driven signals, and eye-movements were used as a measure of the efficacy of the audiovisual events for spatial orienting. Results showed that visual salience modulated activity in higher-order visual areas, whereas auditory salience modulated activity in the superior temporal cortex. Auditory salience modulated activity also in the posterior parietal cortex, but only when audiovisual stimuli occurred on the same side of space (multisensory spatial congruence). Orienting efficacy affected activity in the visual cortex, within the same regions modulated by visual salience. These patterns of activation were comparable in overt and covert orienting conditions. Our results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus-driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality. PMID:23616340

  9. Promoting higher order thinking skills using inquiry-based learning

    NASA Astrophysics Data System (ADS)

    Madhuri, G. V.; S. S. N Kantamreddi, V.; Goteti, L. N. S. Prakash

    2012-05-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in chemistry. Laboratory exercises are designed based on Bloom's taxonomy and a just-in-time facilitation approach is used. A pre-laboratory discussion outlining the theory of the experiment and its relevance is carried out to enable the students to analyse real-life problems. The performance of the students is assessed based on their ability to perform the experiment, design new experiments and correlate practical utility of the course module with real life. The novelty of the present approach lies in the fact that the learning outcomes of the existing experiments are achieved through establishing a relationship with real-world problems.

  10. Second-Order Conditioning of Human Causal Learning

    ERIC Educational Resources Information Center

    Jara, Elvia; Vila, Javier; Maldonado, Antonio

    2006-01-01

    This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…

  11. Multiple-Try Feedback and Higher-Order Learning Outcomes

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Koul, Ravinder

    2005-01-01

    Although feedback is an important component of computer-based instruction (CBI), the effects of feedback on higher-order learning outcomes are not well understood. Several meta-analyses provide two rules of thumb: any feedback is better than no feedback and feedback with more information is better than feedback with less information. …

  12. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    PubMed

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. PMID:25269620

  13. [Second-order retrospective revaluation in human contingency learning].

    PubMed

    Numata, Keitaro; Shimazaki, Tsuneo

    2009-04-01

    We demonstrated second-order retrospective revaluation with three cues (T1, T2, and C) and an outcome in human contingency learning. The experimental task, a PC-controlled video game in which participants observed the relations between firing missiles and the destruction of a tank, consisted of three training phases and two rating phases. Groups C+ and C- received the same first two training phases, CT+ (cues C and T paired with an outcome) and T1T2+, followed by C+ or C- training, respectively. In the rating phases, judgments of the predictive value of T2 for the outcome were clearly raised by C+ training (second-order unovershadowing) and lowered by C- training (second-order backward blocking). Groups RC+ and RC-, for which the order of the first two training phases was interchanged, likewise showed second-order unovershadowing and second-order backward blocking. These results, namely the robustness of second-order retrospective revaluation against the order of the first training phases, can be explained by the extended comparator hypothesis and the probabilistic contrast model, but not by traditional associative learning models. PMID:19489431

  14. Predicting perceptual learning from higher-order cortical processing.

    PubMed

    Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan

    2016-01-01

    Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning regardless of whether the behavioral improvement is location specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change of the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making. PMID:26391126

  15. Machine learning using a higher order correlation network

    SciTech Connect

    Lee, Y.C.; Doolen, G.; Chen, H.H.; Sun, G.Z.; Maxwell, T.; Lee, H.Y.

    1986-01-01

    A high-order correlation tensor formalism for neural networks is described. The model can simulate autoassociative, heteroassociative, and multiassociative memory. For the autoassociative model, simulation results show a drastic increase in memory capacity and speed over standard Hopfield-like correlation matrix methods. The possibility of using multiassociative memory for a learning universal inference network is also discussed. 9 refs., 5 figs.
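
    As a rough illustration of the formalism described, the sketch below generalizes the Hopfield outer-product rule to a second-order (rank-3 tensor) correlation memory; it is a toy model under assumed +/-1 pattern coding, not the authors' implementation.

        # Sketch of a second-order correlation (tensor) associative memory: stores +/-1
        # patterns with a rank-3 outer-product rule and recalls with a quadratic field.
        import numpy as np

        def store(patterns: np.ndarray) -> np.ndarray:
            """patterns: (P, N) matrix of +/-1 vectors. Returns an N x N x N correlation tensor."""
            return np.einsum("pi,pj,pk->ijk", patterns, patterns, patterns)

        def recall(T: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
            """Iteratively update the probe using the field h_i = sum_jk T_ijk s_j s_k."""
            s = probe.copy()
            for _ in range(steps):
                h = np.einsum("ijk,j,k->i", T, s, s)
                s = np.where(h >= 0, 1, -1)
            return s

        # Example: store a few random patterns and recall one from a noisy probe.
        rng = np.random.default_rng(0)
        pats = rng.choice([-1, 1], size=(5, 32))
        T = store(pats)
        noisy = pats[0] * rng.choice([1, 1, 1, -1], size=32)  # flip roughly 25% of bits
        print(np.mean(recall(T, noisy) == pats[0]))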

  16. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  17. Towards Postmodernist Television: INA's Audiovisual Magazine Programmes.

    ERIC Educational Resources Information Center

    Boyd-Bowman, Susan

    Over the last 10 years, French television's Institute of Audiovisual Communication (INA) has shifted from modernist to post-modernist practice in broadcasting in a series of innovative audiovisual magazine programs about communication, and in a series of longer "compilation" documentaries. The first of INA's audiovisual magazines, "Hieroglyphes,"…

  18. Audio-Visual Aids: Historians in Blunderland.

    ERIC Educational Resources Information Center

    Decarie, Graeme

    1988-01-01

    A history professor relates his experiences producing and using audio-visual material and warns teachers not to rely on audio-visual aids for classroom presentations. Includes examples of popular audio-visual aids on Canada that communicate unintended, inaccurate, or unclear ideas. Urges teachers to exercise caution in the selection and use of…

  19. Perceived synchrony for realistic and dynamic audiovisual events

    PubMed Central

    Eg, Ragnhild; Behne, Dawn M.

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli. PMID:26082738

  20. Audio-Visual Materials Catalog.

    ERIC Educational Resources Information Center

    Anderson (M.D.) Hospital and Tumor Inst., Houston, TX.

    This catalog lists 27 audiovisual programs produced by the Department of Medical Communications of the University of Texas M. D. Anderson Hospital and Tumor Institute for public distribution. Video tapes, 16 mm. motion pictures and slide/audio series are presented dealing mostly with cancer and related subjects. The programs are intended for…

  1. Audio-Visual Teaching Machines.

    ERIC Educational Resources Information Center

    Dorsett, Loyd G.

    An audiovisual teaching machine (AVTM) presents programed audio and visual material simultaneously to a student and accepts his response. If his response is correct, the machine proceeds with the lesson; if it is incorrect, the machine so indicates and permits another choice (linear) or automatically presents supplementary material (branching).…

  2. Audio-Visual Resource Guide.

    ERIC Educational Resources Information Center

    Abrams, Nick, Ed.

    The National Council of Churches has assembled this extensive audiovisual guide for the benefit of schools, churches and community organizations. The guide is categorized into 14 distinct conceptual areas ranging from "God and the Church" to science, the arts, race relations, and national/international critical issues. Though assembled under the…

  3. Audiovisual Media for Computer Education.

    ERIC Educational Resources Information Center

    Van Der Aa, H. J., Ed.

    The result of an international survey, this catalog lists over 450 films dealing with computing methods and automation and is intended for those who wish to use audiovisual displays as a means of instruction of computer education. The catalog gives the film's title, running time, and producer and tells whether the film is color or black-and-white,…

  4. A Basic Reference Shelf on Audio-Visual Instruction. A Series One Paper from ERIC at Stanford.

    ERIC Educational Resources Information Center

    Dale, Edgar; Trzebiatowski, Gregory

    Topics in this annotated bibliography on audiovisual instruction include the history of instructional technology, teacher-training, equipment operation, administration of media programs, production of instructional materials, language laboratories, instructional television, programed instruction, communication theory, learning theory, and…

  5. Learn locally, think globally. Exemplar variability supports higher-order generalization and word learning.

    PubMed

    Perry, Lynn K; Samuelson, Larissa K; Malloy, Lisa M; Schiffer, Ryan N

    2010-12-01

    Research suggests that variability of exemplars supports successful object categorization; however, the scope of variability's support at the level of higher-order generalization remains unexplored. Using a longitudinal study, we examined the role of exemplar variability in first- and second-order generalization in the context of nominal-category learning at an early age. Sixteen 18-month-old children were taught 12 categories. Half of the children were taught with sets of highly similar exemplars; the other half were taught with sets of dissimilar, variable exemplars. Participants' learning and generalization of trained labels and their development of more general word-learning biases were tested. All children were found to have learned labels for trained exemplars, but children trained with variable exemplars generalized to novel exemplars of these categories, developed a discriminating word-learning bias generalizing labels of novel solid objects by shape and labels of nonsolid objects by material, and accelerated in vocabulary acquisition. These findings demonstrate that object variability leads to better abstraction of individual and global category organization, which increases learning outside the laboratory. PMID:21106892

  6. A Distance Learning Model for Teaching Higher Order Thinking

    ERIC Educational Resources Information Center

    Notar, Charles E.; Wilson, Janell D.; Montgomery, Mary K.

    2005-01-01

    A teaching model for distance learning (DL) requires a system (a technology) and process (a way of linking resources) that makes distance learning no different than learning in the traditional classroom. The process must support a design that provides for learning, ensures maximum transfer, and is student-centered. The process must provide a…

  7. Assessment of Cognitive Load in Multimedia Learning with Dual-Task Methodology: Auditory Load and Modality Effects

    ERIC Educational Resources Information Center

    Brunken, Roland; Plass, Jan L.; Leutner, Detlev

    2004-01-01

    Using cognitive load theory and cognitive theory of multimedia learning as a framework, we conducted two within-subject experiments with 10 participants each in order to investigate (1) if the audiovisual presentation of verbal and pictorial learning materials would lead to a higher demand on phonological cognitive capacities than the visual-only…

  8. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm

    ERIC Educational Resources Information Center

    Uys, P. G.

    2009-01-01

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  9. Order of Presentation Effects in Learning Color Categories

    ERIC Educational Resources Information Center

    Sandhofer, Catherine M.; Doumas, Leonidas A. A.

    2008-01-01

    Two studies, an experimental category learning task and a computational simulation, examined how sequencing training instances to maximize comparison and memory affects category learning. In Study 1, 2-year-old children learned color categories with three training conditions that varied in how categories were distributed throughout training and…

  10. Improved Computer-Aided Instruction by the Use of Interfaced Random-Access Audio-Visual Equipment. Report on Research Project No. P/24/1.

    ERIC Educational Resources Information Center

    Bryce, C. F. A.; Stewart, A. M.

    A brief review of the characteristics of computer assisted instruction and the attributes of audiovisual media introduces this report on a project designed to improve the effectiveness of computer assisted learning through the incorporation of audiovisual materials. A discussion of the implications of research findings on the design and layout of…

  11. Encouraging Higher-Order Thinking in General Chemistry by Scaffolding Student Learning Using Marzano's Taxonomy

    ERIC Educational Resources Information Center

    Toledo, Santiago; Dubas, Justin M.

    2016-01-01

    An emphasis on higher-order thinking within the curriculum has been a subject of interest in the chemical and STEM literature due to its ability to promote meaningful, transferable learning in students. The systematic use of learning taxonomies could be a practical way to scaffold student learning in order to achieve this goal. This work proposes…

  12. Solar Energy Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC. Office of Policy Development and Research.

    This directory presents an annotated bibliography of non-print information resources dealing with solar energy. The document is divided by type of audio-visual medium, including: (1) Films, (2) Slides and Filmstrips, and (3) Videotapes. A fourth section provides addresses and telephone numbers of audiovisual aids sources, and lists the page…

  13. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influence audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a standardized numerical scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results showed a significant difference between the investigated conditions, but not for all samples. Comparing conditions (a) and (b), there was a significant improvement in comfort assessment when visual information was added in only three of the seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of the sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. PMID:25863510

  14. Audio-Visual Aids in Universities

    ERIC Educational Resources Information Center

    Douglas, Jackie

    1970-01-01

    A report on the proceedings and ideas expressed at a one day seminar on "Audio-Visual Equipment--Its Uses and Applications for Teaching and Research in Universities." The seminar was organized by England's National Committee for Audio-Visual Aids in Education in conjunction with the British Universities Film Council. (LS)

  15. In Focus: Alcohol and Alcoholism Audiovisual Guide.

    ERIC Educational Resources Information Center

    National Clearinghouse for Alcohol Information (DHHS), Rockville, MD.

    This guide reviews audiovisual materials currently available on alcohol abuse and alcoholism. An alphabetical index of audiovisual materials is followed by synopses of the indexed materials. Information about the intended audience, price, rental fee, and distributor is included. This guide also provides a list of publications related to media…

  16. Catalog of Audiovisual Materials Related to Rehabilitation.

    ERIC Educational Resources Information Center

    Mann, Joe, Ed.; Henderson, Jim, Ed.

    An annotated listing of a variety of audiovisual formats on content related to the social-rehabilitation process is provided. The materials in the listing were selected from a collection of over 200 audiovisual catalogs. The major portion of the materials has not been screened. The materials are classified alphabetically by the following subject…

  17. Learning in Order To Teach in Chicxulub Puerto, Yucatan, Mexico.

    ERIC Educational Resources Information Center

    Wilber, Cynthia J.

    2000-01-01

    Describes a community-based computer education program for the young people (and adults) of Chicxulub Puerto, a small fishing village in Yucatan, Mexico. Notes the children learn Maya, Spanish, and English in the context of learning computer and telecommunication skills. Concludes that access to the Internet has made a profound difference in a…

  18. Audio-visual gender recognition

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Xu, Xun; Huang, Thomas S.

    2007-11-01

    Combining different modalities for pattern recognition is a very promising field. Humans routinely fuse information from different modalities to recognize objects and perform inference. Audio-visual gender recognition is one of the most common tasks in human social communication: humans can identify gender by facial appearance, by speech, and also by body gait. Human gender recognition is thus inherently a multi-modal data acquisition and processing procedure. However, computational multimodal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition and to explore the improvement gained by combining modalities.
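
    One common way to combine the modalities is decision-level (late) fusion; the sketch below shows this under the assumption that face and speech feature vectors have been extracted elsewhere, with the fusion weight alpha as a hypothetical tuning parameter rather than the authors' method.

        # Minimal sketch of decision-level (late) fusion for audio-visual gender recognition.
        import numpy as np
        from sklearn.svm import SVC

        def train_modality(X: np.ndarray, y: np.ndarray) -> SVC:
            """Fit one probabilistic classifier per modality on its own feature space."""
            return SVC(probability=True).fit(X, y)

        def fused_prediction(face_clf: SVC, audio_clf: SVC,
                             face_feat: np.ndarray, audio_feat: np.ndarray,
                             alpha: float = 0.5) -> int:
            """Weighted sum of per-modality posteriors; returns the predicted class index."""
            p_face = face_clf.predict_proba(face_feat.reshape(1, -1))[0]
            p_audio = audio_clf.predict_proba(audio_feat.reshape(1, -1))[0]
            return int(np.argmax(alpha * p_face + (1 - alpha) * p_audio))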

  19. Multi-strategy learning of search control for partial-order planning

    SciTech Connect

    Estlin, T.A.; Mooney, R.J.

    1996-12-31

    Most research in planning and learning has involved linear, state-based planners. This paper presents SCOPE, a system for learning search-control rules that improve the performance of a partial-order planner. SCOPE integrates explanation-based and inductive learning techniques to acquire control rules for a partial-order planner. Learned rules are in the form of selection heuristics that help the planner choose between competing plan refinements. Specifically, SCOPE learns domain-specific control rules for a version of the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains.

  20. The Order of Learning: Essays on the Contemporary University.

    ERIC Educational Resources Information Center

    Shils, Edward

    The 14 essays in this book, written from 1938 through 1995, examine the modern research university, focusing on the relationship of these institutions to government, academic freedom, and the responsibilities of the academic profession. The book contends that the university has been deflected from its essential commitment to teaching, learning,…

  1. Conceptual Similarity Promotes Generalization of Higher Order Fear Learning

    ERIC Educational Resources Information Center

    Dunsmoor, Joseph E.; White, Allison J.; LaBar, Kevin S.

    2011-01-01

    We tested the hypothesis that conceptual similarity promotes generalization of conditioned fear. Using a sensory preconditioning procedure, three groups of subjects learned an association between two cues that were conceptually similar, unrelated, or mismatched. Next, one of the cues was paired with a shock. The other cue was then reintroduced to…

  2. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    ERIC Educational Resources Information Center

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  3. Audiovisual Materials and Techniques for Teaching Foreign Languages: Recent Trends and Activities.

    ERIC Educational Resources Information Center

    Parks, Carolyn

    Recent experimentation with audio-visual (A-V) materials has provided insight into the language learning process. Researchers and teachers alike have recognized the importance of using A-V materials to achieve goals related to meaningful and relevant communication, retention and recall of language items, non-verbal aspects of communication, and…

  4. Audiovisual Resources for Teaching Instructional Technology; an Annotated List of Materials.

    ERIC Educational Resources Information Center

    Ely, Donald P., Ed.; Beilby, Albert, Ed.

    The audiovisual resources listed in this catalog cover 10 instructional-technology topics: administration; facilities; instructional design; learning and communication; media equipment; media production; media utilization; research; instructional techniques; and society, education, and technology. Any entry falling into more than one category is…

  5. Nutrition Education Materials and Audiovisuals for Grades 7 through 12. Special Reference Briefs Series.

    ERIC Educational Resources Information Center

    Evans, Shirley King, Comp.

    This annotated bibliography lists nutrition education materials, audiovisuals, and resources for classroom use. Items listed cover topics such as general nutrition, food preparation, food science, and dietary management. Each item is listed in one or more of the following categories: (1) curriculum/lesson plans; (2) learning activities; (3)…

  6. No rapid audiovisual recalibration in adults on the autism spectrum.

    PubMed

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  7. No rapid audiovisual recalibration in adults on the autism spectrum

    PubMed Central

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  8. An Audio-Visual Approach to Training

    ERIC Educational Resources Information Center

    Hearnshaw, Trevor

    1977-01-01

    Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)

  9. A checklist for planning and designing audiovisual facilities in health sciences libraries.

    PubMed Central

    Holland, G J; Bischoff, F A; Foxman, D S

    1984-01-01

    Developed by an MLA/HeSCA (Health Sciences Communications Association) joint committee, this checklist is intended to serve as a conceptual framework for planning a new or renovated audiovisual facility in a health sciences library. Emphasis is placed on the philosophical and organizational decisions that must be made about an audiovisual facility before the technical or spatial decisions can be wisely made. Specific standards for facilities or equipment are not included. The first section focuses on health sciences library settings. Ideas presented in the remaining sections could apply to academic learning resource center environments as well. A bibliography relating to all aspects of audiovisual facilities planning and design is included with references to specific sections of the checklist. PMID:6208957

  10. U.S. Government Films, 1971 Supplement; A Catalog of Audiovisual Materials for Rent and Sale by the National Audiovisual Center.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.

    The first edition of the National Audiovisual Center sales catalog (LI 003875) is updated by this supplement. Changes in price and order number, as well as deletions from the 1969 edition, are noted in this 1971 version. Purchase and rental information for the sound films and silent filmstrips is provided. The broad subject categories are:…

  11. Perception of Dynamic and Static Audiovisual Sequences in 3- and 4-Month-Old Infants

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2008-01-01

    This study investigated perception of audiovisual sequences in 3- and 4-month-old infants. Infants were habituated to sequences consisting of moving/sounding or looming/sounding objects and then tested for their ability to detect changes in the order of the objects, sounds, or both. Results showed that 3-month-olds perceived the order of 3-element…

  12. The Current Status of Federal Audiovisual Policy and How These Policies Affect the National Audiovisual Center.

    ERIC Educational Resources Information Center

    Flood, R. Kevin

    The National Audiovisual Center was established in 1968 to provide a single organizational unit that serves as a central information point on completed audiovisual materials and a central sales point for the distribution of media that were produced by or for federal agencies. This speech describes the services the center can provide users of…

  13. Patient Education in the Doctor's Office: A Trial of Audiovisual Cassettes

    PubMed Central

    Bryant, William H.

    1980-01-01

    Audiovisual tapes for patient education are now available in Canada. This paper summarizes the utilization of 12 tapes in an urban solo family practice over one year. Evaluation of this learning experience by both the physician and the patient showed positive results, in some cases affecting the outcome of the patient's condition. This patient education aid is intended to provide information only and is not subject to learning analysis.

  14. Strategic Learning in Youth with Traumatic Brain Injury: Evidence for Stall in Higher-Order Cognition

    ERIC Educational Resources Information Center

    Gamino, Jacquelyn F.; Chapman, Sandra B.; Cook, Lori G.

    2009-01-01

    Little is known about strategic learning ability in preteens and adolescents with traumatic brain injury (TBI). Strategic learning is the ability to combine and synthesize details to form abstracted gist-based meanings, a higher-order cognitive skill associated with frontal lobe functions and higher classroom performance. Summarization tasks were…

  15. Beyond Course Availability: An Investigation into Order and Concurrency Effects of Undergraduate Programming Courses on Learning.

    ERIC Educational Resources Information Center

    Urbaczewski, Andrew; Urbaczewski, Lise

    The objective of this study was to find the answers to two primary research questions: "Do students learn programming languages better when they are offered in a particular order, such as 4th generation languages before 3rd generation languages?"; and "Do students learn programming languages better when they are taken in separate semesters as…

  16. No Solid Empirical Evidence for the SOLID (Serial Order Learning Impairment) Hypothesis of Dyslexia

    ERIC Educational Resources Information Center

    Staels, Eva; Van den Broeck, Wim

    2015-01-01

    This article reports on 2 studies that attempted to replicate the findings of a study by Szmalec, Loncke, Page, and Duyck (2011) on Hebb repetition learning in dyslexic individuals, from which these authors concluded that dyslexics suffer from a deficit in long-term learning of serial order information. In 2 experiments, 1 on adolescents (N = 59)…

  17. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  18. Multilabel image classification via high-order label correlation driven active learning.

    PubMed

    Zhang, Bang; Wang, Yang; Chen, Fang

    2014-03-01

    Supervised machine learning techniques have been applied to multilabel image classification problems with tremendous success. Despite disparate learning mechanisms, their performance relies heavily on the quality of training images. However, acquiring training images requires significant effort from human annotators, which hinders the application of supervised learning techniques to large-scale problems. In this paper, we propose a high-order label correlation driven active learning (HoAL) approach that allows the iterative learning algorithm itself to select the informative example-label pairs from which it learns, so as to obtain an accurate classifier with less annotation effort. The proposed HoAL addresses four crucial issues: 1) unlike binary cases, the selection granularity for multilabel active learning needs to be refined from the example to the example-label pair; 2) different labels are seldom independent, and label correlations provide critical information for efficient learning; 3) in addition to pair-wise label correlations, high-order label correlations are also informative for multilabel active learning; and 4) since the number of label combinations increases exponentially with the number of labels, an efficient mining method is required to discover informative label correlations. The proposed approach is tested on public data sets, and the empirical results demonstrate its effectiveness. PMID:24723538
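
    The pair-level selection idea can be illustrated with a much-simplified sketch (not the authors' exact HoAL algorithm): example-label pairs are scored by prediction uncertainty plus how strongly the label co-occurs with labels already annotated for that example, and the highest-scoring pairs are sent to the annotator. All inputs are assumed to be precomputed.

        # Simplified sketch of correlation-driven multi-label active learning.
        import numpy as np

        def select_pairs(proba: np.ndarray, known: np.ndarray, label_corr: np.ndarray,
                         budget: int = 10):
            """
            proba:      (n_examples, n_labels) predicted label probabilities
            known:      (n_examples, n_labels) 1 where the label is already annotated, else 0
            label_corr: (n_labels, n_labels) pairwise label co-occurrence from the labeled pool
            Returns up to `budget` (example, label) index pairs to send to the annotator.
            """
            uncertainty = 1.0 - np.abs(proba - 0.5) * 2.0   # peaks at p = 0.5
            corr_bonus = known @ label_corr                  # support from already-known labels
            score = np.where(known == 1, -np.inf, uncertainty + corr_bonus)
            flat = np.argsort(score, axis=None)[::-1][:budget]
            return [np.unravel_index(i, score.shape) for i in flat]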

  19. Positive Emotion Facilitates Audiovisual Binding

    PubMed Central

    Kitamura, Miho S.; Watanabe, Katsumi; Kitagawa, Norimichi

    2016-01-01

    It has been shown that positive emotions can facilitate integrative and associative information processing in cognitive functions. The present study examined whether emotions in observers can also enhance perceptual integrative processes. We tested 125 participants in total for revealing the effects of emotional states and traits in observers on the multisensory binding between auditory and visual signals. Participants in Experiment 1 observed two identical visual disks moving toward each other, coinciding, and moving away, presented with a brief sound. We found that for participants with lower depressive tendency, induced happy moods increased the width of the temporal binding window of the sound-induced bounce percept in the stream/bounce display, while no effect was found for the participants with higher depressive tendency. In contrast, no effect of mood was observed for a simple audiovisual simultaneity discrimination task in Experiment 2. These results provide the first empirical evidence of a dependency of multisensory binding upon emotional states and traits, revealing that positive emotions can facilitate the multisensory binding processes at a perceptual level. PMID:26834585

  20. Audiovisual integration facilitates unconscious visual scene processing.

    PubMed

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration. PMID:26076179

  1. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing, and machine vision. Its ultimate goal is to allow human-computer communication by voice, taking into account the visual information contained in the audio-visual speech signal. This paper presents an automatic command recognition system that uses audio-visual information; the system is intended to control the da Vinci laparoscopic robot. The audio signal is parametrized with Mel Frequency Cepstral Coefficients, and the visual speech information is extracted from features based on the points that define the mouth's outer contour according to the MPEG-4 standard.
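
    A minimal front-end sketch in this spirit is shown below, assuming librosa is available for MFCC extraction and that the mouth-contour points have already been tracked; the simple per-utterance averaging and concatenation are illustrative choices, not the authors' pipeline.

        # Sketch of a simple audio-visual command front end: averaged MFCCs for the audio
        # stream concatenated with averaged outer-lip point coordinates for the visual stream.
        import numpy as np
        import librosa

        def audio_features(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
            signal, sr = librosa.load(wav_path, sr=None)
            mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
            return mfcc.mean(axis=1)  # one averaged MFCC vector per utterance

        def visual_features(mouth_points: np.ndarray) -> np.ndarray:
            """mouth_points: (n_frames, n_points, 2) outer-lip coordinates (MPEG-4 style)."""
            return mouth_points.reshape(mouth_points.shape[0], -1).mean(axis=0)

        def fused_features(wav_path: str, mouth_points: np.ndarray) -> np.ndarray:
            return np.concatenate([audio_features(wav_path), visual_features(mouth_points)])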

  2. Practitioners' Views on Teaching With Audio-Visual Aids.

    ERIC Educational Resources Information Center

    Potter, Earl L., Comp.

    A guide for teaching with audiovisual aids, based on the in-class experiences of 30 faculty members from Memphis State University and Shelby State Community College, is presented. The faculty members represented 20 instructional areas and the range of audiovisual usage included in-class use of traditional audiovisual materials and techniques, the…

  3. Govt. Pubs: U.S. Government Produced Audiovisual Materials.

    ERIC Educational Resources Information Center

    Korman, Richard

    1981-01-01

    Describes the availability of United States government-produced audiovisual materials and discusses two audiovisual clearinghouses--the National Audiovisual Center (NAC) and the National Library of Medicine (NLM). Finding aids made available by NAC, NLM, and other government agencies are mentioned. NAC and the U.S. Government Printing Office…

  4. Audiovisual Speech Synchrony Measure: Application to Biometrics

    NASA Astrophysics Data System (ADS)

    Bredin, Hervé; Chollet, Gérard

    2007-12-01

    Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. It overviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between audio and visual speech. Finally, the use of synchrony measure for biometric identity verification based on talking faces is experimented on the BANCA database.
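
    The paper surveys several correspondence measures; as one simple illustration (an assumption-laden sketch, not the specific measure proposed there), the audio and visual streams can be compared by the peak normalized cross-correlation between frame-wise audio energy and a lip-opening signal sampled at the same rate.

        # Minimal sketch of a simple audio-visual correspondence measure (inputs precomputed
        # and assumed to be the same length and frame rate).
        import numpy as np

        def synchrony_score(audio_energy: np.ndarray, lip_opening: np.ndarray,
                            max_lag: int = 10) -> tuple:
            a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
            v = (lip_opening - lip_opening.mean()) / (lip_opening.std() + 1e-9)
            n = min(len(a), len(v))
            a, v = a[:n], v[:n]
            best_corr, best_lag = -np.inf, 0
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    c = float(np.mean(a[lag:] * v[:n - lag]))
                else:
                    c = float(np.mean(a[:n + lag] * v[-lag:]))
                if c > best_corr:
                    best_corr, best_lag = c, lag
            # A high correlation at a small |lag| suggests the audio and the face belong together.
            return best_corr, best_lag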

  5. Learning Partnership: Students and Faculty Learning Together to Facilitate Reflection and Higher Order Thinking in a Blended Course

    ERIC Educational Resources Information Center

    McDonald, Paige L.; Straker, Howard O.; Schlumpf, Karen S.; Plack, Margaret M.

    2014-01-01

    This article discusses a learning partnership among faculty and students to influence reflective practice in a blended course. Faculty redesigned a traditional face-to-face (FTF) introductory physician assistant course into a blended course to promote increased reflection and higher order thinking. Early student reflective writing suggested a need…

  6. Dissociating Verbal and Nonverbal Audiovisual Object Processing

    ERIC Educational Resources Information Center

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…

  7. A Selection of Audiovisual Materials on Disabilities.

    ERIC Educational Resources Information Center

    Mayo, Kathleen; Rider, Sheila

    Disabled persons, family members, organizations, and libraries are often looking for materials to help inform, educate, or challenge them regarding the issues surrounding disabilities. This directory of audiovisual materials available from the State Library of Florida includes materials that present ideas and personal experiences covering a range…

  8. Audiovisual Instruction in Pediatric Pharmacy Practice.

    ERIC Educational Resources Information Center

    Mutchie, Kelly D.; And Others

    1981-01-01

    A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)

  9. Active Methodology in the Audiovisual Communication Degree

    ERIC Educational Resources Information Center

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa

    2010-01-01

    The paper describes the adaptation methods of the active methodologies of the new European higher education area in the new Audiovisual Communication degree under the perspective of subjects related to the area of the interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…

  10. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  11. Longevity and Depreciation of Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  12. The Status of Audiovisual Materials in Networking.

    ERIC Educational Resources Information Center

    Coty, Patricia Ann

    1983-01-01

    The role of networks in correcting inadequate bibliographic control for audiovisual materials is discussed, citing efforts of Project Media Base, National Information Center for Educational Media, Consortium of University Film Centers, National Library of Medicine, National Agricultural Library, National Film Board of Canada, and bibliographic…

  13. Reduced audiovisual recalibration in the elderly

    PubMed Central

    Chan, Yu Man; Pianta, Michael J.; McKendrick, Allison M.

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration for correctly combining audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22–32 years old) and 15 older (64–74 years old) healthy adults using a method of constant stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our findings demonstrate that audiovisual synchrony perception adapts less with advancing age. PMID:25221508
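
    The adaptation effect in this record is defined operationally as a shift in the mean of a fitted psychometric function for synchrony judgments. A sketch of that analysis step on hypothetical data (a Gaussian-shaped "proportion judged synchronous" curve over stimulus onset asynchrony, fitted with SciPy; the study's actual fitting procedure may differ) is:

        import numpy as np
        from scipy.optimize import curve_fit

        def synchrony_curve(soa, mu, sigma, amp):
            """Proportion of 'synchronous' responses as a Gaussian window over SOA (ms)."""
            return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

        # Hypothetical per-observer data: SOAs (negative = sound leads) and proportions
        soas = np.array([-300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
        p_sync_baseline = np.array([0.10, 0.35, 0.75, 0.95, 0.90, 0.60, 0.30, 0.10])
        p_sync_adapted  = np.array([0.05, 0.20, 0.55, 0.85, 0.95, 0.80, 0.50, 0.25])

        (mu0, sd0, _), _ = curve_fit(synchrony_curve, soas, p_sync_baseline, p0=[0, 150, 1])
        (mu1, sd1, _), _ = curve_fit(synchrony_curve, soas, p_sync_adapted,  p0=[0, 150, 1])

        print(f"adaptation effect (shift of curve mean): {mu1 - mu0:.1f} ms")
        print(f"window widths: {sd0:.0f} ms vs {sd1:.0f} ms")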

  14. A Survey of British Research in Audio-Visual Aids, Supplement No. 2, 1974. (Including Cumulative Index 1945-1974).

    ERIC Educational Resources Information Center

    Rodwell, Susie, Comp.

    The second supplement to the new (1972) edition of the Survey of Research in Audiovisual Aids carried out in Great Britain covers the year 1974. Ten separate sections cover the areas of projected media, non-projected media, sound media, radio, moving pictures, television, teaching machines and programmed learning, computer-assisted instruction,…

  15. A Framework for Efficient Structured Max-Margin Learning of High-Order MRF Models.

    PubMed

    Komodakis, Nikos; Xiang, Bo; Paragios, Nikos

    2015-07-01

    We present a very general algorithm for structured prediction learning that is able to efficiently handle discrete MRFs/CRFs (including both pairwise and higher-order models) so long as they can admit a decomposition into tractable subproblems. At its core, it relies on a dual decomposition principle that has been recently employed in the task of MRF optimization. By properly combining such an approach with a max-margin learning method, the proposed framework manages to reduce the training of a complex high-order MRF to the parallel training of a series of simple slave MRFs that are much easier to handle. This leads to a very efficient and general learning scheme that relies on solid mathematical principles. We thoroughly analyze its theoretical properties, and also show that it can yield learning algorithms of increasing accuracy since it naturally allows a hierarchy of convex relaxations to be used for loss-augmented MAP-MRF inference within a max-margin learning approach. Furthermore, it can be easily adapted to take advantage of the special structure that may be present in a given class of MRFs. We demonstrate the generality and flexibility of our approach by testing it on a variety of scenarios, including training of pairwise and higher-order MRFs, training by using different types of regularizers and/or different types of dissimilarity loss functions, as well as by learning of appropriate models for a variety of vision tasks (including high-order models for compact pose-invariant shape priors, knowledge-based segmentation, image denoising, stereo matching as well as high-order Potts MRFs). PMID:26352450
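
    The core ingredient named in this abstract is dual decomposition: a hard MRF is split into tractable slave subproblems that share variables, and the slaves' copies of those variables are driven into agreement by subgradient updates on dual variables. The toy sketch below illustrates only that principle, for MAP inference on a three-node binary chain with made-up costs; it is not the paper's max-margin training scheme, which wraps such decompositions inside loss-augmented inference during learning.

        import numpy as np
        import itertools

        # Toy pairwise MRF on a 3-node binary chain: minimize unary plus pairwise costs.
        unary = np.array([[0.0, 1.5], [1.0, 0.2], [2.0, 0.0]])   # unary[i][x_i]
        pair01 = np.array([[0.0, 1.0], [1.0, 0.0]])              # edge (0,1), favors agreement
        pair12 = np.array([[0.0, 1.0], [1.0, 0.0]])              # edge (1,2)

        def solve_edge(u_a, u_b, pw):
            """Exactly minimize one single-edge slave problem by enumeration."""
            return min(itertools.product([0, 1], repeat=2),
                       key=lambda s: u_a[s[0]] + u_b[s[1]] + pw[s[0], s[1]])

        lam = np.zeros(2)   # dual variables attached to the shared node x1
        for it in range(100):
            # Slave A owns edge (0,1), slave B owns edge (1,2); node 1's unary cost is split.
            x0, x1_a = solve_edge(unary[0], 0.5 * unary[1] + lam, pair01)
            x1_b, x2 = solve_edge(0.5 * unary[1] - lam, unary[2], pair12)
            if x1_a == x1_b:                  # slaves agree: a primal labelling is recovered
                break
            g = np.zeros(2)
            g[x1_a] += 1.0                    # subgradient of the dual at the current lambda
            g[x1_b] -= 1.0
            lam += (1.0 / (1 + it)) * g       # projected subgradient ascent on the dual

        print("MAP labelling:", (x0, x1_a, x2))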

  16. Information-Driven Active Audio-Visual Source Localization.

    PubMed

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application. PMID:26327619
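
    The localization machinery described here rests on a particle filter over the source position, updated with direction-only measurements taken from different robot poses. A heavily simplified sketch of that filtering step (2-D world, bearing-only measurements, fixed measurement poses instead of the paper's information-gain action selection; all numbers are assumptions) is:

        import numpy as np

        rng = np.random.default_rng(0)
        true_src = np.array([3.0, 2.0])               # unknown source position (ground truth)
        N = 2000
        particles = rng.uniform(-5, 5, size=(N, 2))   # uniform prior over the room
        weights = np.full(N, 1.0 / N)
        sigma_bearing = np.deg2rad(8.0)               # assumed direction-of-arrival noise

        def observe(robot_pos):
            """Noisy direction-of-arrival measurement of the source from a robot pose."""
            d = true_src - robot_pos
            return np.arctan2(d[1], d[0]) + rng.normal(0.0, sigma_bearing)

        # Measurements from several poses; each one reweights and resamples the particles.
        for robot_pos in [np.array([0.0, 0.0]), np.array([1.0, -1.0]), np.array([-1.0, 1.0])]:
            z = observe(robot_pos)
            d = particles - robot_pos
            predicted = np.arctan2(d[:, 1], d[:, 0])
            err = np.angle(np.exp(1j * (z - predicted)))            # wrapped angular error
            weights *= np.exp(-0.5 * (err / sigma_bearing) ** 2)
            weights /= weights.sum()
            # Systematic resampling plus a little jitter against particle depletion
            u = (rng.random() + np.arange(N)) / N
            idx = np.minimum(np.searchsorted(np.cumsum(weights), u), N - 1)
            particles = particles[idx] + rng.normal(0.0, 0.05, size=(N, 2))
            weights = np.full(N, 1.0 / N)

        print("posterior mean estimate:", particles.mean(axis=0), "true source:", true_src)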

  17. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619

  18. Mental representations of magnitude and order: a dissociation by sensorimotor learning.

    PubMed

    Badets, Arnaud; Boutin, Arnaud; Heuer, Herbert

    2015-05-01

    Numbers and spatially directed actions share cognitive representations. This assertion is derived from studies that have demonstrated that the processing of small- and large-magnitude numbers facilitates motor behaviors that are directed to the left and right, respectively. However, little is known about the role of sensorimotor learning for such number-action associations. In this study, we show that sensorimotor learning in a serial reaction-time task can modify the associations between number magnitudes and spatially directed movements. Experiments 1 and 3 revealed that this effect is present only for the learned sequence and does not transfer to a novel unpracticed sequence. Experiments 2 and 4 showed that the modification of stimulus-action associations by sensorimotor learning does not occur for other sets of ordered stimuli such as letters of the alphabet. These results strongly suggest that numbers and actions share a common magnitude representation that differs from the common order representation shared by letters and spatially directed actions. Only the magnitude representation, but not the order representation, can be modified episodically by sensorimotor learning. PMID:25813898

  19. Serial-order learning impairment and hypersensitivity-to-interference in dyscalculia.

    PubMed

    De Visscher, Alice; Szmalec, Arnaud; Van Der Linden, Lize; Noël, Marie-Pascale

    2015-11-01

    In the context of heterogeneity, the different profiles of dyscalculia are still hypothetical. This study aims to link features of mathematical difficulties to certain potential etiologies. First, we wanted to test the hypothesis of a serial-order learning deficit in adults with dyscalculia. For this purpose we used a Hebb repetition learning task. Second, we wanted to explore a recent hypothesis according to which hypersensitivity to interference hampers the storage of arithmetic facts and leads to a particular profile of dyscalculia. We therefore used interfering and non-interfering repeated sequences in the Hebb paradigm. A final test was used to assess the memory trace of the non-interfering sequence and the capacity to manipulate it. In line with our predictions, we observed, first, that people with dyscalculia who show good conceptual knowledge in mathematics but impaired arithmetic fluency suffer from increased sensitivity to interference compared to controls. Second, people with dyscalculia who show a deficit in a global mathematical test suffer from a serial-order learning deficit characterized by slow learning and quick degradation of the memory trace of the repeated sequence. A serial-order learning impairment could be one explanation for a basic numerical deficit, since such learning is necessary for acquisition of the number-word sequence. Among the different profiles of dyscalculia, this study provides new evidence and refinement for two particular profiles. PMID:26218516
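
    The Hebb repetition paradigm used here embeds a covertly repeating sequence among random filler sequences, and the interference manipulation reuses the same items in a different order. A sketch of how such stimulus lists could be constructed (the item pool, list length, and repetition spacing below are hypothetical, not the study's actual materials) is:

        import random

        random.seed(1)
        ITEMS = list("BCDFGHJKLMNP")          # hypothetical pool of recallable items

        def make_hebb_lists(n_trials=24, seq_len=7, repeat_every=3):
            """Filler trials are random sequences; every `repeat_every`-th trial repeats
            the same 'Hebb' sequence, whose gradual learning indexes serial-order memory."""
            hebb = random.sample(ITEMS, seq_len)
            trials = []
            for t in range(n_trials):
                if (t + 1) % repeat_every == 0:
                    trials.append(("hebb", list(hebb)))
                else:
                    trials.append(("filler", random.sample(ITEMS, seq_len)))
            return trials

        def make_interfering(hebb_seq):
            """An 'interfering' repeated sequence reuses the same items in a different
            order, so its serial-order trace overlaps with the original Hebb sequence."""
            other = list(hebb_seq)
            while other == list(hebb_seq):
                random.shuffle(other)
            return other

        lists = make_hebb_lists()
        hebb_seq = next(seq for kind, seq in lists if kind == "hebb")
        print("Hebb sequence:        ", hebb_seq)
        print("Interfering sequence: ", make_interfering(hebb_seq))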

  20. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution

    PubMed Central

    Phillips, Steven; Wilson, William H.

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other “structurally related” cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture. PMID:27505411

  1. Higher-Order Thinking Development through Adaptive Problem-Based Learning

    ERIC Educational Resources Information Center

    Raiyn, Jamal; Tilchin, Oleg

    2015-01-01

    In this paper we propose an approach to organizing Adaptive Problem-Based Learning (PBL) leading to the development of Higher-Order Thinking (HOT) skills and collaborative skills in students. Adaptability of PBL is expressed by changes in fixed instructor assessments caused by the dynamics of developing HOT skills needed for problem solving,…

  2. Sensitivity to Word Order Cues by Normal and Language/Learning Disabled Adults.

    ERIC Educational Resources Information Center

    Plante, Elena; Gomez, Rebecca; Gerken, LouAnn

    2002-01-01

    Sixteen adults with language/learning disabilities (L/LD) and 16 controls participated in a study testing sensitivity to word order cues that signaled grammatical versus ungrammatical word strings belonging to an artificial grammar. Participants with L/LD performed significantly below the comparison group, suggesting that this skill is problematic…

  3. "What Do I Do Here?": Higher Order Learning Effects of Enhancing Task Instructions

    ERIC Educational Resources Information Center

    Chamberlain, Susanna; Zuvela, Danni

    2014-01-01

    This paper reports the findings of a one-year research project focused on a series of structured interventions aimed at enhancing task instruction to develop students' understanding of higher assessment practices, and encouraging higher order learning. It describes the nature and iterations of the interventions, made into a large-enrolment online…

  4. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution.

    PubMed

    Phillips, Steven; Wilson, William H

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other "structurally related" cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture. PMID:27505411

  5. Developing Student-Centered Learning Model to Improve High Order Mathematical Thinking Ability

    ERIC Educational Resources Information Center

    Saragih, Sahat; Napitupulu, Elvis

    2015-01-01

    The purpose of this research was to develop a student-centered learning model aiming to improve the high order mathematical thinking ability of junior high school students, based on Curriculum 2013, in North Sumatera, Indonesia. The specific purpose of this research was to analyze and to formulate the purpose of mathematics lessons in high order…

  6. Learning and Generalization on Asynchrony and Order Tasks at Sound Offset: Implications for Underlying Neural Circuitry

    ERIC Educational Resources Information Center

    Mossbridge, Julia A.; Scissors, Beth N.; Wright, Beverly A.

    2008-01-01

    Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel…

  7. Linking memory and language: Evidence for a serial-order learning impairment in dyslexia.

    PubMed

    Bogaerts, Louisa; Szmalec, Arnaud; Hachmann, Wibke M; Page, Mike P A; Duyck, Wouter

    2015-01-01

    The present study investigated long-term serial-order learning impairments, operationalized as reduced Hebb repetition learning (HRL), in people with dyslexia. In a first multi-session experiment, we investigated both the persistence of a serial-order learning impairment as well as the long-term retention of serial-order representations, both in a group of Dutch-speaking adults with developmental dyslexia and in a matched control group. In a second experiment, we relied on the assumption that HRL mimics naturalistic word-form acquisition and we investigated the lexicalization of novel word-forms acquired through HRL. First, our results demonstrate that adults with dyslexia are fundamentally impaired in the long-term acquisition of serial-order information. Second, dyslexic and control participants show comparable retention of the long-term serial-order representations in memory over a period of 1 month. Third, the data suggest weaker lexicalization of newly acquired word-forms in the dyslexic group. We discuss the integration of these findings into current theoretical views of dyslexia. PMID:26164302

  8. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Li, Junmin

    2016-07-01

    In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth-order (M ≥ 2) distributed multi-agent systems. Every follower agent has a higher-order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are a higher-order nonlinear system and are available only to a portion of the follower agents. With distributed initial state learning, the unified distributed protocols, which combine time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple robotic system are provided to demonstrate the performance of the proposed approach.
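
    The iteration-domain idea behind such schemes is iterative learning control: the same finite-horizon task is repeated, and the control input is refined between repetitions using the previous trial's tracking error. As a far simpler illustration than the paper's adaptive fuzzy multi-agent protocol, a P-type ILC update on a scalar first-order plant (all plant and gain values assumed) is:

        import numpy as np

        # Plant: x_{t+1} = 0.9 x_t + 0.5 u_t, repeated over a finite horizon each trial.
        T, iterations, gamma = 50, 30, 1.2            # horizon, ILC trials, learning gain
        y_ref = np.sin(np.linspace(0, 2 * np.pi, T))  # reference trajectory to track
        u = np.zeros(T)                               # control input, refined across trials

        def run_trial(u):
            x = 0.0
            y = np.zeros(T)
            for t in range(T):
                y[t] = x
                x = 0.9 * x + 0.5 * u[t]
            return y

        for k in range(iterations):
            y = run_trial(u)
            e = y_ref - y
            # P-type ILC update: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1)
            u[:-1] += gamma * e[1:]
            print(f"trial {k:2d}: tracking RMSE = {np.sqrt(np.mean(e**2)):.4f}")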

  9. HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud

    SciTech Connect

    An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.

    1988-01-01

    A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the "building blocks" or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.

  10. Attributes of Quality in Audiovisual Materials for Health Professionals.

    ERIC Educational Resources Information Center

    Suter, Emanuel; Waddell, Wendy H.

    1981-01-01

    Defines attributes of quality in content, instructional design, technical production, and packaging of audiovisual materials used in the education of health professionals. Seven references are listed. (FM)

  11. Dynamic Perceptual Changes in Audiovisual Simultaneity

    PubMed Central

    Kanai, Ryota; Sheth, Bhavin R.; Verstraten, Frans A. J.; Shimojo, Shinsuke

    2007-01-01

    Background The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that both visual and auditory percepts have a modality specific processing delay and their difference determines perceptual temporal offset. Methodology/Principal Findings Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event. Conclusions/Significance This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event. PMID:18060050

  12. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending to a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention. PMID:25341648
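
    "Race model violation" here refers to Miller's race model inequality: if audiovisual responses were generated by independent unisensory races, the audiovisual reaction-time distribution could not exceed the sum of the two unisensory distributions at any time point. A sketch of that test on hypothetical reaction-time samples (not the study's data or exact procedure) is:

        import numpy as np

        rng = np.random.default_rng(2)
        # Hypothetical reaction times (ms) for auditory, visual, and audiovisual targets
        rt_a  = rng.normal(320, 45, 200)
        rt_v  = rng.normal(350, 50, 200)
        rt_av = rng.normal(280, 40, 200)   # faster than either unisensory condition

        def ecdf(samples, t):
            """Empirical cumulative distribution function evaluated at times t."""
            return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

        # Evaluate the race model inequality on a grid of percentile-based time points
        t_grid = np.percentile(np.concatenate([rt_a, rt_v, rt_av]), np.arange(5, 100, 5))
        bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)   # Miller's bound
        violation = np.maximum(ecdf(rt_av, t_grid) - bound, 0.0)

        print("race model violation at each quantile:", np.round(violation, 3))
        print("total violation (area above the bound):", violation.sum())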

  13. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved over time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and their lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. PMID:26740404

  14. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  15. Audio-visual speech perception: a developmental ERP investigation.

    PubMed

    Knowland, Victoria C P; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S C

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  16. Order short-term memory is not impaired in dyslexia and does not affect orthographic learning.

    PubMed

    Staels, Eva; Van den Broeck, Wim

    2014-01-01

    This article reports two studies that investigate short-term memory (STM) deficits in dyslexic children and explore the relationship between STM and reading acquisition. In the first experiment, 36 dyslexic children and 61 control children performed an item STM task and a serial order STM task. The results of this experiment show that dyslexic children do not suffer from a specific serial order STM deficit. In addition, the results demonstrate that phonological processing skills are equally closely related to item STM and serial order STM. However, non-verbal intelligence was more strongly involved in serial order STM than in item STM. In the second experiment, the same two STM tasks were administered and reading acquisition was assessed by measuring orthographic learning in a group of 188 children. The results of this study show that orthographic learning is exclusively related to item STM and not to order STM. It is concluded that serial order STM is not the right place to look for a causal explanation of reading disability, nor for differences in word reading acquisition. PMID:25294996

  17. Order short-term memory is not impaired in dyslexia and does not affect orthographic learning

    PubMed Central

    Staels, Eva; Van den Broeck, Wim

    2014-01-01

    This article reports two studies that investigate short-term memory (STM) deficits in dyslexic children and explore the relationship between STM and reading acquisition. In the first experiment, 36 dyslexic children and 61 control children performed an item STM task and a serial order STM task. The results of this experiment show that dyslexic children do not suffer from a specific serial order STM deficit. In addition, the results demonstrate that phonological processing skills are equally closely related to item STM and serial order STM. However, non-verbal intelligence was more strongly involved in serial order STM than in item STM. In the second experiment, the same two STM tasks were administered and reading acquisition was assessed by measuring orthographic learning in a group of 188 children. The results of this study show that orthographic learning is exclusively related to item STM and not to order STM. It is concluded that serial order STM is not the right place to look for a causal explanation of reading disability, nor for differences in word reading acquisition. PMID:25294996

  18. Ordering and finding the best of K > 2 supervised learning algorithms.

    PubMed

    Yildiz, Olcay Taner; Alpaydin, Ethem

    2006-03-01

    Given a data set and a number of supervised learning algorithms, we would like to find the algorithm with the smallest expected error. Existing pairwise tests allow a comparison of two algorithms only; range tests and ANOVA check whether multiple algorithms have the same expected error and cannot be used for finding the smallest. We propose a methodology, the MultiTest algorithm, whereby we order supervised learning algorithms taking into account 1) the result of pairwise statistical tests on expected error (what the data tells us), and 2) our prior preferences, e.g., due to complexity. We define the problem in graph-theoretic terms and propose an algorithm to find the "best" learning algorithm in terms of these two criteria, or in the more general case, order learning algorithms in terms of their "goodness." Simulation results using five classification algorithms on 30 data sets indicate the utility of the method. Our proposed method can be generalized to regression and other loss functions by using a suitable pairwise test. PMID:16526425
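
    The described methodology orders algorithms by combining pairwise significance tests on expected error with prior preferences (for example, simpler models first). A rough sketch of that idea using paired t-tests on cross-validation folds (hypothetical error rates; not the exact MultiTest graph construction) is:

        import numpy as np
        from scipy.stats import ttest_rel

        rng = np.random.default_rng(3)
        # Hypothetical per-fold error rates of three algorithms on 10 CV folds
        errors = {
            "nearest_mean": 0.18 + 0.02 * rng.standard_normal(10),   # simplest model
            "linear_svm":   0.15 + 0.02 * rng.standard_normal(10),
            "mlp":          0.14 + 0.02 * rng.standard_normal(10),   # most complex model
        }
        prior_order = ["nearest_mean", "linear_svm", "mlp"]          # preference: simpler first

        def better(a, b, alpha=0.05):
            """a is preferred over b if its error is significantly lower, or if the two are
            statistically indistinguishable and a comes earlier in the prior ordering."""
            t, p = ttest_rel(errors[a], errors[b])
            if p < alpha:
                return np.mean(errors[a]) < np.mean(errors[b])
            return prior_order.index(a) < prior_order.index(b)

        # Order algorithms by number of pairwise 'wins' under the rule above
        wins = {a: sum(better(a, b) for b in errors if b != a) for a in errors}
        ranking = sorted(errors, key=lambda a: -wins[a])
        print("ranking (best first):", ranking)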

  19. Disruption of Broca's Area Alters Higher-order Chunking Processing during Perceptual Sequence Learning.

    PubMed

    Alamia, Andrea; Solopchuk, Oleg; D'Ausilio, Alessandro; Van Bever, Violette; Fadiga, Luciano; Olivier, Etienne; Zénon, Alexandre

    2016-03-01

    Because Broca's area is known to be involved in many cognitive functions, including language, music, and action processing, several attempts have been made to propose a unifying theory of its role that emphasizes a possible contribution to syntactic processing. Recently, we have postulated that Broca's area might be involved in higher-order chunk processing during implicit learning of a motor sequence. Chunking is an information-processing mechanism that consists of grouping consecutive items in a sequence and is likely to be involved in all of the aforementioned cognitive processes. Demonstrating a contribution of Broca's area to chunking during the learning of a nonmotor sequence that does not involve language could shed new light on its function. To address this issue, we used offline MRI-guided TMS in healthy volunteers to disrupt the activity of either the posterior part of Broca's area (left Brodmann's area [BA] 44) or a control site just before participants learned a perceptual sequence structured in distinct hierarchical levels. We found that disruption of the left BA 44 increased the processing time of stimuli representing the boundaries of higher-order chunks and modified the chunking strategy. The current results highlight the possible role of the left BA 44 in building up effector-independent representations of higher-order events in structured sequences. This might clarify the contribution of Broca's area in processing hierarchical structures, a key mechanism in many cognitive functions, such as language and composite actions. PMID:26765778

  20. A second-order learning algorithm for multilayer networks based on block Hessian matrix.

    PubMed

    Wang, Yi Jen; Lin, Chin Teng

    1998-12-01

    This article proposes a new second-order learning algorithm for training multilayer perceptron (MLP) networks. The proposed algorithm is a revised Newton's method. A forward-backward propagation scheme is first proposed for network computation of the Hessian matrix, H, of the output error function of the MLP. A block Hessian matrix, H(b), is then defined to approximate and simplify H. Several lemmas and theorems are proved to uncover the important properties of H and H(b) and to verify that H(b) is a good approximation of H, preserving its major properties. The theoretical analysis leads to the development of an efficient way of computing the inverse of H(b) recursively. In the proposed second-order learning algorithm, the least squares estimation technique is adopted to further lessen local minimum problems. The proposed algorithm overcomes not only the drawbacks of the standard backpropagation algorithm (i.e., slow asymptotic convergence, poor control over convergence accuracy, local minimum problems, and high sensitivity to the learning constant), but also the shortcomings of the normal Newton's method applied to the MLP, such as the lack of a network implementation of H, the poor representability of the diagonal terms of H, the heavy computational load of inverting H, and the requirement of a good initial estimate of the solution (weights). Several example problems are used to demonstrate the efficiency of the proposed learning algorithm. Extensive performance (convergence rate and accuracy) comparisons of the proposed algorithm with other learning schemes (including the standard backpropagation algorithm) are also made. PMID:12662732
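
    To picture what a second-order (Newton-type) learning step looks like in the simplest possible setting, the sketch below applies a damped Newton update with an explicit, exact Hessian to logistic regression on synthetic data; the article's contribution (a block Hessian and its recursive inverse for full MLPs) is considerably more involved.

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.standard_normal((200, 3))
        w_true = np.array([1.5, -2.0, 0.5])
        y = (1 / (1 + np.exp(-X @ w_true)) > rng.random(200)).astype(float)

        w = np.zeros(3)
        for step in range(8):
            p = 1 / (1 + np.exp(-X @ w))                  # predicted probabilities
            loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
            print(f"Newton step {step}: log-loss = {loss:.4f}")
            grad = X.T @ (p - y)                          # gradient of the log-loss
            H = X.T @ (X * (p * (1 - p))[:, None])        # exact Hessian of the log-loss
            w -= np.linalg.solve(H + 1e-6 * np.eye(3), grad)   # damped Newton step

        print("estimated weights:", np.round(w, 2), "true:", w_true)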

  1. Children Using Audiovisual Media for Communication: A New Language?

    ERIC Educational Resources Information Center

    Weiss, Michael

    1982-01-01

    Gives an overview of the Schools Council Communication and Social Skills Project at Brighton Polytechnic in which children ages 9-17 have developed and used audiovisual media such as films, tape-slides, or television programs in the classroom. The effects of audiovisual language on education are briefly discussed. (JJD)

  2. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    ERIC Educational Resources Information Center

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  3. The Practical Audio-Visual Handbook for Teachers.

    ERIC Educational Resources Information Center

    Scuorzo, Herbert E.

    The use of audio/visual media as an aid to instruction is a common practice in today's classroom. Most teachers, however, have little or no formal training in this field and rarely a knowledgeable coordinator to help them. "The Practical Audio-Visual Handbook for Teachers" discusses the types and mechanics of many of these media forms and proposes…

  4. Uses and Abuses of Audio-Visual Aids in Reading.

    ERIC Educational Resources Information Center

    Eggers, Edwin H.

    Audiovisual aids are properly used in reading when they "turn students on," and they are abused when they fail to do so or when they actually "turn students off." General guidelines one could use in sorting usable from unusable aids are (1) Has the teacher saved time by using an audiovisual aid? (2) Is the aid appropriate to the sophistication…

  5. Audiovisual Media and the Disabled. AV in Action 1.

    ERIC Educational Resources Information Center

    Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).

    Designed to provide information on public library services to the handicapped, this pamphlet contains case studies from three different countries on various aspects of the provision of audiovisual services to the disabled. The contents include: (1) "The Value of Audiovisual Materials in a Children's Hospital in Sweden" (Lis Byberg); (2) "Danish…

  6. Audiovisual Integration in High Functioning Adults with Autism

    ERIC Educational Resources Information Center

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  7. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamic facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  8. Neural Correlates of Audiovisual Integration of Semantic Category Information

    ERIC Educational Resources Information Center

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period about 150-220 ms post-stimulus. However, it is unclear to which process is this audiovisual interaction related: to processing of acoustical features or to classification of stimuli? To investigate this question, event-related potentials were recorded…

  9. The Audio-Visual Marketing Handbook for Independent Schools.

    ERIC Educational Resources Information Center

    Griffith, Tom

    This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…

  10. Directory of Head Start Audiovisual Professional Training Materials.

    ERIC Educational Resources Information Center

    Wilds, Thomas, Comp.

    The directory contains over 265 annotated listings of audiovisual professional training materials related to the education and care of preschool handicapped children. Noted in the introduction are sources of the contents, such as lists of audiovisual materials disseminated by a hearing/speech center, and instructions for use of the directory.…

  11. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    ERIC Educational Resources Information Center

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  12. Infant Perception of Audio-Visual Speech Synchrony

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2010-01-01

    Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…

  13. Trigger Videos on the Web: Impact of Audiovisual Design

    ERIC Educational Resources Information Center

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  14. Audiovisual Processing in Children with and without Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Mongillo, Elizabeth A.; Irwin, Julia R.; Whalen, D. H.; Klaiman, Cheryl; Carter, Alice S.; Schultz, Robert T.

    2008-01-01

    Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces…

  15. A Technical Communication Course in Graphics and Audiovisuals.

    ERIC Educational Resources Information Center

    Carson, David L.; Harkins, Craig

    1980-01-01

    Describes the development of a course in graphics and audiovisuals as they are applied in technical communication. Includes brief discussions of the course design, general course structure, course objectives, course content, student evaluation, and student reaction. Indicates that the course includes information on theory, graphics, audiovisuals,…

  16. The Audio-Visual Equipment Director. Eighteenth Edition.

    ERIC Educational Resources Information Center

    Herickes, Sally, Ed.

    A cooperative undertaking of the audiovisual industry, this equipment directory for 1972-73 is designed to offer everyone who uses media a convenient, single source of information on all audiovisual equipment on the market today. Photographs, specifications, and prices of more than 1,500 models of equipment are provided, and over 520 manufacturers…

  17. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamic facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  18. Knowledge Generated by Audiovisual Narrative Action Research Loops

    ERIC Educational Resources Information Center

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of getting to…

  19. Simulated and Virtual Science Laboratory Experiments: Improving Critical Thinking and Higher-Order Learning Skills

    NASA Astrophysics Data System (ADS)

    Simon, Nicole A.

    Virtual laboratory experiments using interactive computer simulations are not being employed as viable alternatives to the laboratory science curriculum at sufficient rates within higher education. Rote traditional lab experiments are currently the norm and do not address inquiry, Critical Thinking, and cognition throughout the laboratory experience, nor do they link it with educational technologies (Pyatt & Sims, 2007, 2011; Trundle & Bell, 2010). A causal-comparative quantitative study was conducted with 150 learners enrolled at a two-year community college to determine the effects of simulation laboratory experiments on Higher-Order Learning, Critical Thinking Skills, and Cognitive Load. The treatment population used simulated experiments, while the non-treatment sections performed traditional expository experiments. A comparison was made using the Revised Two-Factor Study Process survey, the Motivated Strategies for Learning Questionnaire, and the Scientific Attitude Inventory survey, analyzed with a repeated-measures ANOVA comparing treatment and non-treatment groups. A main effect of simulated laboratory experiments was found for both Higher-Order Learning [F(1, 148) = 30.32, p = 0.00, η² = 0.12] and Critical Thinking Skills [F(1, 148) = 14.64, p = 0.00, η² = 0.17], such that simulations showed greater increases than traditional experiments. Post-lab treatment group self-reports indicated increased marginal means (+4.86) in Higher-Order Learning and Critical Thinking Skills, compared to the non-treatment group (+4.71). Simulations also improved scientific skills and mastery of basic scientific subject matter. It is recommended that additional research recognize that learners' Critical Thinking Skills change due to different instructional methodologies that occur throughout a semester.

  20. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed

    Lerner, Itamar; Armstrong, Blair C; Frost, Ram

    2014-11-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding as a core and universal principle of the reading process. Here we argue that such an approach does not capture cross-linguistic differences in transposed-letter effects, nor does it explain them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that, in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order are also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521
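
    A concrete way to picture the kind of simulation described is a small connectionist classifier trained on position-specific letter codes and then probed with transposed-letter inputs; the relative prevalence of anagram competitors in the training lexicon is what the paper varies across linguistic environments. The toy below (a tiny made-up lexicon and a single-layer softmax network, nothing like the paper's full model) only shows that probing machinery:

        import numpy as np

        ALPHABET = "abcdefghijklmnopqrstuvwxyz"
        lexicon = ["stop", "spot", "calm", "file", "life", "rock"]   # includes anagram pairs

        def encode(word):
            """Slot-based (position-specific) one-hot coding of a 4-letter word."""
            v = np.zeros(4 * 26)
            for pos, ch in enumerate(word):
                v[pos * 26 + ALPHABET.index(ch)] = 1.0
            return v

        X = np.stack([encode(w) for w in lexicon])
        Y = np.eye(len(lexicon))

        # Single-layer softmax network trained with plain gradient descent
        W = np.zeros((4 * 26, len(lexicon)))
        for epoch in range(2000):
            logits = X @ W
            probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
            W -= 0.1 * X.T @ (probs - Y)

        def top_word(probe):
            return lexicon[int(np.argmax(encode(probe) @ W))]

        # Probe the trained network with transposed-letter inputs
        print("cmal ->", top_word("cmal"))   # transposition of 'calm' (no anagram competitor)
        print("sotp ->", top_word("sotp"))   # transposition of 'stop' ('spot' is a competitor)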

  1. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed Central

    Lerner, Itamar; Armstrong, Blair C.; Frost, Ram

    2014-01-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding as a core and universal principle of the reading process. Here we argue that such an approach does not capture cross-linguistic differences in transposed-letter effects, nor does it explain them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that, in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order are also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521

  2. Word sense disambiguation via high order of learning in complex networks

    NASA Astrophysics Data System (ADS)

    Silva, Thiago C.; Amancio, Diego R.

    2012-06-01

    Complex networks have been employed to model many real systems and serve as a modeling tool in a myriad of applications. In this paper, we apply the framework of complex networks to the problem of supervised classification in the word sense disambiguation task, which consists of deriving a function from supervised (labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. On the other hand, the human (animal) brain performs both low- and high-level orders of learning, and it readily identifies patterns according to the semantic meaning of the input data. In this paper, we apply a hybrid technique that encompasses both types of learning to the field of word sense disambiguation and show that the high-level order of learning can substantially improve the accuracy of the model. This evidence demonstrates that the internal structures formed by the words present patterns that, in general, cannot be correctly unveiled by traditional techniques alone. Finally, we exhibit the behavior of the model for different weights of the low- and high-level classifiers by plotting decision boundaries. This study helps one to better understand the effectiveness of the model.

  3. A Step Into Service Learning Is A Step Into Higher Order Thinking

    NASA Astrophysics Data System (ADS)

    O'Connell, S.

    2010-12-01

    Students, especially beginning college students, often consider science courses to be about remembering and regurgitating rather than creativity, and of little social relevance. As scientists we know this isn’t true. How do we counteract this sentiment among students? Incorporating service learning, probably better called project learning, into our classes is one way. As one “non-science” student who was taking two science service-learning courses said, “If it’s a service-learning course you know it’s going to be interesting.” Service learning means that some learning takes place in the community. The community component increases understanding of the material being studied, promotes higher order thinking, and provides a benefit for someone else. Students have confirmed that the experience shows them that their knowledge is needed by the community and, for some, reinforces their commitment to continued civic engagement. I will give three examples, with the community activity growing in importance in the course and in the community: a single exercise, a small project, and a project that is the focus of the class. All of the activities use reflective writing to increase analysis and synthesis. A single exercise could be participating in an event related to the course, for example a zoning board meeting, or a trip to a wastewater treatment plant. Preparation for the trip should include reading. After the event, students synthesize and analyze the activity through a series of questions emphasizing reflection. A two- to four-class assignment might expand the single-day activity, or have students familiarize themselves with a course topic, interview a person, prepare a podcast of the interview, and reflect upon the experience. The most comprehensive approach is one where the class focuses on a community project, e.g., Tim Ku’s geochemistry course (this session). Another class that lends itself easily to a comprehensive service learning approach is Geographic Information

  4. The Use of Audio-Visual Aids in Teaching: A Study in the Saudi Girls Colleges.

    ERIC Educational Resources Information Center

    Al-Sharhan, Jamal A.

    1993-01-01

    A survey of faculty in girls colleges in Riyadh, Saudi Arabia, investigated teaching experience, academic rank, importance of audiovisual aids, teacher training, availability of audiovisual centers, and reasons for not using audiovisual aids. Proposes changes to increase use of audiovisual aids: more training courses, more teacher release time,…

  5. Audiovisual time perception is spatially specific.

    PubMed

    Heron, James; Roach, Neil W; Hanson, James V M; McGraw, Paul V; Whitaker, David

    2012-05-01

    Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in visual and auditory sensory pathways. PMID:22367399

  6. Categorization of Natural Dynamic Audiovisual Scenes

    PubMed Central

    Rummukainen, Olli; Radun, Jenni; Virtanen, Toni; Pulkki, Ville

    2014-01-01

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database. PMID:24788808

  7. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918

  8. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group. PMID:25324091

  9. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan

    PubMed Central

    De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T.

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918

  10. Interpolation-based reduced-order modelling for steady transonic flows via manifold learning

    NASA Astrophysics Data System (ADS)

    Franz, T.; Zimmermann, R.; Görtz, S.; Karcher, N.

    2014-03-01

    This paper presents a parametric reduced-order model (ROM) based on manifold learning (ML) for use in steady transonic aerodynamic applications. The main objective of this work is to derive an efficient ROM that exploits the low-dimensional nonlinear solution manifold to ensure an improved treatment of the nonlinearities involved in varying the inflow conditions to obtain an accurate prediction of shocks. The reduced-order representation of the data is derived using the Isomap ML method, which is applied to a set of sampled computational fluid dynamics (CFD) data. In order to develop a ROM that has the ability to predict approximate CFD solutions at untried parameter combinations, Isomap is coupled with an interpolation method to capture the variations in parameters like the angle of attack or the Mach number. Furthermore, an approximate local inverse mapping from the reduced-order representation to the full CFD solution space is introduced. The proposed ROM, called Isomap+I, is applied to the two-dimensional NACA 64A010 airfoil and to the 3D LANN wing. The results are compared to those obtained by proper orthogonal decomposition plus interpolation (POD+I) and to the full-order CFD model.
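
    A minimal sketch of the Isomap-plus-interpolation idea, under stated assumptions: scikit-learn's Isomap provides the manifold embedding, SciPy's RBFInterpolator maps flow parameters to reduced coordinates, and the approximate inverse mapping shown here is a simple distance-weighted blend of nearby snapshots rather than the paper's local inverse map. Function names and parameter values are illustrative.

        import numpy as np
        from sklearn.manifold import Isomap
        from scipy.interpolate import RBFInterpolator

        def build_rom(params, snapshots, n_components=2, n_neighbors=5):
            # params: (n_samples, n_params), e.g. angle of attack and Mach number
            # snapshots: (n_samples, n_dof) sampled CFD solutions
            iso = Isomap(n_neighbors=n_neighbors, n_components=n_components)
            z = iso.fit_transform(snapshots)               # reduced-order representation
            to_latent = RBFInterpolator(params, z)         # parameter -> latent coordinates
            return z, to_latent

        def predict_solution(z, to_latent, snapshots, new_param, k=3):
            z_new = to_latent(np.atleast_2d(new_param))[0]
            # crude local inverse map: distance-weighted blend of the k nearest snapshots
            d = np.linalg.norm(z - z_new, axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / (d[idx] + 1e-12)
            return (w[:, None] * snapshots[idx]).sum(axis=0) / w.sum()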

  11. The impact of constructivist teaching strategies on the acquisition of higher order cognition and learning

    NASA Astrophysics Data System (ADS)

    Merrill, Alison Saricks

    The purpose of this quasi-experimental quantitative mixed design study was to compare the effectiveness of brain-based teaching strategies versus a traditional lecture format in the acquisition of higher order cognition as determined by test scores. A second purpose was to elicit student feedback about the two teaching approaches. The design was a 2 x 2 x 2 factorial design study with repeated measures on the last factor. The independent variables were type of student, teaching method, and a within-group change over time. Dependent variables were a between-group comparison of pre-test, post-test gain scores and a within- and between-group comparison of course examination scores. A convenience sample of students enrolled in medical-surgical nursing was used. One group (n=36) was made up of traditional students and the other group (n=36) consisted of second-degree students. Four learning units were included in this study. Pre- and post-tests were given on the first two units. Course examination scores from all four units were compared. In one cohort two of the units were taught via lecture format and two using constructivist activities. These methods were reversed for the other cohort. The conceptual basis for this study derives from neuroscience and cognitive psychology. Learning is defined as the growth of new dendrites. Cognitive psychologists view learning as a constructive activity in which new knowledge is built on an internal foundation of existing knowledge. Constructivist teaching strategies are designed to stimulate the brain's natural learning ability. There was a statistically significant difference based on type of teaching strategy (t = -2.078, df = 270, p = .039, d = .25), with higher mean scores on the examinations covering brain-based learning units. There was no statistically significant difference based on type of student. Qualitative data collection was conducted in an on-line forum at the end of the semester. Students had overall positive responses about the

  12. Seeing the unseen: Second-order correlation learning in 7- to 11-month-olds.

    PubMed

    Yermolayeva, Yevdokiya; Rakison, David H

    2016-07-01

    We present four experiments with the object-examining procedure that investigated 7-, 9-, and 11-month-olds' ability to associate two object features that were never presented simultaneously. In each experiment, infants were familiarized with a number of 3D objects that incorporated different correlations among the features of those objects and the body of the objects (e.g., Part A and Body 1, and Part B and Body 1). Infants were then tested with objects with a novel body that either possessed both of the parts that were independently correlated with one body during familiarization (e.g., Parts A and B on Body 3) or that were attached to two different bodies during familiarization. The experiments demonstrate that infants as young as 7 months of age are capable of this kind of second-order correlation learning. Furthermore, by at least 11 months of age infants develop a representation for the object that incorporates both of the features they experienced during training. We suggest that the ability to learn second-order correlations represents a powerful but as yet largely unexplored process for generalization in the first years of life. PMID:27038738

  13. Lexical Learning in Bilingual Adults: The Relative Importance of Short-Term Memory for Serial Order and Phonological Knowledge

    ERIC Educational Resources Information Center

    Majerus, Steve; Poncelet, Martine; Van der Linden, Martial; Weekes, Brendan S.

    2008-01-01

    Studies of monolingual speakers have shown a strong association between lexical learning and short-term memory (STM) capacity, especially STM for serial order information. At the same time, studies of bilingual speakers suggest that phonological knowledge is the main factor that drives lexical learning. This study tested these two hypotheses…

  14. Learning to Order Words: A Connectionist Model of Heavy NP Shift and Accessibility Effects in Japanese and English

    ERIC Educational Resources Information Center

    Chang, Franklin

    2009-01-01

    Languages differ from one another and must therefore be learned. Processing biases in word order can also differ across languages. For example, heavy noun phrases tend to be shifted to late sentence positions in English, but to early positions in Japanese. Although these language differences suggest a role for learning, most accounts of these…

  15. An Investigation of Four Hypotheses Concerning the Order by Which 4-Year-Old Children Learn the Alphabet Letters

    ERIC Educational Resources Information Center

    Justice, Laura M.; Pence, Khara; Bowles, Ryan B.; Wiggins, Alice

    2006-01-01

    This study tested four complementary hypotheses to characterize intrinsic and extrinsic influences on the order with which preschool children learn the names of individual alphabet letters. The hypotheses included: (a) "own-name advantage," which states that children learn those letters earlier which occur in their own names, (b) the "letter-order…

  16. Automatic audiovisual integration in speech perception.

    PubMed

    Gentilucci, Maurizio; Cattaneo, Luigi

    2005-11-01

    Two experiments aimed to determine whether features of both the visual and acoustical inputs are always merged into the perceived representation of speech and whether this audiovisual integration is based on either cross-modal binding functions or on imitation. In a McGurk paradigm, observers were required to repeat aloud a string of phonemes uttered by an actor (acoustical presentation of phonemic string) whose mouth, in contrast, mimicked pronunciation of a different string (visual presentation). In a control experiment participants read the same printed strings of letters. This condition aimed to analyze the pattern of voice and the lip kinematics controlling for imitation. In the control experiment and in the congruent audiovisual presentation, i.e. when the articulation mouth gestures were congruent with the emission of the string of phones, the voice spectrum and the lip kinematics varied according to the pronounced strings of phonemes. In the McGurk paradigm the participants were unaware of the incongruence between visual and acoustical stimuli. The acoustical analysis of the participants' spoken responses showed three distinct patterns: the fusion of the two stimuli (the McGurk effect), repetition of the acoustically presented string of phonemes, and, less frequently, of the string of phonemes corresponding to the mouth gestures mimicked by the actor. However, the analysis of the latter two responses showed that the formant 2 of the participants' voice spectra always differed from the value recorded in the congruent audiovisual presentation. It approached the value of the formant 2 of the string of phonemes presented in the other modality, which was apparently ignored. The lip kinematics of the participants repeating the string of phonemes acoustically presented were influenced by the observation of the lip movements mimicked by the actor, but only when pronouncing a labial consonant. The data are discussed in favor of the hypothesis that features of both

  17. Teleconferences and Audiovisual Materials in Earth Science Education

    NASA Astrophysics Data System (ADS)

    Cortina, L. M.

    2007-05-01

    Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoacán 04510, Mexico, MEXICO. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. However, in some cases resources may go largely unused, and a number of factors may be cited, such as logistical problems, restricted internet and telecommunication access, misinformation, etc. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. A course by teleconference requires learning, and student and teacher effort, without physical contact, but participants have access to multimedia to support the presentation. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of natural phenomena integral to the Earth sciences. Cooperation with international partnerships providing access to new materials, experiences, and field practices will greatly add to our efforts. We will present specific examples of our experiences at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  18. Representation-based user interfaces for the audiovisual library of the year 2000

    NASA Astrophysics Data System (ADS)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of some audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators of access to existing contents, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the documents' contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of the program aiming at developing, for image and sound documents, an experimental counterpart to the library's digitized text reading workstation.
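
    One common form of structural extraction for temporal media is shot-boundary detection. The sketch below is a hedged, generic illustration of that step (not the methods proposed in this paper): it flags cuts where the histogram difference between consecutive frames exceeds a threshold. The frame source, bin count, and threshold are assumptions.

        import numpy as np

        def shot_boundaries(frames, bins=32, threshold=0.4):
            # frames: iterable of HxWx3 uint8 arrays (e.g. decoded video frames)
            cuts, prev_hist = [], None
            for i, frame in enumerate(frames):
                hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
                hist = hist / hist.sum()                       # normalize to a probability distribution
                if prev_hist is not None and 0.5 * np.abs(hist - prev_hist).sum() > threshold:
                    cuts.append(i)                             # large histogram change suggests a cut
                prev_hist = hist
            return cuts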

  19. Audiovisual associations alter the perception of low-level visual motion.

    PubMed

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system, and that early-level visual motion processing has some potential role. PMID:25873869

  20. Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants

    PubMed Central

    Kopp, Franziska; Dietrich, Claudia

    2013-01-01

    Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071

  1. Sight and sound out of synch: Fragmentation and renormalisation of audiovisual integration and subjective timing

    PubMed Central

    Freeman, Elliot D.; Ipser, Alberta; Palmbaha, Austra; Paunoiu, Diana; Brown, Peter; Lambert, Christian; Leff, Alex; Driver, Jon

    2013-01-01

    The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream–Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing

  2. When audiovisual correspondence disturbs visual processing.

    PubMed

    Hong, Sang Wook; Shim, Won Mok

    2016-05-01

    Multisensory integration is known to create a more robust and reliable perceptual representation of one's environment. Specifically, a congruent auditory input can make a visual stimulus more salient, consequently enhancing the visibility and detection of the visual target. However, it remains largely unknown whether a congruent auditory input can also impair visual processing. In the current study, we demonstrate that temporally congruent auditory input disrupts visual processing, consequently slowing down visual target detection. More importantly, this cross-modal inhibition occurs only when the contrast of visual targets is high. When the contrast of visual targets is low, enhancement of visual target detection is observed, consistent with the prediction based on the principle of inverse effectiveness (PIE) in cross-modal integration. The switch of the behavioral effect of audiovisual interaction from benefit to cost further extends the PIE to encompass the suppressive cross-modal interaction. PMID:26884130

  3. Audiovisual signal compression: the 64/P codecs

    NASA Astrophysics Data System (ADS)

    Jayant, Nikil S.

    1996-02-01

    Video codecs operating at integral multiples of 64 kbps are well known in visual communications technology as p × 64 systems (p = 1 to 24). Originally developed as a class of ITU standards, these codecs have served as core technology for videoconferencing, and they have also influenced the MPEG standards for addressable video. Video compression in the above systems is provided by motion compensation followed by discrete cosine transform and quantization of the residual signal. Notwithstanding the promise of higher bit rates in emerging generations of networks and storage devices, there is a continuing need for facile audiovisual communications over voiceband and wireless modems. Consequently, video compression at bit rates lower than 64 kbps is a widely sought capability. In particular, video codecs operating at rates in the neighborhood of 64, 32, 16, and 8 kbps seem to have great practical value, being matched respectively to the transmission capacities of basic rate ISDN (64 kbps) and voiceband modems that represent high- (32 kbps), medium- (16 kbps), and low-end (8 kbps) grades in current modem technology. The purpose of this talk is to describe the state of video technology at these transmission rates, without getting too literal about the specific speeds mentioned above. In other words, we expect codecs designed for non-submultiples of 64 kbps, such as 56 kbps or 19.2 kbps, as well as for submultiples of 64 kbps, depending on varying constraints on modem rate and the transmission rate needed for the voice-coding part of the audiovisual communications link. The MPEG-4 video standards process is a natural platform on which to examine current capabilities in sub-ISDN rate video coding, and we shall draw appropriately from this process in describing video codec performance. Inherent in this summary is a reinforcement of motion compensation and DCT as viable building blocks of video compression systems, although there is a need for improving signal quality
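
    The motion-compensation-plus-DCT core mentioned above can be illustrated in a few lines of Python. This is a hedged sketch of transform coding on a single 8x8 residual block with a uniform scalar quantizer, not a description of any particular ITU codec; the quantization step value is an illustrative assumption.

        import numpy as np
        from scipy.fft import dctn, idctn

        def code_block(residual_block, qstep=16.0):
            # residual_block: 8x8 array of motion-compensated prediction residuals
            coeffs = dctn(residual_block, norm='ortho')        # forward 2-D DCT
            quantized = np.round(coeffs / qstep)               # uniform scalar quantization (the lossy step)
            return idctn(quantized * qstep, norm='ortho')      # decoder-side reconstruction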

  4. Audiovisual Enhancement of Classroom Teaching: A Primer for Law Professors.

    ERIC Educational Resources Information Center

    Johnson, Vincent Robert

    1987-01-01

    A discussion of audiovisual instruction in the law school classroom looks at the strengths, weaknesses, equipment and facilities needs and hints for classroom use of overhead projection, audiotapes and videotapes, and slides. (MSE)

  5. Proper Use of Audio-Visual Aids: Essential for Educators.

    ERIC Educational Resources Information Center

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  6. Audiovisual Materials and Programming for Children: A Long Tradition.

    ERIC Educational Resources Information Center

    Doll, Carol A.

    1992-01-01

    Explores the use of audiovisual materials in children's programming at the Seattle Public Library prior to 1920. Kinds of materials discussed include pictures, reflectoscopes, films, sound recordings, lantern slides, and stereographs. (17 references) (MES)

  7. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  8. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  9. Audio-visual assistance in co-creating transition knowledge

    NASA Astrophysics Data System (ADS)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.

    2013-04-01

    Earth system and climate impact research results point to the tremendous ecological, economic, and societal implications of climate change. Specifically, people will have to adopt lifestyles very different from those they currently strive for in order to mitigate severe changes to our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition relies instead on pioneers who define new role models, on change agents who mainstream the concept of sufficiency, and on narratives that make different futures appealing. In order for the research community to be able to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge is to be co-created by social and natural science and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodology, terminology, and knowledge level of those involved are not the same, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way, with different levels of detail that provide entry points for users with different requirements. Two examples illustrate the advantages and restrictions of the approach.

  10. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
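
    As a hedged illustration of the kind of search procedure the abstract mentions, the sketch below minimizes an assumed total-cost function in which the unit recovery time follows a Wright-style learning curve. The cost structure, parameter names, and values are illustrative assumptions, not the models derived in the article.

        import numpy as np

        def total_cost(Q, demand=1000.0, setup=50.0, hold_rate=0.2,
                       recovery_cost_rate=0.1, unit_time0=1.0, learn=0.85):
            # Wright learning curve: time for the n-th recovered unit = unit_time0 * n**log2(learn)
            n = np.arange(1, int(Q) + 1)
            recovery_time_per_lot = (unit_time0 * n ** np.log2(learn)).sum()
            cycles_per_period = demand / Q
            return (setup * cycles_per_period                  # ordering/setup cost
                    + hold_rate * Q / 2.0                      # average holding cost
                    + recovery_cost_rate * recovery_time_per_lot * cycles_per_period)

        # simple search over candidate lot sizes
        Q_grid = np.arange(10, 501)
        best_Q = Q_grid[np.argmin([total_cost(q) for q in Q_grid])]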

  11. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    Noisy low-resolution (LR) images are always obtained in real applications, but many existing image magnification algorithms cannot produce good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm takes advantage of both regularization-based and learning-based methods. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while simultaneously suppressing the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries that are dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm also provides better visual quality on natural LR images.
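
    A hedged sketch of the first (regularization) step only, using off-the-shelf routines: TV denoising followed by interpolation-based magnification. The dictionary-learning refinement step is omitted, and the weight and scale values are illustrative assumptions, not the paper's settings.

        from skimage import img_as_float, io
        from skimage.restoration import denoise_tv_chambolle
        from skimage.transform import rescale

        def tv_magnify(path, scale=2, tv_weight=0.1):
            lr = img_as_float(io.imread(path, as_gray=True))
            denoised = denoise_tv_chambolle(lr, weight=tv_weight)           # suppress noise with a TV prior
            return rescale(denoised, scale, order=3, anti_aliasing=False)   # cubic upscaling of the denoised image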

  12. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images which have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for applications such as content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. Simply concatenating the features of different views into one long vector is inappropriate, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph to replace pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective functions of HD-MSL and obtain the view combination coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification. PMID:25415948
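
    HD-MSL learns the view weights and label scores jointly via hypergraph-based high-order distances. As a much simpler, hedged sketch of the underlying idea of weighting complementary views, the code below assigns each view a coefficient from its cross-validated accuracy and fuses the per-view class probabilities; the function name and the SVC base learner are assumptions for illustration only.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def multiview_predict(views_train, y_train, views_test):
            # views_train / views_test: lists of (n_samples, n_features_v) arrays, one per view
            probs, weights = [], []
            for X_tr, X_te in zip(views_train, views_test):
                clf = SVC(probability=True).fit(X_tr, y_train)
                weights.append(cross_val_score(SVC(), X_tr, y_train, cv=3).mean())
                probs.append(clf.predict_proba(X_te))
            w = np.asarray(weights) / np.sum(weights)        # normalized per-view combination coefficients
            fused = sum(wi * pi for wi, pi in zip(w, probs)) # weighted fusion of per-view class probabilities
            return fused.argmax(axis=1)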

  13. [Aesthetics of the grotesque and audiovisual production for health education: segregation or empathy? The case of leishmaniasis in Brazil].

    PubMed

    Pimenta, Denise Nacif; Leandro, Anita; Schall, Virgínia Torres

    2007-05-01

    In order to understand audiovisual production on health and disease and the pedagogical effects of health education mediated by educational videos, this article analyzes the audiovisual production on leishmaniasis in Brazil. Fourteen educational videos showed the hegemony of TV aesthetics, particularly a journalistic paradigm with constant use of voice-over, inducing the fixation of meanings. Rather than stimulating critical reflection on the social circumstances of leishmaniasis, the videos' discourse and images promote a banal, non-critical, stigmatized representation of the disease. Individuals with the disease are subjected to visual exposure rather than being involved critically and sensitively as protagonists in prevention and treatment. The article thus presents approaches based on studies of visual and health anthropology, arguing in favor of an innovative approach to the production and utilization of educational videos in health education, mediated through audiovisuals. Health education should respect and engage in dialogue with various cultures, subjectivity, and citizenship, developing an audiovisual aesthetics (in terms of narrative and image) that fosters an educational praxis in the field of collective health. PMID:17486238

  14. Crossmodal and incremental perception of audiovisual cues to emotional speech.

    PubMed

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues to emotion from a speaker's face relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests with video clips of emotional utterances collected via a variant of the well-known Velten method. More specifically, we recorded speakers who displayed positive or negative emotions, which were congruent or incongruent with the (emotional) lexical content of the uttered sentence. In order to test this, we conducted two experiments. The first experiment is a perception experiment in which Czech participants, who do not speak Dutch, rate the perceived emotional state of Dutch speakers in a bimodal (audiovisual) or a unimodal (audio- or vision-only) condition. It was found that incongruent emotional speech leads to significantly more extreme perceived emotion scores than congruent emotional speech, where the difference between congruent and incongruent emotional speech is larger for the negative than for the positive conditions. Interestingly, the largest overall differences between congruent and incongruent emotions were found for the audio-only condition, which suggests that posing an incongruent emotion has a particularly strong effect on the spoken realization of emotions. The second experiment uses a gating paradigm to test the recognition speed for various emotional expressions from a speaker's face. In this experiment participants were presented with the same clips as in experiment I, but this time in a vision-only presentation. The clips were shown in successive segments (gates) of increasing duration. Results show that participants are surprisingly accurate in their recognition of the various emotions, as they already reach high recognition scores in the first gate (after only 160 ms). Interestingly, the recognition scores

  15. A Second-Order Implicit Knowledge: Its Implications for E-Learning

    ERIC Educational Resources Information Center

    Noaparast, Khosrow Bagheri

    2014-01-01

    The dichotomous epistemology of explicit/implicit knowledge has led to two parallel lines of research; one putting the emphasis on explicit knowledge which has been the main road of e-learning, and the other taking implicit knowledge as the core of learning which has shaped a critical line to the current e-learning. It is argued in this article…

  16. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or... audiovisual productions (e.g., short and long versions or foreign-language versions) are prepared, keep...

  17. Our nation's wetlands (video). Audio-Visual

    SciTech Connect

    Not Available

    1990-01-01

    The Department of the Interior is custodian of approximately 500 million acres of federally owned land and has an important role to play in the management of wetlands. To contribute to the President's goal of no net loss of America's remaining wetlands, the Department of the Interior has initiated a 3-point program consisting of wetlands protection, restoration, and research: Wetlands Protection--Reduce wetlands losses on federally owned lands and encourage state and private landholders to practice wetlands conservation; Wetlands Restoration--Increase wetlands gains through the restoration and creation of wetlands on both public and private lands; Wetlands Research--Provide a foundation of scientific knowledge to guide future actions and decisions about wetlands. The audiovisual is a slide/tape-to-video transfer illustrating the various ways Interior bureaus are working to preserve our Nation's wetlands. The tape features an introduction by Secretary Manuel Lujan on the importance of wetlands and recognizing the benefit of such programs as the North American Waterfowl Management Program.

  18. Neural circuits in auditory and audiovisual memory.

    PubMed

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26656069

  19. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b) Publications... published with grant support and, if feasible, on any publication reporting the results of, or describing, a... under subgrants. (2) Audiovisuals produced as research instruments or for documenting experimentation...

  20. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b) Publications... published with grant support and, if feasible, on any publication reporting the results of, or describing, a... under subgrants. (2) Audiovisuals produced as research instruments or for documenting experimentation...

  1. Engineering the path to higher-order thinking in elementary education: A problem-based learning approach for STEM integration

    NASA Astrophysics Data System (ADS)

    Rehmat, Abeera Parvaiz

    As we progress into the 21st century, higher-order thinking skills and achievement in science and math are essential to meet the educational requirements of STEM careers. Educators need to think of innovative ways to engage and prepare students for current and future challenges while cultivating an interest among students in STEM disciplines. An instructional pedagogy that can capture students' attention, support interdisciplinary STEM practices, and foster higher-order thinking skills is problem-based learning. Problem-based learning, embedded in the social constructivist view of teaching and learning (Savery & Duffy, 1995), promotes self-regulated learning that is enhanced through exploration, cooperative social activity, and discourse (Fosnot, 1996). This quasi-experimental mixed methods study was conducted with 98 fourth grade students. The study utilized STEM content assessments, a standardized critical thinking test, a STEM attitude survey, a PBL questionnaire, and field notes from classroom observations to investigate the impact of problem-based learning on students' content knowledge, critical thinking, and their attitude towards STEM. Subsequently, it explored students' experiences of STEM integration in a PBL environment. The quantitative results revealed a significant difference between groups with regard to their content knowledge, critical thinking skills, and STEM attitude. From the qualitative results, three themes emerged: learning approaches, increased interaction, and design and engineering implementation. From the overall data set, students described the PBL environment as highly interactive, prompting them to employ multiple approaches, including design and engineering, to solve the problem.

  2. Website Analysis as a Tool for Task-Based Language Learning and Higher Order Thinking in an EFL Context

    ERIC Educational Resources Information Center

    Roy, Debopriyo

    2014-01-01

    Besides focusing on grammar, writing skills, and web-based language learning, researchers in "CALL" and second language acquisition have also argued for the importance of promoting higher-order thinking skills in ESL (English as Second Language) and EFL (English as Foreign Language) classrooms. There is solid evidence supporting the…

  3. The Impact of Learning Driven Constructs on the Perceived Higher Order Cognitive Skills Improvement: Multimedia vs. Text

    ERIC Educational Resources Information Center

    Bagarukayo, Emily; Weide, Theo; Mbarika, Victor; Kim, Min

    2012-01-01

    The study aims at determining the impact of learning driven constructs on Perceived Higher Order Cognitive Skills (HOCS) improvement when using multimedia and text materials. Perceived HOCS improvement is the attainment of HOCS based on the students' perceptions. The research experiment undertaken using a case study was conducted on 223 students…

  4. Order Effects on Neuropsychological Test Performance of Normal, Learning Disabled and Low Functioning Children: A Cross-Cultural Study.

    ERIC Educational Resources Information Center

    Akande, Adebowale

    2000-01-01

    Investigated the possible priming effects of two neuropsychological tests, the Booklet Category Test (BCT) and the Wisconsin Card Sorting Test (WCST). Obtained counterbalanced order effects on a like-aged sample of 63 South African elementary school students (normally achieving, low-functioning, and learning-disabled). Found a significant effect of set-shifting…

  5. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative, visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory or visual stimuli alone. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvement in response times was observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step towards exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate

  6. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
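
    The prediction step lends itself to a short worked example. The sketch below is a hedged stand-in for the kernel density estimation predictor described above: a Nadaraya-Watson kernel regression over previously observed breathing patterns, scored with the same RMSE metric. The lag, horizon, and bandwidth values are illustrative assumptions, not the study's settings.

        import numpy as np

        def kernel_predict(signal, lag=10, horizon=15, bandwidth=0.1):
            # Predict the sample `horizon` steps beyond each lag-length history window,
            # using a kernel-weighted average over previously seen patterns.
            signal = np.asarray(signal, dtype=float)
            X = np.array([signal[i - lag:i] for i in range(lag, len(signal) - horizon)])
            y = signal[lag + horizon:]
            preds = np.empty(len(X))
            preds[0] = signal[lag - 1]                       # no history yet: hold the last observed value
            for i in range(1, len(X)):
                d2 = ((X[:i] - X[i]) ** 2).sum(axis=1)       # similarity to earlier breathing patterns
                w = np.exp(-d2 / (2.0 * bandwidth ** 2))
                preds[i] = (w @ y[:i]) / (w.sum() + 1e-12)
            rmse = np.sqrt(np.mean((preds - y) ** 2))        # prediction error metric used in the study
            return preds, rmse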

  7. Prefrontal Neuronal Responses during Audiovisual Mnemonic Processing

    PubMed Central

    Hwang, Jaewon

    2015-01-01

    During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. For the nonmatch conditions, a change in the auditory component (vocalization), the visual component (face), or both components was detected. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is not only involved in the perceptual processing of faces and vocalizations but also in their mnemonic processing. PMID:25609614

  8. The Black Record: A Selective Discography of Afro-Americana on Audio Discs Held by the Audio/Visual Department, John M. Olin Library.

    ERIC Educational Resources Information Center

    Dain, Bernice, Comp.; Nevin, David, Comp.

    The present revised and expanded edition of this document is an inclusive cumulation. A few items have been included which are on order as new to the collection or as replacements. This discography is intended to serve primarily as a local user's guide. The call number preceding each entry is based on the Audio-Visual Department's own, unique…

  9. (Dis)ordering Teacher Education: From Problem Students to Problem-based Learning.

    ERIC Educational Resources Information Center

    Gale, Trevor

    2000-01-01

    Examines how teacher educators should respond to the growing body of student teachers with learning disabilities, focusing on one case, outlining the situation in Australian universities, and questioning the utility of current definitions of learning disabilities and difficulties, suggesting that teacher educators must rethink their approach to…

  10. Granularity and the Acquisition of Grammatical Gender: How Order-of-Acquisition Affects What Gets Learned

    ERIC Educational Resources Information Center

    Arnon, Inbal; Ramscar, Michael

    2012-01-01

    Why do adult language learners typically fail to acquire second languages with native proficiency? Does prior linguistic experience influence the size of the "units" adults attend to in learning, and if so, how does this influence what gets learned? Here, we examine these questions in relation to grammatical gender, which adult learners almost…

  11. Ordering Subjects: Actor-Networks and Intellectual Technologies in Lifelong Learning.

    ERIC Educational Resources Information Center

    Edwards, Richard

    2003-01-01

    Argues that discourses of lifelong learning act as intellectual technologies that construct individuals as subjects in a learning society. Discusses three discourses using actor-network theory: (1) economics/human capital (individuals as accumulators of skills for competitiveness); (2) humanistic psychology (individuals seeking fulfilment through…

  12. PBL-GIS in Secondary Geography Education: Does It Result in Higher-Order Learning Outcomes?

    ERIC Educational Resources Information Center

    Liu, Yan; Bui, Elisabeth N.; Chang, Chew-Hung; Lossman, Hans G.

    2010-01-01

    This article presents research on evaluating problem-based learning using GIS technology in a Singapore secondary school. A quasi-experimental research design was carried out to test the PBL pedagogy (PBL-GIS) with an experimental group of students and compare their learning outcomes with those of a control group who were exposed to PBL but not GIS. The…

  13. Lessons learned from implementation of computerized provider order entry in 5 community hospitals: a qualitative study

    PubMed Central

    2013-01-01

    Background Computerized Provider Order Entry (CPOE) can improve patient safety, quality and efficiency, but hospitals face a host of barriers to adopting CPOE, ranging from resistance among physicians to the cost of the systems. In response to the incentives for meaningful use of health information technology and other market forces, hospitals in the United States are increasingly moving toward the adoption of CPOE. The purpose of this study was to characterize the experiences of hospitals that have successfully implemented CPOE. Methods We used a qualitative approach to observe clinical activities and capture the experiences of physicians, nurses, pharmacists and administrators at five community hospitals in Massachusetts (USA) that adopted CPOE in the past few years. We conducted formal, structured observations of care processes in diverse inpatient settings within each of the hospitals and completed in-depth, semi-structured interviews with clinicians and staff by telephone. After transcribing the audiorecorded interviews, we analyzed the content of the transcripts iteratively, guided by principles of the Immersion and Crystallization analytic approach. Our objective was to identify attitudes, behaviors and experiences that would constitute useful lessons for other hospitals embarking on CPOE implementation. Results Analysis of observations and interviews resulted in findings about the CPOE implementation process in five domains: governance, preparation, support, perceptions and consequences. Successful institutions implemented clear organizational decision-making mechanisms that involved clinicians (governance). They anticipated the need for education and training of a wide range of users (preparation). These hospitals deployed ample human resources for live, in-person training and support during implementation. Successful implementation hinged on the ability of clinical leaders to address and manage perceptions and the fear of change. Implementation proceeded

  14. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2001-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen from two alternatives. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasting in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials, and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentations. Identification accuracy for those words produced by two talkers was also assessed. During the pretest, the accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words on the basis of visual information alone. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]
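
    A minimal sketch of the condition-wise scoring such a design implies (hypothetical trial records; not the authors' materials or analysis): proportion correct in the 2AFC translation test for each presentation condition.

    # Minimal sketch (hypothetical data): per-condition accuracy in a 2AFC
    # translation test with V, A, and AV presentation of the target word.
    from collections import defaultdict

    # Each hypothetical trial: (presentation condition, response correct?).
    trials = [
        ("V", True), ("V", True), ("V", False), ("V", True),
        ("A", True), ("A", False), ("A", False), ("A", True),
        ("AV", True), ("AV", True), ("AV", False), ("AV", True),
    ]

    n_correct = defaultdict(int)
    n_total = defaultdict(int)
    for condition, is_correct in trials:
        n_total[condition] += 1
        n_correct[condition] += int(is_correct)

    for condition in ("V", "A", "AV"):
        print(f"{condition}: {n_correct[condition] / n_total[condition]:.2f} proportion correct")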

  15. The Effects of Variation on Learning Word Order Rules by Adults with and without Language-Based Learning Disabilities

    ERIC Educational Resources Information Center

    Grunow, Hope; Spaulding, Tammie J.; Gomez, Rebecca L.; Plante, Elena

    2006-01-01

    Non-adjacent dependencies characterize numerous features of English syntax, including certain verb tense structures and subject-verb agreement. This study utilized an artificial language paradigm to examine the contribution of item variability to the learning of these types of dependencies. Adult subjects with and without language-based learning…

  16. Promoting Higher Order Thinking Skills via IPTEACES e-Learning Framework in the Learning of Information Systems Units

    ERIC Educational Resources Information Center

    Isaias, Pedro; Issa, Tomayess; Pena, Nuno

    2014-01-01

    When developing and working with various types of devices from a supercomputer to an iPod Mini, it is essential to consider the issues of Human Computer Interaction (HCI) and Usability. Developers and designers must incorporate HCI, Usability and user satisfaction in their design plans to ensure that systems are easy to learn, effective,…

  17. Audiovisual non-verbal dynamic faces elicit converging fMRI and ERP responses.

    PubMed

    Brefczynski-Lewis, Julie; Lowitszch, Svenja; Parsons, Michael; Lemieux, Susan; Puce, Aina

    2009-05-01

    In an everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well-studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal, human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum in which AV activation was greater than either modality alone, but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed Common-activation in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for auditory N140 and face-sensitive N170, and late AV maximum and common-activation effects. Based on convergence between fMRI and ERP data, we propose a mechanism where a multisensory stimulus may be signaled or facilitated as early as 60 ms and facilitated in sensory-specific regions by increasing processing speed (at N170) and efficiency (decreasing amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion. PMID:19384602
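
    A minimal sketch of the additive criteria invoked above (hypothetical activation values and a simplified rule set, not the authors' statistical thresholds): classifying a region's AV response relative to the unisensory responses and their sum.

    # Minimal sketch (hypothetical values): label a multisensory response
    # profile from mean activations in AUD, VIS, and AV conditions.
    def classify_av_response(aud, vis, av, tol=0.05):
        if av > aud + vis + tol:
            return "superadditive (AV > AUD + VIS)"
        if av > max(aud, vis) + tol:
            return "AV maximum / underadditive (AV > each modality, AV <= AUD + VIS)"
        if abs(av - aud) <= tol or abs(av - vis) <= tol:
            return "common activation (AV ~ a unisensory response)"
        return "no multisensory enhancement"

    # Hypothetical percent-signal-change values for one region.
    print(classify_av_response(aud=0.40, vis=0.15, av=0.62))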

  18. A model-based comparison of three theories of audiovisual temporal recalibration.

    PubMed

    Yarrow, Kielan; Minaei, Shora; Arnold, Derek H

    2015-12-01

    Observers change their audio-visual timing judgements after exposure to asynchronous audiovisual signals. The mechanism underlying this temporal recalibration is currently debated. Three broad explanations have been suggested. According to the first, the time it takes for sensory signals to propagate through the brain has changed. The second explanation suggests that decisional criteria used to interpret signal timing have changed, but not time perception itself. A final possibility is that a population of neurones collectively encodes relative times, and that exposure to a repeated timing relationship alters the balance of responses in this population. Here, we simplified each of these explanations to its core features in order to produce three corresponding six-parameter models, which generate contrasting patterns of predictions about how simultaneity judgements should vary across four adaptation conditions: no adaptation, synchronous adaptation, and auditory leading/lagging adaptation. We tested model predictions by fitting data from all four conditions simultaneously, in order to assess which model/explanation best described the complete pattern of results. The latency-shift and criterion-change models were better able to explain results for our sample as a whole. The population-code model did, however, account for improved performance following adaptation to a synchronous adapter, and best described the results of a subset of observers who reported the fewest instances of synchrony. PMID:26545105
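
    A minimal sketch of the general fitting approach (hypothetical data and a deliberately simplified one-curve model, not any of the three six-parameter models compared above): the proportion of "simultaneous" responses is fitted as a Gaussian function of audiovisual asynchrony, so that a shift of the fitted centre after adaptation can be read as temporal recalibration.

    # Minimal sketch (hypothetical data): fit a Gaussian-shaped simultaneity-
    # judgement curve and report its centre (point of subjective simultaneity).
    import numpy as np
    from scipy.optimize import curve_fit

    def sj_curve(soa_ms, amplitude, centre_ms, width_ms):
        return amplitude * np.exp(-((soa_ms - centre_ms) ** 2) / (2 * width_ms ** 2))

    # Hypothetical SOAs (negative = auditory leading) and response proportions.
    soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
    p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.15])

    (amplitude, centre_ms, width_ms), _ = curve_fit(sj_curve, soas, p_sync, p0=[1.0, 0.0, 100.0])
    print(f"PSS ~ {centre_ms:.1f} ms, window width ~ {width_ms:.1f} ms")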

  19. Audio-Visual Communications, A Tool for the Professional

    ERIC Educational Resources Information Center

    Journal of Environmental Health, 1976

    1976-01-01

    The manner in which the Cuyahoga County, Ohio Department of Environmental Health utilizes audio-visual presentations for communication with business and industry, professional public health agencies and the general public is presented. Subjects including food sanitation, radiation protection and safety are described. (BT)

  20. Audiovisual Market Place 1972-1973. A Multimedia Guide.

    ERIC Educational Resources Information Center

    1972

    The audiovisual (AV) field has been expanding rapidly, although in the last year or so there is evidence of a healthy slowing down in growth. This fourth edition of the guide to the AV industry represents an attempt to keep abreast of the information and to provide a single publication listing the many types of AV organizations and products which…

  1. Audiovisual Integration in Noise by Children and Adults

    ERIC Educational Resources Information Center

    Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G.; Innes-Brown, Hamish; Shivdasani, Mohit N.; Paolini, Antonio G.

    2010-01-01

    The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-,…

  2. Audio-Visual Training in Children with Reading Disabilities

    ERIC Educational Resources Information Center

    Magnan, Annie; Ecalle, Jean

    2006-01-01

    This study tested the effectiveness of audio-visual training in the discrimination of the phonetic feature of voicing on the recognition of written words by young children deemed to be at risk of dyslexia (experiment 1) as well as on dyslexic children's phonological skills (experiment 2). In addition, the third experiment studied the effectiveness of…

  3. Selected Bibliography and Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This guide to resource materials on environmental education is in two sections: 1) Selected Bibliography of Printed Materials, compiled in April, 1970; and, 2) Audio-Visual materials, Films and Filmstrips, compiled in February, 1971. 99 book annotations are given with an indicator of elementary, junior or senior high school levels. Other book…

  4. Neural Development of Networks for Audiovisual Speech Comprehension

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  5. Multinational Exchange Mechanisms of Educational Audio-Visual Materials. Appendixes.

    ERIC Educational Resources Information Center

    Center of Studies and Realizations for Permanent Education, Paris (France).

    These appendixes contain detailed information about the existing audiovisual material exchanges which served as the basis for the analysis contained in the companion report. Descriptions of the objectives, structure, financing and services of the following national and international organizations are included: (1) Educational Resources Information…

  6. Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech

    ERIC Educational Resources Information Center

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…

  7. Media Literacy and Audiovisual Languages: A Case Study from Belgium

    ERIC Educational Resources Information Center

    Van Bauwel, Sofie

    2008-01-01

    This article examines the use of media in the construction of a "new" language for children. We studied how children acquire and use media literacy skills through their engagement in an educational art project. This media literacy project is rooted in the realm of audiovisual media, within which children's sound and visual worlds are the focus of…

  8. Audiovisual Aids and Techniques in Managerial and Supervisory Training.

    ERIC Educational Resources Information Center

    Rigg, Robinson P.

    An attempt is made to show the importance of modern audiovisual (AV) aids and techniques to management training. The first two chapters give the background to the present situation facing the training specialist. Chapter III considers the AV aids themselves in four main groups: graphic materials, display equipment which involves projection, and…

  9. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  10. Audio-Visual Equipment Depreciation. RDU-75-07.

    ERIC Educational Resources Information Center

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…