Science.gov

Sample records for order audiovisual learning

  1. Vicarious Audiovisual Learning in Perfusion Education

    PubMed Central

    Rath, Thomas E.; Holt, David W.

    2010-01-01

    Abstract: Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. Because of patient safety concerns, it is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events. Although high-fidelity simulators offer exciting opportunities for future perfusion training, we explore a less costly, low-fidelity form of simulation instruction: vicarious audiovisual learning. Two low-fidelity modes of instruction were compared: a text description and a vicarious, first-person audiovisual production depicting the same content. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW standalone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today’s perfusion student. Mean test #1 scores for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19; 74.74%) (p < .05). The same was true for test #2, where video learners (n = 10) averaged 77% while text learners (n = 9) scored 60% (p < .05). Survey results indicated video learners were more satisfied with their learning module than text learners. Vicarious audiovisual learning modules may be an efficacious, low-cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important…
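
    As a rough illustration of the kind of comparison reported here (a two-sample test of mean scores), the Python sketch below runs a Welch two-sample t-test with SciPy. The score arrays are hypothetical placeholders, not the study's data, and the snippet is not the authors' analysis code.

        # Sketch of a two-group comparison of test scores (hypothetical data).
        from scipy import stats

        # Placeholder per-student scores on a 10-question test (percent correct).
        video_scores = [90, 80, 100, 90, 80, 90, 100, 90, 80, 90, 90, 80, 100, 90, 80, 90, 100, 90]  # n = 18
        text_scores = [70, 80, 60, 80, 70, 80, 70, 60, 80, 70, 80, 70, 80, 70, 80, 70, 90, 80, 70]    # n = 19

        # Welch's t-test does not assume equal variances in the two groups.
        t_stat, p_value = stats.ttest_ind(video_scores, text_scores, equal_var=False)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")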

  2. Audiovisuals.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents information on a variety of audiovisual materials from government and nongovernment sources. Topics include aerodynamics and conditions of flight, airports, navigation, careers, history, medical factors, weather, films for classroom use, and others. (Author/SA)

  3. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  4. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    PubMed

    Yamamoto, Shinya; Miyazaki, Makoto; Iwano, Takayuki; Kitazawa, Shigeru

    2012-01-01

    After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.
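
    As a schematic of the Bayesian-calibration idea invoked above (an illustration only, not the authors' fitted model), a perceived audiovisual lag can be written as a precision-weighted combination of the current sensory measurement and a prior built from recently experienced lags, which pushes judgments in the direction opposite to lag adaptation. All parameter values below are hypothetical.

        # Precision-weighted (Bayesian) combination of a sensed lag with a prior
        # learned from recent exposures.  Parameter values are hypothetical.
        def bayesian_lag_estimate(sensed_lag_ms, sensory_sd_ms, prior_mean_ms, prior_sd_ms):
            w_sense = 1.0 / sensory_sd_ms ** 2   # precision of the current measurement
            w_prior = 1.0 / prior_sd_ms ** 2     # precision of the exposure-based prior
            return (w_sense * sensed_lag_ms + w_prior * prior_mean_ms) / (w_sense + w_prior)

        # After exposure to sound-first pairs (prior mean = -100 ms, sound leading),
        # a physically simultaneous pair (0 ms) is estimated as slightly sound-first,
        # so the flash must lead for the pair to appear simultaneous ("light-first" shift).
        print(bayesian_lag_estimate(sensed_lag_ms=0.0, sensory_sd_ms=40.0,
                                    prior_mean_ms=-100.0, prior_sd_ms=80.0))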

  5. The Role of Audiovisual Mass Media News in Language Learning

    ERIC Educational Resources Information Center

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audiovisual mass media news in language learning. Two important issues in the selection and preparation of TV news for language learning are the content of the news and its linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  6. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    ERIC Educational Resources Information Center

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  7. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    ERIC Educational Resources Information Center

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  8. Effect on Intended and Incidental Learning from the Use of Learning Objectives with an Audiovisual Presentation.

    ERIC Educational Resources Information Center

    Main, Robert

    This paper reports a controlled field experiment conducted to determine the effects and interaction of five independent variables with an audiovisual slide-tape program: presence of learning objectives, location of learning objectives, type of knowledge, sex of learner, and retention of learning. Participants were university students in a general…

  9. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment, we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  10. Something for Everyone? An Evaluation of the Use of Audio-Visual Resources in Geographical Learning in the UK.

    ERIC Educational Resources Information Center

    McKendrick, John H.; Bowden, Annabel

    1999-01-01

    Reports from a survey of geographers that canvassed experiences using audio-visual resources to support teaching. Suggests that geographical learning has embraced audio-visual resources and that they are employed effectively. Concludes that integration of audio-visual resources into mainstream curriculum is essential to ensure effective and…

  11. Learning from Audio-Visual Media: The Open University Experience. IET Papers on Broadcasting No. 183.

    ERIC Educational Resources Information Center

    Bates, A. W.

    This paper describes how audiovisual media have influenced the way students have learned--or failed to learn--at the Open University at Walton Hall. The paper is based in part on results from a large body of research that has repeatedly demonstrated the interrelatedness of a wide range of factors in determining how or what students learn from…

  12. A Comparative Study of Organizational Characteristics Used in Learning Resources Centers and Traditionally Organized Library and Audio-Visual Service Facilities in Four Minnesota and Wisconsin Senior Colleges.

    ERIC Educational Resources Information Center

    Burlingame, Dwight Francis

    An investigation was made of the organizational characteristics of two college learning resource centers as compared with two traditionally organized college libraries with separate audiovisual units in order to determine the advantages of each organizational type. Interviews, observation, and examination of relevant documents were used to…

  13. Enhanced multisensory integration and motor reactivation after active motor learning of audiovisual associations.

    PubMed

    Butler, Andrew J; James, Thomas W; James, Karin Harman

    2011-11-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.

  14. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    ERIC Educational Resources Information Center

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  15. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    ERIC Educational Resources Information Center

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response on the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  16. Audiovisual synchrony perception for speech and music assessed using a temporal order judgment task.

    PubMed

    Vatakis, Argiro; Spence, Charles

    2006-01-23

    This study investigated people's sensitivity to audiovisual asynchrony in briefly-presented speech and musical videos. A series of speech (letters and syllables) and guitar and piano music (single and double notes) video clips were presented randomly at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which stream (auditory or visual) appeared to have been presented first. The accuracy of participants' TOJ performance (measured in terms of the just noticeable difference; JND) was significantly better for the speech than for either the guitar or piano music video clips, suggesting that people are more sensitive to asynchrony for speech than for music stimuli. The visual stream had to lead the auditory stream for the point of subjective simultaneity (PSS) to be achieved in the piano music clips while auditory leads were typically required for the guitar music clips. The PSS values obtained for the speech stimuli varied substantially as a function of the particular speech sound presented. These results provide the first empirical evidence regarding people's sensitivity to audiovisual asynchrony for musical stimuli. Our results also demonstrate that people's sensitivity to asynchrony in speech stimuli is better than has been suggested on the basis of previous research using continuous speech streams as stimuli.
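
    The JND and PSS reported above are conventionally extracted by fitting a cumulative Gaussian to the proportion of "vision first" responses as a function of SOA. The sketch below shows one common way of doing this in Python; the SOAs and response proportions are hypothetical, not data from the study.

        # Fit a cumulative Gaussian psychometric function to hypothetical TOJ data.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        soa_ms = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])      # negative: audio leads
        p_vision_first = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.98])

        def cum_gauss(soa, pss, sigma):
            return norm.cdf(soa, loc=pss, scale=sigma)

        (pss, sigma), _ = curve_fit(cum_gauss, soa_ms, p_vision_first, p0=[0.0, 100.0])
        jnd = sigma * norm.ppf(0.75)   # SOA change taking responses from 50% to 75%
        print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")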

  1. Impact of audio-visual storytelling in simulation learning experiences of undergraduate nursing students.

    PubMed

    Johnston, Sandra; Parker, Christina N; Fox, Amanda

    2017-09-01

    Use of high-fidelity simulation has become increasingly popular in nursing education to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introduction of the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine if viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised. Participants were a convenience sample of final-year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two-tailed, independent-group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. Statistically significant results (t=2.38, p<0.02) were evident in the subscale of transferability of learning from simulation to clinical practice. The subgroups of age and gender, although not significant, indicated some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism. There was a significant finding in relation to transferability of knowledge, and this…

  2. Interactive Learning of Spoken Words and Their Meanings Through an Audio-Visual Interface

    NASA Astrophysics Data System (ADS)

    Iwahashi, Naoto

    This paper presents a new interactive learning method for spoken word acquisition through human-machine audio-visual interfaces. During the course of learning, the machine makes a decision about whether an orally input word is a word in the lexicon the machine has learned, using both speech and visual cues. Learning is carried out on-line, incrementally, based on a combination of active and unsupervised learning principles. If the machine judges with a high degree of confidence that its decision is correct, it learns the statistical models of the word and a corresponding image category as its meaning in an unsupervised way. Otherwise, it asks the user a question in an active way. The function used to estimate the degree of confidence is also learned adaptively on-line. Experimental results show that the combination of active and unsupervised learning principles enables the machine and the user to adapt to each other, which makes the learning process more efficient.
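
    The decision rule described above (learn unsupervised when confidence is high, otherwise query the user) can be summarized in a few lines. The sketch below is a schematic reading of that loop; the confidence function, threshold value, and lexicon interface are hypothetical placeholders, not the paper's implementation.

        # Schematic confidence-gated loop combining unsupervised and active learning.
        def process_utterance(speech, image, lexicon, confidence_fn, ask_user_fn, threshold=0.9):
            word, conf = confidence_fn(speech, image, lexicon)  # best-matching word + confidence
            if conf >= threshold:
                # High confidence: update word and image-category models unsupervised.
                lexicon.update(word, speech, image)
            else:
                # Low confidence: ask the user (active learning) and learn from the answer.
                word = ask_user_fn(speech, image)
                lexicon.update(word, speech, image)
            return word, conf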

  3. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  4. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  5. Time-dependent changes in learning audiovisual associations: a single-trial fMRI study.

    PubMed

    Gonzalo, D; Shallice, T; Dolan, R

    2000-03-01

    Functional imaging studies of learning and memory have primarily focused on stimulus material presented within a single modality (see review by Gabrieli, 1998, Annu. Rev. Psychol. 49: 87-115). In the present study we investigated mechanisms for learning material presented in visual and auditory modalities, using single-trial functional magnetic resonance imaging. We evaluated time-dependent learning effects under two conditions involving presentation of consistent (repeatedly paired in the same combination) or inconsistent (items presented randomly paired) pairs. We also evaluated time-dependent changes for bimodal (auditory and visual) presentations relative to a condition in which auditory stimuli were repeatedly presented alone. Using a time by condition analysis to compare neural responses to consistent versus inconsistent audiovisual pairs, we found significant time-dependent learning effects in medial parietal and right dorsolateral prefrontal cortices. In contrast, time-dependent effects were seen in left angular gyrus, bilateral anterior cingulate gyrus, and occipital areas bilaterally. A comparison of paired (bimodal) versus unpaired (unimodal) conditions was associated with time-dependent changes in posterior hippocampal and superior frontal regions for both consistent and inconsistent pairs. The results provide evidence that associative learning for stimuli presented in different sensory modalities is supported by neural mechanisms similar to those described for other kinds of memory processes. The involvement of posterior hippocampus and superior frontal gyrus in bimodal learning for both consistent and inconsistent pairs supports a putative function for these regions in associative learning independent of sensory modality.

  6. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    PubMed

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes. © 2013 American Association of Anatomists.

  7. Cataloging audiovisual materials: a new dimension.

    PubMed Central

    Knotts, M A; Mueller, D

    1975-01-01

    A new more comprehensive system for cataloging audiovisual materials is described. Existing audiovisual cataloging systems contain mostly descriptive information, publishers' or producers' summaries, and order information. This paper discusses the addition of measurable learning objectives to this standard information, thereby enabling the potential user to determine what can be learned from a particular audiovisual unit. The project included media in nursing only. A committee of faculty and students from the University of Alabama in Birmingham School of Nursing reviewed the materials. The system was field-tested at nursing schools throughout Alabama; the schools offered four different types of programs. The system and its sample product, the AVLOC catalog, were also evaluated by medical librarians, media specialists, and other nursing instructors throughout the United States. PMID:50106

  8. Cataloging audiovisual materials: a new dimension.

    PubMed

    Knotts, M A; Mueller, D

    1975-07-01

    A new more comprehensive system for cataloging audiovisual materials is described. Existing audiovisual cataloging systems contain mostly descriptive information, publishers' or producers' summaries, and order information. This paper discusses the addition of measurable learning objectives to this standard information, thereby enabling the potential user to determine what can be learned from a particular audiovisual unit. The project included media in nursing only. A committee of faculty and students from the University of Alabama in Birmingham School of Nursing reviewed the materials. The system was field-tested at nursing schools throughout Alabama; the schools offered four different types of programs. The system and its sample product, the AVLOC catalog, were also evaluated by medical librarians, media specialists, and other nursing instructors throughout the United States.

  9. Online dissection audio-visual resources for human anatomy: Undergraduate medical students' usage and learning outcomes.

    PubMed

    Choi-Lundberg, Derek L; Cuellar, William A; Williams, Anne-Marie M

    2016-11-01

    In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection sessions, representing at most 58% ± 20 of assigned dissectors. Approximately 50% of students accessed all available DAVR by the end of semester, while 10% accessed none. Ninety percent of survey respondents (response rate 58%) generally agreed that DAVR improved their preparation for and learning from dissection when used. Of several learning resources, only DAVR usage had a significant positive correlation (P = 0.002) with feeling prepared for dissection. Results on cadaveric anatomy practical examination questions in year 2 (Y2) and year 3 (Y3) cohorts were 3.9% (P < 0.001, effect size d = -0.32) and 0.3% lower, respectively, with DAVR available compared to previous years. However, there were positive correlations between students' cadaveric anatomy question scores with the number and total time of DAVR viewed (Y2, r = 0.171, 0.090, P = 0.002, n.s., respectively; and Y3, r = 0.257, 0.253, both P < 0.001). Students accessing all DAVR scored 7.2% and 11.8% higher than those accessing none (Y2, P = 0.015, d = 0.48; and Y3, P = 0.005, d = 0.77, respectively). Further development and promotion of DAVR are needed to improve engagement and learning outcomes of more students. Anat Sci Educ 9: 545-554. © 2016 American Association of Anatomists.
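
    Two of the statistics reported above, Pearson's r between resource usage and exam scores and Cohen's d between groups, can be computed as in the sketch below. The arrays are hypothetical placeholders, not the study's data.

        # Pearson correlation and pooled-SD effect size on hypothetical data.
        import numpy as np
        from scipy import stats

        davr_minutes = np.array([0, 5, 10, 15, 20, 30, 40, 45, 60, 75])   # total DAVR viewing time
        exam_scores = np.array([55, 60, 58, 65, 63, 70, 72, 68, 78, 80])  # cadaveric anatomy scores (%)
        r, p = stats.pearsonr(davr_minutes, exam_scores)

        def cohens_d(a, b):
            na, nb = len(a), len(b)
            pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
            return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

        accessed_all = np.array([78, 82, 75, 80, 85, 79, 83])   # scores, accessed every DAVR
        accessed_none = np.array([70, 72, 68, 74, 71, 69, 73])  # scores, accessed no DAVR
        print(f"r = {r:.2f} (p = {p:.3f}), d = {cohens_d(accessed_all, accessed_none):.2f}")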

  10. Problem Order Implications for Learning

    ERIC Educational Resources Information Center

    Li, Nan; Cohen, William W.; Koedinger, Kenneth R.

    2013-01-01

    The order of problems presented to students is an important variable that affects learning effectiveness. Previous studies have shown that solving problems in a blocked order, in which all problems of one type are completed before the student is switched to the next problem type, results in less effective performance than does solving the problems…

  11. Use of High-Definition Audiovisual Technology in a Gross Anatomy Laboratory: Effect on Dental Students' Learning Outcomes and Satisfaction.

    PubMed

    Ahmad, Maha; Sleiman, Naama H; Thomas, Maureen; Kashani, Nahid; Ditmyer, Marcia M

    2016-02-01

    Laboratory cadaver dissection is essential for three-dimensional understanding of anatomical structures and variability, but there are many challenges to teaching gross anatomy in medical and dental schools, including a lack of available space and qualified anatomy faculty. The aim of this study was to determine the efficacy of high-definition audiovisual educational technology in the gross anatomy laboratory in improving dental students' learning outcomes and satisfaction. Exam scores were compared for two classes of first-year students at one U.S. dental school: 2012-13 (no audiovisual technology) and 2013-14 (audiovisual technology), and section exams were used to compare differences between semesters. Additionally, an online survey was used to assess the satisfaction of students who used the technology. All 284 first-year students in the two years (2012-13 N=144; 2013-14 N=140) participated in the exams. Of the 140 students in the 2013-14 class, 63 completed the survey (45% response rate). The results showed that those students who used the technology had higher scores on the laboratory exams than those who did not use it, and students in the winter semester scored higher (90.17±0.56) than in the fall semester (82.10±0.68). More than 87% of those surveyed strongly agreed or agreed that the audiovisual devices represented anatomical structures clearly in the gross anatomy laboratory. These students reported an improved experience in learning and understanding anatomical structures, found the laboratory to be less overwhelming, and said they were better able to follow dissection instructions and understand details of anatomical structures with the new technology. Based on these results, the study concluded that the ability to provide the students a clear view of anatomical structures and high-quality imaging had improved their learning experience.

  12. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    PubMed

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices in LLI children after training.

  13. The Audio-Visual Man.

    ERIC Educational Resources Information Center

    Babin, Pierre, Ed.

    A series of twelve essays discuss the use of audiovisuals in religious education. The essays are divided into three sections: one which draws on the ideas of Marshall McLuhan and other educators to explore the newest ideas about audiovisual language and faith, one that describes how to learn and use the new language of audio and visual images, and…

  14. The Planning and Management of Audio-Visual Media in Distance Learning Institutions. Final Report of an IIEP Workshop (Paris, France, September 30-October 3, 1980).

    ERIC Educational Resources Information Center

    Bates, A. W.

    Resulting from a 1980 workshop and a survey of 12 selected distance learning systems (or correspondence study programs), this paper had four aims: (1) to provide a framework to describe distance learning systems using audiovisual media and to locate the 12 surveyed institutions within that framework, (2) to identify common problem areas in the…

  15. Audiovisual Script Writing.

    ERIC Educational Resources Information Center

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  16. The Impact of Audiovisual Feedback on the Learning Outcomes of a Remote and Virtual Laboratory Class

    ERIC Educational Resources Information Center

    Lindsay, E.; Good, M.

    2009-01-01

    Remote and virtual laboratory classes are an increasingly prevalent alternative to traditional hands-on laboratory experiences. One of the key issues with these modes of access is the provision of adequate audiovisual (AV) feedback to the user, which can be a complicated and resource-intensive challenge. This paper reports on a comparison of two…

  17. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiologist, 1976

    1976-01-01

    Reviewed is an eight module course in respiratory physiology that utilizes audiovisual cassettes and tapes. The topics include the lung, ventilation, blood flow, and breathing. It is rated excellent in content and quality. (SL)

  18. Lecture Hall and Learning Design: A Survey of Variables, Parameters, Criteria and Interrelationships for Audio-Visual Presentation Systems and Audience Reception.

    ERIC Educational Resources Information Center

    Justin, J. Karl

    Variables and parameters affecting architectural planning and audiovisual systems selection for lecture halls and other learning spaces are surveyed. Interrelationships of factors are discussed, including--(1) design requirements for modern educational techniques as differentiated from cinema, theater or auditorium design, (2) general hall…

  1. Adult Learning Strategies and Approaches (ALSA). Resources for Teachers of Adults. A Handbook of Practical Advice on Audio-Visual Aids and Educational Technology for Tutors and Organisers.

    ERIC Educational Resources Information Center

    Cummins, John; And Others

    This handbook is part of a British series of publications written for part-time tutors, volunteers, organizers, and trainers in the adult continuing education and training sectors. It offers practical advice on audiovisual aids and educational technology for tutors and organizers. The first chapter discusses how one learns. Chapter 2 addresses how…

  2. Audiovisual Materials.

    ERIC Educational Resources Information Center

    American Council on Education, Washington, DC. HEATH/Closer Look Resource Center.

    The fact sheet presents a suggested evaluation framework for use in previewing audiovisual materials, a list of selected resources, and an annotated list of films which were shown at the AHSSPPE '83 Media Fair as part of the national conference of the Association on Handicapped Student Service Programs in Postsecondary Education. Evaluation…

  3. Audiovisual Review

    ERIC Educational Resources Information Center

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  4. Ordering a Lifeline to Learning.

    ERIC Educational Resources Information Center

    Gross, Bernard F.; Gross, Virginia T.

    1984-01-01

    Provides a chart to organize information needed in ordering science materials/equipment. Includes a checklist for such ordering considerations as approvals required, request route, funding (how paid), descriptions required, criteria related to "best buy," and others. (JM)

  5. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  6. Audiovisual Interaction

    NASA Astrophysics Data System (ADS)

    Möttönen, Riikka; Sams, Mikko

    Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.

  7. Enhanced Multisensory Integration and Motor Reactivation after Active Motor Learning of Audiovisual Associations

    ERIC Educational Resources Information Center

    Butler, Andrew J.; James, Thomas W.; James, Karin Harman

    2011-01-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent…

  8. Aspects of Audio-Visual Training in a College of Education, with Special Reference to Radio Learning and Teaching

    ERIC Educational Resources Information Center

    Spires, Norman S.

    1974-01-01

    Article comments on the present needs of teachers in training where audiovisual matters, including radio broadcasting, are concerned and outlines the way in which such training takes place at Southlands College and the objectives sought. (Author)

  9. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults

    PubMed Central

    Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC nonsense words and nonsense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We…

  10. Learning with Hyperlinked Videos--Design Criteria and Efficient Strategies for Using Audiovisual Hypermedia

    ERIC Educational Resources Information Center

    Zahn, Carmen; Barquero, Beatriz; Schwan, Stephan

    2004-01-01

    In this article, we discuss the results of an experiment in which we studied two apparently conflicting classes of design principles for instructional hypervideos: (1) those principles derived from work on multimedia learning that emphasize spatio-temporal contiguity and (2) those originating from work on hypermedia learning that favour…

  11. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
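
    For context, the sketch below runs a standard metric MDS embedding, the baseline that the order-preserving method described above generalizes by constraining only the rank order of distances. It is an illustration of the surrounding framework, not the authors' algorithm, and the Swiss-roll data are synthetic.

        # Metric MDS baseline on synthetic data (the framework extended by the
        # order-preserving method; this is not the authors' algorithm).
        from sklearn.datasets import make_swiss_roll
        from sklearn.manifold import MDS

        X, _ = make_swiss_roll(n_samples=300, random_state=0)

        # Metric MDS minimizes stress between high- and low-dimensional distances;
        # the proposed method instead learns a non-decreasing (RBF-approximated)
        # relation that preserves only the order of the distances.
        embedding = MDS(n_components=2, random_state=0)
        Y = embedding.fit_transform(X)
        print(Y.shape)   # (300, 2) low-dimensional coordinates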

  12. Effects of Audiovisual Stimuli on Learning through Microcomputer-Based Class Presentation.

    ERIC Educational Resources Information Center

    Hativa, Nira; Reingold, Aliza

    1987-01-01

    Effectiveness of two versions of computer software used as an electronic blackboard to present geometric concepts to ninth grade students was compared. The experimental version incorporated color, animation, and nonverbal sounds as stimuli; the no-stimulus version was monochrome. Both immediate and delayed learning were significantly better for…

  13. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that the mechanisms of audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation…

  14. Making and Using Audiovisuals.

    ERIC Educational Resources Information Center

    Kernan, Margaret; And Others

    1991-01-01

    Includes nine articles that discuss audiovisuals in junior and senior high school libraries. Highlights include skills that various media require and foster; teaching students how to make effective audiovisuals; film production; state media contests; library orientation videos; slide-tape shows; photographic skills; and the use of audiovisuals to…

  15. Audiovisual Speech Recalibration in Children

    ERIC Educational Resources Information Center

    van Linden, Sabine; Vroomen, Jean

    2008-01-01

    In order to examine whether children adjust their phonetic speech categories, children of two age groups, five-year-olds and eight-year-olds, were exposed to a video of a face saying /aba/ or /ada/ accompanied by an auditory ambiguous speech sound halfway between /b/ and /d/. The effect of exposure to these audiovisual stimuli was measured on…

  16. Role of Audio and Audio-Visual Materials in Enhancing the Learning Process of Health Science Personnel.

    ERIC Educational Resources Information Center

    Cooper, William

    The material presented here is the result of a review of the Technical Development Plan of the National Library of Medicine, made with the object of describing the role of audiovisual materials in medical education, research and service, and particularly in the continuing education of physicians and allied health personnel. A historical background…

  1. Virtual Attendance: Analysis of an Audiovisual over IP System for Distance Learning in the Spanish Open University (UNED)

    ERIC Educational Resources Information Center

    Vazquez-Cano, Esteban; Fombona, Javier; Fernandez, Alberto

    2013-01-01

    This article analyzes a system of virtual attendance, called "AVIP" (AudioVisual over Internet Protocol), at the Spanish Open University (UNED) in Spain. UNED, the largest open university in Europe, is the pioneer in distance education in Spain. It currently has more than 300,000 students, 1,300 teachers, and 6,000 tutors all over the…

  2. Learning biases predict a word order universal.

    PubMed

    Culbertson, Jennifer; Smolensky, Paul; Legendre, Géraldine

    2012-03-01

    How recurrent typological patterns, or universals, emerge from the extensive diversity found across the world's languages constitutes a central question for linguistics and cognitive science. Recent challenges to a fundamental assumption of generative linguistics-that universal properties of the human language acquisition faculty constrain the types of grammatical systems which can occur-suggest the need for new types of empirical evidence connecting typology to biases of learners. Using an artificial language learning paradigm in which adult subjects are exposed to a mix of grammatical systems (similar to a period of linguistic change), we show that learners' biases mirror a word-order universal, first proposed by Joseph Greenberg, which constrains typological patterns of adjective, numeral, and noun ordering. We briefly summarize the results of a probabilistic model of the hypothesized biases and their effect on learning, and discuss the broader implications of the results for current theories of the origins of cross-linguistic word-order preferences.

  3. The Audiovisual Portfolio.

    ERIC Educational Resources Information Center

    Williams, Eugene

    1979-01-01

    Describes the development of an audiovisual portfolio, consisting of a student teaching notebook, slide narrative presentation, audiotapes, and a videotape--valuable for prospective teachers in job interviews. (CMV)

  4. Use of Audiovisual Texts in University Education Process

    ERIC Educational Resources Information Center

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities for developing students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses features of the use of audiovisual media texts across a range of social sciences and humanities subjects in the university curriculum.

  5. Principles of Managing Audiovisual Materials and Equipment. Second Revised Edition.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Biomedical Library.

    This manual offers information on a wide variety of health-related audiovisual materials (AVs) in many formats: video, motion picture, slide, filmstrip, audiocassette, transparencies, microfilm, and computer assisted instruction. Intended for individuals who are just learning about audiovisual materials and equipment management, the manual covers…

  6. Application and Operation of Audiovisual Equipment in Education.

    ERIC Educational Resources Information Center

    Pula, Fred John

    Interest in audiovisual aids in education has been increased by the shortage of classrooms and good teachers and by the modern predisposition toward learning by visual concepts. Effective utilization of audiovisual materials and equipment depends, most importantly, on adequate preparation of the teacher in operating equipment and in coordinating…

  9. Audiovisual Mass Media and Education. TTW 27/28.

    ERIC Educational Resources Information Center

    van Stapele, Peter, Ed.; Sutton, Clifford C., Ed.

    1989-01-01

    The 15 articles in this special issue focus on learning about the audiovisual mass media and education, especially television and film, in relation to various pedagogical and didactical questions. Individual articles are: (1) "Audiovisual Mass Media for Education in Pakistan: Problems and Prospects" (Ahmed Noor Kahn); (2) "The Role of the…

  10. Implicit learning of fifth- and sixth-order sequential probabilities.

    PubMed

    Remillard, Gilbert

    2010-10-01

    Serial reaction time (SRT) task studies have established that people can implicitly learn sequential contingencies as complex as fourth-order probabilities. The present study examined people's ability to learn fifth-order (Experiment 1) and sixth-order (Experiment 2) probabilities. Remarkably, people learned fifth- and sixth-order probabilities. This suggests that the implicit sequence learning mechanism can operate over a range of at least seven sequence elements.
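
    To make the notion of an nth-order sequential contingency concrete, here is a toy sketch (not the experiments' actual SRT sequences): it generates a sequence whose next location is predictable only from the preceding five elements and shows that a simple n-gram learner needs fifth-order context to exploit the regularity; the four locations and the 80% contingency are arbitrary choices.

```python
import random
from collections import defaultdict, Counter

# Toy illustration (not the actual SRT sequences): the next location depends only
# on the preceding five elements, so only a fifth-order learner can exploit it.
random.seed(0)
locations = [0, 1, 2, 3]
rule = {}                                   # hidden fifth-order contingency

def next_item(history):
    ctx = tuple(history[-5:])
    if ctx not in rule:
        rule[ctx] = random.choice(locations)
    # Follow the fifth-order rule 80% of the time, otherwise pick at random.
    return rule[ctx] if random.random() < 0.8 else random.choice(locations)

seq = [random.choice(locations) for _ in range(5)]
for _ in range(20000):
    seq.append(next_item(seq))

def ngram_accuracy(order):
    """Predict each item as the most frequent continuation of its n-gram context so far."""
    counts, correct, total = defaultdict(Counter), 0, 0
    for t in range(order, len(seq)):
        ctx = tuple(seq[t - order:t])
        if counts[ctx]:
            correct += counts[ctx].most_common(1)[0][0] == seq[t]
            total += 1
        counts[ctx][seq[t]] += 1
    return correct / total

for order in (1, 3, 5):
    print(f"order-{order} context: prediction accuracy {ngram_accuracy(order):.2f}")
```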

  11. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    ERIC Educational Resources Information Center

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  12. Audiovisual Equipment Self Instruction Manual. Third Edition.

    ERIC Educational Resources Information Center

    Oates, Stanton C.

    An audiovisual equipment manual provides both the means of learning how to operate equipment and information needed to adjust equipment that is not performing properly. The manual covers the basic principles of operation for filmstrip-slide projectors, motion picture projectors, opaque projectors, overhead projectors, portable screens, record…

  13. Promoting Higher Order Thinking Skills Using Inquiry-Based Learning

    ERIC Educational Resources Information Center

    Madhuri, G. V.; Kantamreddi, V. S. S. N; Prakash Goteti, L. N. S.

    2012-01-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in…

  15. Variable Affix Order: Grammar and Learning

    ERIC Educational Resources Information Center

    Ryan, Kevin M.

    2010-01-01

    While affix ordering often reflects general syntactic or semantic principles, it can also be arbitrary or variable. This article develops a theory of morpheme ordering based on local morphotactic restrictions encoded as weighted bigram constraints. I examine the formal properties of morphotactic systems, including arbitrariness, nontransitivity,…
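
    The following is a schematic sketch of ordering by weighted bigram constraints, in a MaxEnt-style setup with hypothetical suffix labels and weights rather than the article's actual grammar or constraint inventory.

```python
import itertools
import math
import random

# A MaxEnt-style toy (hypothetical suffixes and weights, not the article's grammar):
# each candidate suffix order is scored by the summed weights of its adjacent
# morpheme bigrams, and orders are produced in proportion to exp(score), which
# yields variable but biased affix ordering.
random.seed(0)
suffixes = ['CAUS', 'APPL', 'PASS']
bigram_weights = {
    ('CAUS', 'APPL'): 2.0, ('APPL', 'CAUS'): 0.5,
    ('APPL', 'PASS'): 1.5, ('PASS', 'APPL'): 0.2,
    ('CAUS', 'PASS'): 1.0, ('PASS', 'CAUS'): 0.3,
}

def harmony(order):
    return sum(bigram_weights[pair] for pair in zip(order, order[1:]))

candidates = list(itertools.permutations(suffixes))
scores = [math.exp(harmony(c)) for c in candidates]
total = sum(scores)
for cand, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print('-'.join(cand), f"p = {score / total:.2f}")

# Sampling from this distribution (rather than always taking the argmax) is one
# way to model attested variability in affix order.
print("sampled order:", '-'.join(random.choices(candidates, weights=scores)[0]))
```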

  16. Time and Order Effects on Causal Learning

    ERIC Educational Resources Information Center

    Alvarado, Angelica; Jara, Elvia; Vila, Javier; Rosas, Juan M.

    2006-01-01

    Five experiments were conducted to explore trial order and retention interval effects upon causal predictive judgments. Experiment 1 found that participants show a strong effect of trial order when a stimulus was sequentially paired with two different outcomes compared to a condition where both outcomes were presented intermixed. Experiment 2…

  18. Audio/Visual Ratios in Commercial Filmstrips.

    ERIC Educational Resources Information Center

    Gulliford, Nancy L.

    Developed by the Westinghouse Electric Corporation, Video Audio Compressed (VIDAC) is a compressed time, variable rate, still picture television system. This technology made it possible for a centralized library of audiovisual materials to be transmitted over a television channel in very short periods of time. In order to establish specifications…

  19. An audiovisual emotion recognition system

    NASA Astrophysics Data System (ADS)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-symbols; speech and facial expression are two of them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time use and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected when speech and video are fused, exploiting their synchronization. The experimental results demonstrate that the system performs well in real-time use and achieves a high recognition rate. Our results also suggest that multimodule fused recognition will become the trend in emotion recognition in the future.
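
    As a rough feel for feature-level audiovisual fusion, the sketch below compares audio-only, visual-only, and fused classifiers on synthetic data; it does not reproduce the rough set-based feature selection or the real-time pipeline described above, and only the feature counts (13 speech, 10 facial) are borrowed from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Schematic feature-level fusion on synthetic data (illustrative only).
rng = np.random.default_rng(9)
n = 600
emotion = rng.integers(0, 2, n)                                   # two emotion classes
audio = rng.standard_normal((n, 13)) + 0.4 * emotion[:, None]     # 13 speech features
video = rng.standard_normal((n, 10)) + 0.4 * emotion[:, None]     # 10 facial features
fused = np.hstack([audio, video])                                 # synchronised audiovisual vector

for name, X in (("audio-only", audio), ("visual-only", video), ("audiovisual", fused)):
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, emotion, cv=5).mean()
    print(f"{name:11s} cross-validated accuracy: {acc:.2f}")
```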

  20. [Appraisal of Audiovisual Materials].

    ERIC Educational Resources Information Center

    Johnson, Steve

    This document consists of four separate handouts all related to the appraisal of audiovisual (AV) materials: "How to Work with an Appraiser of AV Media: A Convenient Check List for Clients and Their Advisors," helps a client prepare for an appraisal, explaining what is necessary before the appraisal, the appraisal process and its costs,…

  1. Utilizing New Audiovisual Resources

    ERIC Educational Resources Information Center

    Miller, Glen

    1975-01-01

    The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids to classroom instruction at the high school level in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)

  3. Selected Mental Health Audiovisuals.

    ERIC Educational Resources Information Center

    National Inst. of Mental Health (DHEW), Rockville, MD.

    Presented are approximately 2,300 abstracts on audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of the film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…

  4. AUDIOVISUAL EQUIPMENT STANDARDS.

    ERIC Educational Resources Information Center

    PATTERSON, PIERCE E.; AND OTHERS

    RECOMMENDED STANDARDS FOR AUDIOVISUAL EQUIPMENT WERE PRESENTED SEPARATELY FOR GRADES KINDERGARTEN THROUGH SIX, AND FOR JUNIOR AND SENIOR HIGH SCHOOLS. THE ELEMENTARY SCHOOL EQUIPMENT CONSIDERED WAS THE FOLLOWING--CLASSROOM LIGHT CONTROL, MOTION PICTURE PROJECTOR WITH MOBILE STAND AND SPARE REELS, COMBINATION 2 INCH X 2 INCH SLIDE AND FILMSTRIP…

  5. AUDIOVISUAL SERVICES CATALOG.

    ERIC Educational Resources Information Center

    Stockton Unified School District, CA.

    A CATALOG HAS BEEN PREPARED TO HELP TEACHERS SELECT AUDIOVISUAL MATERIALS WHICH MIGHT BE HELPFUL IN ELEMENTARY CLASSROOMS. INCLUDED ARE FILMSTRIPS, SLIDES, RECORDS, STUDY PRINTS, FILMS, TAPE RECORDINGS, AND SCIENCE EQUIPMENT. TEACHERS ARE REMINDED THAT THEY ARE NOT LIMITED TO USE OF THE SUGGESTED MATERIALS. APPROPRIATE GRADE LEVELS HAVE BEEN…

  6. Audiovisuals in Mental Health.

    ERIC Educational Resources Information Center

    Kenney, Brigitte L.

    1982-01-01

    Describes major uses of film, television, and video in mental health field and discusses problems in selection, acquisition, cataloging, indexing, storage, transfer, care of tapes, patients' rights, and copyright. A sample patient consent form for media recording, borrower's evaluation sheet, sources of audiovisuals and reviews, and 35 references…

  7. Rapid, generalized adaptation to asynchronous audiovisual speech

    PubMed Central

    Van der Burg, Erik; Goodbourn, Patrick T.

    2015-01-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790

  8. Rapid, generalized adaptation to asynchronous audiovisual speech.

    PubMed

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
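
    The inter-trial analysis can be pictured with a small simulation (hypothetical numbers, not the study's data): estimate the point of subjective simultaneity separately for trials preceded by an auditory-leading versus a visual-leading trial.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the inter-trial analysis idea on simulated data (not the study's):
# estimate the point of subjective simultaneity (PSS) separately for trials
# preceded by an auditory-leading vs. a visual-leading trial.
rng = np.random.default_rng(3)
soas = np.linspace(-300, 300, 13)              # voice lead (-) to voice lag (+), ms

def p_sync(soa, pss, width):
    return np.exp(-0.5 * ((soa - pss) / width) ** 2)

trials, prev_lead = [], 'A'
for _ in range(2000):
    soa = rng.choice(soas)
    pss_true = -20.0 if prev_lead == 'A' else 20.0   # assumed rapid recalibration
    resp = rng.random() < p_sync(soa, pss_true, 120.0)
    trials.append((prev_lead, soa, resp))
    prev_lead = 'A' if soa < 0 else 'V'              # modality order of this trial

for lead in ('A', 'V'):
    sub = [(s, r) for p, s, r in trials if p == lead]
    props = [np.mean([r for s, r in sub if s == soa]) for soa in soas]
    (pss, width), _ = curve_fit(p_sync, soas, props, p0=(0.0, 100.0))
    print(f"previous trial {lead}-leading: estimated PSS = {pss:+.1f} ms")
```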

  9. Mobilising Concepts: Intellectual Technologies in the Ordering of Learning Societies

    ERIC Educational Resources Information Center

    Edwards, Richard

    2004-01-01

    Lifelong learning and a learning society are important planks of European Union (EU) policy. Drawing upon the work of Foucault and Rose, this article examines some of the intellectual technologies that are deployed in the ordering of these policy goals. It argues that research is one such technology and examines EU Framework Projects to explore…

  10. Order or Disorder? Impaired Hebb Learning in Dyslexia

    ERIC Educational Resources Information Center

    Szmalec, Arnaud; Loncke, Maaike; Page, Mike P. A.; Duyck, Wouter

    2011-01-01

    The present study offers an integrative account proposing that dyslexia and its various associated cognitive impairments reflect an underlying deficit in the long-term learning of serial-order information, here operationalized as Hebb repetition learning. In nondyslexic individuals, improved immediate serial recall is typically observed when one…

  13. Researching Embodied Learning by Using Videographic Participation for Data Collection and Audiovisual Narratives for Dissemination--Illustrated by the Encounter between Two Acrobats

    ERIC Educational Resources Information Center

    Degerbøl, Stine; Nielsen, Charlotte Svendler

    2015-01-01

    The article concerns doing ethnography in education and it reflects upon using "videographic participation" for data collection and the concept of "audiovisual narratives" for dissemination, which is inspired by the idea of developing academic video. The article takes a narrative approach to qualitative research and presents a…

  15. Can Audio-Visual Media Teach Children Mental Skills?

    ERIC Educational Resources Information Center

    Rovet, Joanne F.

    A study of 128 third graders was conducted to determine whether audiovisual media facilitate cognitive development, by comparing the effects of learning a mental skill from a filmed demonstration of that skill with the effects of learning from more active kinds of experience. The mental skill was the ability to transform mental images by rotating them…

  17. Audio-Visual Aids for Pre-School and Primary School Children. A Training Document. Aids to Programming UNICEF Assistance to Education.

    ERIC Educational Resources Information Center

    Narayan, Shankar

    This discussion of the importance and scope of audiovisual aids in the educational programs and activities designed for children in developing countries includes the significance of audiovisual aids in pre-school and primary school education, types of audiovisual aids, learning from pictures, creative art materials, play materials, and problems…

  18. Evaluating an Experimental Audio-Visual Module Programmed to Teach a Basic Anatomical and Physiological System.

    ERIC Educational Resources Information Center

    Federico, Pat-Anthony

    The learning efficiency and effectiveness of teaching an anatomical and physiological system to Air Force enlisted trainees utilizing an experimental audiovisual programed module was compared to that of a commercial linear programed text. It was demonstrated that the audiovisual programed approach to training was more efficient than and equally as…

  19. Rapid temporal recalibration is unique to audiovisual stimuli.

    PubMed

    Van der Burg, Erik; Orchard-Mills, Emily; Alais, David

    2015-01-01

    Following prolonged exposure to asynchronous multisensory signals, the brain adapts to reduce the perceived asynchrony. Here, in three separate experiments, participants performed a synchrony judgment task on audiovisual, audiotactile or visuotactile stimuli and we used inter-trial analyses to examine whether temporal recalibration occurs rapidly on the basis of a single asynchronous trial. Even though all combinations used the same subjects, task and design, temporal recalibration occurred for audiovisual stimuli (i.e., the point of subjective simultaneity depended on the preceding trial's modality order), but none occurred when the same auditory or visual event was combined with a tactile event. Contrary to findings from prolonged adaptation studies showing recalibration for all three combinations, we show that rapid, inter-trial recalibration is unique to audiovisual stimuli. We conclude that recalibration occurs at two different timescales for audiovisual stimuli (fast and slow), but only on a slow timescale for audiotactile and visuotactile stimuli.

  20. Learning in higher order Boltzmann machines using linear response.

    PubMed

    Leisink, M A; Kappen, H J

    2000-04-01

    We introduce an efficient method for learning and inference in higher order Boltzmann machines. The method is based on mean field theory with the linear response correction. We compute the correlations using the exact and the approximated method for a fully connected third order network of ten neurons. In addition, we compare the results of the exact and approximate learning algorithm. Finally we use the presented method to solve the shifter problem. We conclude that the linear response approximation gives good results as long as the couplings are not too large.
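
    The following is an illustrative reconstruction of the mean-field plus linear-response idea for a third-order network of ten units, with random couplings rather than the paper's parameters; in this approximation the third-order terms enter as an effective pairwise coupling J_ij = w_ij + sum_k w_ijk m_k evaluated at the mean-field solution.

```python
import numpy as np

# Illustrative reconstruction of the mean-field / linear-response idea for a
# fully connected third-order network of ten +/-1 units (random couplings and
# biases; not the paper's parameters or code).
rng = np.random.default_rng(0)
n = 10
theta = 0.1 * rng.standard_normal(n)             # biases
W2 = 0.2 * rng.standard_normal((n, n))           # pairwise couplings, symmetrised
W2 = (W2 + W2.T) / 2
np.fill_diagonal(W2, 0.0)
W3 = 0.05 * rng.standard_normal((n, n, n))       # third-order couplings, symmetrised
W3 = sum(W3.transpose(p) for p in
         [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6
for i in range(n):                               # no self-couplings
    W3[i, i, :] = W3[i, :, i] = W3[:, i, i] = 0.0

# Mean-field fixed point: m_i = tanh(theta_i + sum_j W2_ij m_j + 0.5 sum_jk W3_ijk m_j m_k)
m = np.zeros(n)
for _ in range(500):
    field = theta + W2 @ m + 0.5 * np.einsum('ijk,j,k->i', W3, m, m)
    m = 0.5 * m + 0.5 * np.tanh(field)           # damped updates for stability

# Linear response correction: effective pairwise couplings at the mean-field
# solution, J_ij = W2_ij + sum_k W3_ijk m_k, give connected correlations
# C = (diag(1/(1-m^2)) - J)^(-1).
J = W2 + np.einsum('ijk,k->ij', W3, m)
C = np.linalg.inv(np.diag(1.0 / (1.0 - m ** 2)) - J)
print("mean-field magnetisations:", np.round(m, 3))
print("linear-response correlations, first row:", np.round(C[0], 3))
```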

  1. Conditional High-Order Boltzmann Machines for Supervised Relation Learning.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu

    2017-09-01

    Relation learning is a fundamental problem in many vision tasks. Recently, the high-order Boltzmann machine and its variants have shown great potential in learning various types of data relations in a range of tasks. However, most of these models are learned in an unsupervised way, i.e., without using relation class labels, and are therefore not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal of performing supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann machine (CHBM), which learns to classify the data relation as a binary classification problem. To handle more complex data relations, we develop two improved variants of the CHBM: 1) the latent CHBM, which jointly performs relation feature learning and classification by using a set of latent variables to block the pathway from pairwise input samples to output relation labels, and 2) the gated CHBM, which untangles factors of variation in data relations by exploiting a set of latent variables to multiplicatively gate the classification of the CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize the high-order parameter tensors into multiple matrices. We then develop efficient supervised learning algorithms by first pretraining the models using the joint likelihood to provide good parameter initialization, and then fine-tuning them using the conditional likelihood to enhance discriminative ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve performance.
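
    As a purely schematic illustration of the tensor factorisation mentioned above (hypothetical dimensions, random parameters, no training loop), the snippet below scores the compatibility of two inputs with a relation label through factorised three-way multiplicative interactions.

```python
import numpy as np

# Schematic sketch of a factorised high-order interaction (not the CHBM code):
# the three-way tensor coupling two inputs x, y and a relation label r is
# approximated by factor matrices P, Q, R, so that
# score(x, y, r) = sum_f (x . P[:, f]) * (y . Q[:, f]) * (r . R[:, f]).
rng = np.random.default_rng(5)
d, n_relations, n_factors = 16, 2, 8
P = rng.standard_normal((d, n_factors))
Q = rng.standard_normal((d, n_factors))
R = rng.standard_normal((n_relations, n_factors))

def score(x, y, relation):
    r = np.eye(n_relations)[relation]            # one-hot relation label
    return float(np.sum((x @ P) * (y @ Q) * (r @ R)))

x, y = rng.standard_normal(d), rng.standard_normal(d)
print("score under relation 0:", round(score(x, y, 0), 3))
print("score under relation 1:", round(score(x, y, 1), 3))
# In the actual model the factors would be learned (e.g., pretrained on the joint
# likelihood and fine-tuned on the conditional likelihood), not drawn at random.
```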

  2. Audiovisual Materials for Teaching Economics.

    ERIC Educational Resources Information Center

    Kronish, Sidney J.

    The Audiovisual Materials Evaluation Committee prepared this report to guide elementary and secondary teachers in their selection of supplementary economic education audiovisual materials. It updates a 1969 publication by adding 107 items to the original guide. Materials included in this report: (1) contain elements of economic analysis--facts,…

  3. Improving physician practice efficiency by learning lab test ordering pattern.

    PubMed

    Cai, Peng; Cao, Feng; Ni, Yuan; Shen, Weijia; Zheng, Tao

    2013-01-01

    The system of electronic medical records (EMR) has been widely used in physician practice. In China, physicians are under time pressure to provide care to many patients in a short period, so improving practice efficiency is a promising way to mitigate this predicament. During an encounter, ordering lab tests is one of the most frequent actions in an EMR system. In this paper, our motivation is to save physicians' time by providing a lab test ordering list to facilitate physician practice. To this end, we developed a weight-based multi-label classification framework that learns to order lab tests for the current encounter according to the historical EMR. In particular, we propose to learn physician-specific lab test ordering patterns, as different physicians may have different practice behavior on the same population. Experimental results on a real data set demonstrate that physician-specific models can outperform the baseline.
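
    A minimal sketch of the general framing, on synthetic data: each lab test becomes one binary label and a multi-label classifier suggests tests for a new encounter. The paper's weighting scheme and EMR features are not reproduced; the variable names and sizes below are invented, and fitting one such model per physician would be one simple way to capture physician-specific ordering patterns.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Illustrative multi-label sketch on synthetic data (not the paper's method).
rng = np.random.default_rng(42)
n_encounters, n_features, n_tests = 500, 20, 5
X = rng.standard_normal((n_encounters, n_features))               # encounter features
true_w = rng.standard_normal((n_features, n_tests))
Y = (X @ true_w + rng.standard_normal((n_encounters, n_tests)) > 0).astype(int)

model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
model.fit(X[:400], Y[:400])                                       # historical encounters
suggested = model.predict(X[400:])                                # candidate ordering lists
print("per-encounter test suggestions (first 3 rows):")
print(suggested[:3])
```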

  4. Temporal difference models describe higher-order learning in humans.

    PubMed

    Seymour, Ben; O'Doherty, John P; Dayan, Peter; Koltzenburg, Martin; Jones, Anthony K; Dolan, Raymond J; Friston, Karl J; Frackowiak, Richard S

    2004-06-10

    The ability to use environmental stimuli to predict impending harm is critical for survival. Such predictions should be available as early as they are reliable. In pavlovian conditioning, chains of successively earlier predictors are studied in terms of higher-order relationships, and have inspired computational theories such as temporal difference learning. However, there is at present no adequate neurobiological account of how this learning occurs. Here, in a functional magnetic resonance imaging (fMRI) study of higher-order aversive conditioning, we describe a key computational strategy that humans use to learn predictions about pain. We show that neural activity in the ventral striatum and the anterior insula displays a marked correspondence to the signals for sequential learning predicted by temporal difference models. This result reveals a flexible aversive learning process ideally suited to the changing and uncertain nature of real-world environments. Taken with existing data on reward learning, our results suggest a critical role for the ventral striatum in integrating complex appetitive and aversive predictions to coordinate behaviour.
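
    A generic TD(0) sketch, assuming a simple chain of predictors rather than the study's fMRI analysis model, shows how the prediction-error signal propagates to successively earlier predictors, which is the computational core of higher-order conditioning.

```python
# Generic TD(0) sketch (not the study's model): value propagates backwards along a
# chain CS2 -> CS1 -> aversive outcome, so that successively earlier predictors
# come to signal the outcome, as in higher-order conditioning.
states = ["CS2", "CS1", "outcome"]
V = {s: 0.0 for s in states}                      # learned predictions
reinforcement = {"CS2": 0.0, "CS1": 0.0, "outcome": 1.0}
alpha, gamma = 0.1, 1.0                           # learning rate, discount factor

for trial in range(100):
    for i, s in enumerate(states):
        v_next = V[states[i + 1]] if i + 1 < len(states) else 0.0
        delta = reinforcement[s] + gamma * v_next - V[s]   # prediction error
        V[s] += alpha * delta

print({s: round(v, 2) for s, v in V.items()})     # CS1, then CS2, approach 1.0
```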

  5. Promoting higher order thinking skills using inquiry-based learning

    NASA Astrophysics Data System (ADS)

    Madhuri, G. V.; S. S. N Kantamreddi, V.; Goteti, L. N. S. Prakash

    2012-05-01

    Active learning pedagogies play an important role in enhancing higher order cognitive skills among the student community. In this work, a laboratory course for first year engineering chemistry is designed and executed using an inquiry-based learning pedagogical approach. The goal of this module is to promote higher order thinking skills in chemistry. Laboratory exercises are designed based on Bloom's taxonomy and a just-in-time facilitation approach is used. A pre-laboratory discussion outlining the theory of the experiment and its relevance is carried out to enable the students to analyse real-life problems. The performance of the students is assessed based on their ability to perform the experiment, design new experiments and correlate practical utility of the course module with real life. The novelty of the present approach lies in the fact that the learning outcomes of the existing experiments are achieved through establishing a relationship with real-world problems.

  6. Second-Order Conditioning of Human Causal Learning

    ERIC Educational Resources Information Center

    Jara, Elvia; Vila, Javier; Maldonado, Antonio

    2006-01-01

    This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…

  8. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted more and more attention from researchers in different disciplines, and will significantly contribute to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotional behavior, the computer should be able to integrate information from multiple sensors. In this paper, we introduce our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  9. The production of audiovisual teaching tools in minimally invasive surgery.

    PubMed

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy; the relative value in surgical education and training, for both the creator and the viewer, is addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality educational videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. They are particularly attractive to surgical trainees when real-time operative footage is used, and they serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  10. Computerized physician order entry: lessons learned from the trenches.

    PubMed

    Ramirez, Anne; Carlson, Debra; Estes, Carey

    2010-01-01

    Implementation of computer physician order entry (CPOE) demands planning, teamwork, and a steep learning curve. The nurse-driven team at the hospital unit level is pivotal to a successful launch. This article describes the experience of one NICU in planning, building, training, and implementing CPOE. Pitfalls and lessons learned are described. Communication between the nurse team at the unit and the clinical informatics team needs to be ongoing. Self-paced training with realistic practice scenarios and one-on-one "view then practice" modules help ease the transition. Many issues are not apparent until after CPOE has been implemented, and it is vital to have a mechanism to fix problems quickly. We describe the experience of "going live" and the reality of day-to-day order entry.

  11. Audio-Visual Stories: Pre-Reading Activities for Bilingual Children.

    ERIC Educational Resources Information Center

    Stewart, Oran J.

    1982-01-01

    Discusses learning needs specific to bilingual students that may interfere with their understanding of what they read. Describes several instructional activities in which audiovisual stories are used as prereading exercises to increase student comprehension of English text. (FL)

  12. Spatial Orienting in Complex Audiovisual Environments

    PubMed Central

    Nardo, Davide; Santangelo, Valerio; Macaluso, Emiliano

    2013-01-01

    Previous studies on crossmodal spatial orienting typically used simple and stereotyped stimuli in the absence of any meaningful context. This study combined computational models, behavioural measures and functional magnetic resonance imaging to investigate audiovisual spatial interactions in naturalistic settings. We created short videos portraying everyday life situations that included a lateralised visual event and a co-occurring sound, either on the same or on the opposite side of space. Subjects viewed the videos with or without eye-movements allowed (overt or covert orienting). For each video, visual and auditory saliency maps were used to index the strength of stimulus-driven signals, and eye-movements were used as a measure of the efficacy of the audiovisual events for spatial orienting. Results showed that visual salience modulated activity in higher-order visual areas, whereas auditory salience modulated activity in the superior temporal cortex. Auditory salience modulated activity also in the posterior parietal cortex, but only when audiovisual stimuli occurred on the same side of space (multisensory spatial congruence). Orienting efficacy affected activity in the visual cortex, within the same regions modulated by visual salience. These patterns of activation were comparable in overt and covert orienting conditions. Our results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus-driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality. PMID:23616340

  13. School Building Design and Audio-Visual Resources.

    ERIC Educational Resources Information Center

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  15. Machine learning using a higher order correlation network

    SciTech Connect

    Lee, Y.C.; Doolen, G.; Chen, H.H.; Sun, G.Z.; Maxwell, T.; Lee, H.Y.

    1986-01-01

    A high-order correlation tensor formalism for neural networks is described. The model can simulate autoassociative, heteroassociative, and multiassociative memory. For the autoassociative model, simulation results show a drastic increase in memory capacity and speed over that of the standard Hopfield-like correlation matrix methods. The possibility of using multiassociative memory for a learning universal inference network is also discussed. 9 refs., 5 figs.
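
    An illustrative sketch of a third-order correlation (tensor) autoassociative memory in the spirit of this formalism; the network size, pattern count, and storage-rule normalisation are arbitrary choices, not taken from the report.

```python
import numpy as np

# Illustrative third-order correlation (tensor) autoassociative memory.
rng = np.random.default_rng(7)
n, n_patterns = 30, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Third-order "correlation tensor" storage rule.
T = np.einsum('pi,pj,pk->ijk', patterns, patterns, patterns) / n ** 2

def recall(state, steps=5):
    s = state.copy()
    for _ in range(steps):
        s = np.sign(np.einsum('ijk,j,k->i', T, s, s))
        s[s == 0] = 1
    return s

# Corrupt a stored pattern and let the higher-order dynamics clean it up.
probe = patterns[0].copy()
probe[rng.choice(n, size=6, replace=False)] *= -1
print("overlap with stored pattern after recall:",
      int(recall(probe) @ patterns[0]), "out of", n)
```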

  16. The procedural learning of action order is independent of temporal learning.

    PubMed

    Shin, Jacqueline C

    2008-07-01

    How does learning the timing of actions influence our ability to learn the order of actions? A sequence of responses cued by spatial stimuli was learned in a serial reaction time task where the response-to-stimulus intervals (RSIs) were random, constant, or followed a fixed sequence. In this final sequenced-RSI condition, the response and RSI sequences were consistently matched in phase and could be integrated into a common sequence representation. The main result was that the response sequence was learned to a similar degree in all RSI training conditions, indicating that neither the predictability of RSIs nor the integration of the phase-matched response and timing sequences benefited learning of the response sequence. Nevertheless, temporal learning and integration speeded up performance without strengthening the representation of response order.

  17. Predicting perceptual learning from higher-order cortical processing.

    PubMed

    Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan

    2016-01-01

    Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning regardless of whether the behavioral improvement is location specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change of the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making.

  18. Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony.

    PubMed

    Petrini, Karin; Dahl, Sofia; Rocchesso, Davide; Waadeland, Carl Haakon; Avanzini, Federico; Puce, Aina; Pollick, Frank E

    2009-09-01

    We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos x three accents x nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions x nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in experts' ability to detect asynchrony, especially for slower drumming tempos. In Experiment 2 an increase in sensitivity to asynchrony was found for incongruent stimuli; this increase, however, is attributable only to the novice group. Altogether the results indicated that through musical practice we learn to ignore variations in stimulus characteristics that otherwise would affect our multisensory integration processes.

  19. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    PubMed

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration.

  20. Perceptual learning of second order cues for layer decomposition.

    PubMed

    Dövencioğlu, Dicle N; Welchman, Andrew E; Schofield, Andrew J

    2013-01-25

    Luminance variations are ambiguous: they can signal changes in surface reflectance or changes in illumination. Layer decomposition--the process of distinguishing between reflectance and illumination changes--is supported by a range of secondary cues including colour and texture. For an illuminated corrugated, textured surface the shading pattern comprises modulations of luminance (first-order, LM) and local luminance amplitude (second-order, AM). The phase relationship between these two signals enables layer decomposition, predicts the perception of reflectance and illumination changes, and has been modelled based on early, fast, feed-forward visual processing (Schofield et al., 2010). However, while inexperienced viewers appreciate this scission at long presentation times, they cannot do so for short presentation durations (250 ms). This might suggest the action of slower, higher-level mechanisms. Here we consider how training attenuates this delay, and whether the resultant learning occurs at a perceptual level. We trained observers to discriminate the components of plaid stimuli that mixed in-phase and anti-phase LM/AM signals over a period of 5 days. After training, the strength of the AM signal needed to differentiate the plaid components fell dramatically, indicating learning. We tested for transfer of learning using stimuli with different spatial frequencies, in-plane orientations, and acutely angled plaids. We report that learning transfers only partially when the stimuli are changed, suggesting that benefits accrue from tuning specific mechanisms, rather than general interpretative processes. We suggest that the mechanisms which support layer decomposition using second-order cues are relatively early, and not inherently slow. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Learning word order at birth: A NIRS study.

    PubMed

    Benavides-Varela, Silvia; Gervain, Judit

    2017-06-01

    In language, the relative order of words in sentences carries important grammatical functions. However, the developmental origins and the neural correlates of the ability to track word order are to date poorly understood. The current study therefore investigates the origins of infants' ability to learn about the sequential order of words, using near-infrared spectroscopy (NIRS) with newborn infants. We have conducted two experiments: one in which a word order change was implemented in 4-word sequences recorded with a list intonation (as if each word was a separate item in a list; list prosody condition, Experiment 1) and one in which the same 4-word sequences were recorded with a well-formed utterance-level prosodic contour (utterance prosody condition, Experiment 2). We found that newborns could detect the violation of the word order in the list prosody condition, but not in the utterance prosody condition. These results suggest that while newborns are already sensitive to word order in linguistic sequences, prosody appears to be a stronger cue than word order for the identification of linguistic units at birth. Copyright © 2017. Published by Elsevier Ltd.

  2. Appreciation of learning environment and development of higher-order learning skills in a problem-based learning medical curriculum.

    PubMed

    Mala-Maung; Abdullah, Azman; Abas, Zoraini W

    2011-12-01

    This cross-sectional study determined the appreciation of the learning environment and development of higher-order learning skills among students attending the Medical Curriculum at the International Medical University, Malaysia which provides traditional and e-learning resources with an emphasis on problem based learning (PBL) and self-directed learning. Of the 708 participants, the majority preferred traditional to e-resources. Students who highly appreciated PBL demonstrated a higher appreciation of e-resources. Appreciation of PBL is positively and significantly correlated with higher-order learning skills, reflecting the inculcation of self-directed learning traits. Implementers must be sensitive to the progress of learners adapting to the higher education environment and innovations, and to address limitations as relevant.

  3. Audio-visual integration through the parallel visual pathways.

    PubMed

    Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Kincses, Zsigmond Tamás

    2015-10-22

    Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to preferentially drive the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with white matter integrity as measured by diffusion tensor imaging. The psychophysiological data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction.

  4. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558

  5. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
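
    To illustrate the modelling idea in the two records above, a minimal sketch (not the authors' implementation) fits a two-component Gaussian mixture to unlabelled joint auditory-visual cues and then queries it with a mismatched token; the cue dimensions and category means are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Schematic sketch of the modelling idea (not the authors' implementation): a
# two-component Gaussian mixture acquires two phonological categories from
# unlabelled joint auditory-visual cues, then assigns graded posteriors to a
# mismatched token.
rng = np.random.default_rng(1)
cat_a = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(300, 2))   # [auditory cue, visual cue]
cat_b = rng.normal(loc=[+1.0, +1.0], scale=0.5, size=(300, 2))
tokens = np.vstack([cat_a, cat_b])          # unlabelled input, as in statistical learning

gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(tokens)                             # unsupervised acquisition of two categories

# A "mismatched" token: the auditory cue favours one category, the visual cue the other.
mismatched = np.array([[-1.0, +1.0]])
print("posterior over learned categories:", np.round(gmm.predict_proba(mismatched), 3))
```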

  6. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    PubMed

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  8. Towards Postmodernist Television: INA's Audiovisual Magazine Programmes.

    ERIC Educational Resources Information Center

    Boyd-Bowman, Susan

    Over the last 10 years, French television's Institute of Audiovisual Communication (INA) has shifted from modernist to post-modernist practice in broadcasting in a series of innovative audiovisual magazine programs about communication, and in a series of longer "compilation" documentaries. The first of INA's audiovisual magazines,…

  9. Towards Postmodernist Television: INA's Audiovisual Magazine Programmes.

    ERIC Educational Resources Information Center

    Boyd-Bowman, Susan

    Over the last 10 years, French television's Institute of Audiovisual Communication (INA) has shifted from modernist to post-modernist practice in broadcasting in a series of innovative audiovisual magazine programs about communication, and in a series of longer "compilation" documentaries. The first of INA's audiovisual magazines,…

  10. Serial order learning of subliminal visual stimuli: evidence of multistage learning

    PubMed Central

    Kido, Kaede; Makioka, Shogo

    2015-01-01

    It is widely known that statistical learning of visual symbol sequences occurs implicitly (Kim et al., 2009). In this study, we examined whether people can learn the serial order of visual symbols when they cannot detect them. During the familiarization phase, triplets or quadruplets of novel symbols were presented to one eye under continuous flash suppression (CFS). Perception of the symbols was completely suppressed by the flash patterns presented to the other eye [binocular rivalry (BR)]. During the test phase, the detection latency was faster for symbols located later in the triplets or quadruplets. These results indicate that serial order learning occurs even when the participants cannot detect the stimuli. We also found that detection became slower for the last item of the triplets or quadruplets. This phenomenon occurred only when the participants were familiarized with the symbols under CFS, suggesting that the subsequent symbols interfered with the processing of the target symbol when conscious perception was suppressed. We further examined the nature of the interference and found that it occurred only when the subsequent symbol was not fixed. This result suggests that serial order learning under BR is restricted to fixed order sequences. Statistical learning of the symbols’ transition probability might not occur when the participants cannot detect the symbols. We confirmed this hypothesis by conducting another experiment wherein the transition probability of the symbol sequence was manipulated. PMID:25762947

  11. Learn locally, think globally. Exemplar variability supports higher-order generalization and word learning.

    PubMed

    Perry, Lynn K; Samuelson, Larissa K; Malloy, Lisa M; Schiffer, Ryan N

    2010-12-01

    Research suggests that variability of exemplars supports successful object categorization; however, the scope of variability's support at the level of higher-order generalization remains unexplored. Using a longitudinal study, we examined the role of exemplar variability in first- and second-order generalization in the context of nominal-category learning at an early age. Sixteen 18-month-old children were taught 12 categories. Half of the children were taught with sets of highly similar exemplars; the other half were taught with sets of dissimilar, variable exemplars. Participants' learning and generalization of trained labels and their development of more general word-learning biases were tested. All children were found to have learned labels for trained exemplars, but children trained with variable exemplars generalized to novel exemplars of these categories, developed a discriminating word-learning bias generalizing labels of novel solid objects by shape and labels of nonsolid objects by material, and accelerated in vocabulary acquisition. These findings demonstrate that object variability leads to better abstraction of individual and global category organization, which increases learning outside the laboratory.

  12. Electrophysiological evidence for speech-specific audiovisual integration.

    PubMed

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  13. Perceived synchrony for realistic and dynamic audiovisual events

    PubMed Central

    Eg, Ragnhild; Behne, Dawn M.

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli. PMID:26082738

  14. Decreased BOLD responses in audiovisual processing.

    PubMed

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-12-29

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively with the perception of the McGurk effect. No brain areas with positively correlated BOLD responses were found. This was unexpected as most studies of audiovisual integration use additivity and super additivity - that is, increased BOLD responses after audiovisual stimulation compared with auditory-only and visual-only stimulation - as criteria for audiovisual integration. We argue that brain areas that show decreased BOLD responses that correlate with an integrated audiovisual percept should not be neglected from consideration as possibly involved in audiovisual integration.

  15. What it Takes for Preschoolers To Learn Sex Abuse Prevention Concepts.

    ERIC Educational Resources Information Center

    Hulsey, Timothy L.; And Others

    1997-01-01

    Examined why preschoolers struggle to learn sexual abuse prevention concepts that children a few years older learn easily. Determined that preschoolers need programs produced specifically for them, programs that are developmentally graduated, present concepts in a logical order, ensure comprehension, and utilize audio-visual production features…

  16. Preventive Maintenance Handbook. Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Educational Products Information Exchange Inst., Stony Brook, NY.

    The preventive maintenance system for audiovisual equipment presented in this handbook was designed by specialists so that it can be used by nonspecialists at school sites. The report offers specific advice on safety factors and also lists major problems that should not be handled by nonspecialists. Other aspects of a preventive maintenance system…

  17. Audio-Visual Resource Guide.

    ERIC Educational Resources Information Center

    Abrams, Nick, Ed.

    The National Council of Churches has assembled this extensive audiovisual guide for the benefit of schools, churches and community organizations. The guide is categorized into 14 distinct conceptual areas ranging from "God and the Church" to science, the arts, race relations, and national/international critical issues. Though assembled under the…

  18. Criminal Justice Audiovisual Materials Directory.

    ERIC Educational Resources Information Center

    Law Enforcement Assistance Administration (Dept. of Justice), Washington, DC.

    This is the third edition of a source directory of audiovisual materials for the education, training, and orientation of those in the criminal justice field. It is divided into five parts covering the courts, police techniques and training, prevention, prisons and rehabilitation/correction, and public education. Each entry includes a brief…

  19. Criminal Justice Audiovisual Materials Directory.

    ERIC Educational Resources Information Center

    Law Enforcement Assistance Administration (Dept. of Justice), Washington, DC.

    This source directory of audiovisual materials for the education, training, and orientation of those in the criminal justice field is divided into five parts covering the courts, police techniques and training, prevention, prisons and rehabilitation/correction, and public education. Each entry includes a brief description of the product, the time…

  20. Audio-Visual Teaching Machines.

    ERIC Educational Resources Information Center

    Dorsett, Loyd G.

    An audiovisual teaching machine (AVTM) presents programed audio and visual material simultaneously to a student and accepts his response. If his response is correct, the machine proceeds with the lesson; if it is incorrect, the machine so indicates and permits another choice (linear) or automatically presents supplementary material (branching).…

  1. Audio-Visual Materials Catalog.

    ERIC Educational Resources Information Center

    Anderson (M.D.) Hospital and Tumor Inst., Houston, TX.

    This catalog lists 27 audiovisual programs produced by the Department of Medical Communications of the University of Texas M. D. Anderson Hospital and Tumor Institute for public distribution. Video tapes, 16 mm. motion pictures and slide/audio series are presented dealing mostly with cancer and related subjects. The programs are intended for…

  2. Audiovisual Resources for Instructional Development.

    ERIC Educational Resources Information Center

    Wilds, Thomas, Comp.; And Others

    Provided is a compilation of recently annotated audiovisual materials which present techniques, models, or other specific information that can aid in providing comprehensive services to the handicapped. Entries which include a brief description, name of distributor, technical information, and cost are presented alphabetically by title in eight…

  3. Rapid recalibration to audiovisual asynchrony.

    PubMed

    Van der Burg, Erik; Alais, David; Cass, John

    2013-09-11

    To combine information from different sensory modalities, the brain must deal with considerable temporal uncertainty. In natural environments, an external event may produce simultaneous auditory and visual signals yet they will invariably activate the brain asynchronously due to different propagation speeds for light and sound, and different neural response latencies once the signals reach the receptors. One strategy the brain uses to deal with audiovisual timing variation is to adapt to a prevailing asynchrony to help realign the signals. Here, using psychophysical methods in human subjects, we investigate audiovisual recalibration and show that it takes place extremely rapidly without explicit periods of adaptation. Our results demonstrate that exposure to a single, brief asynchrony is sufficient to produce strong recalibration effects. Recalibration occurs regardless of whether the preceding trial was perceived as synchronous, and regardless of whether a response was required. We propose that this rapid recalibration is a fast-acting sensory effect, rather than a higher-level cognitive process. An account in terms of response bias is unlikely due to a strong asymmetry whereby stimuli with vision leading produce bigger recalibrations than audition leading. A fast-acting recalibration mechanism provides a means for overcoming inevitable audiovisual timing variation and serves to rapidly realign signals at onset to maximize the perceptual benefits of audiovisual integration.

  4. Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This list of audiovisual materials for environmental education was prepared by the State of Minnesota, Department of Education, Division of Instruction, to accompany the pilot curriculum in environmental education. The majority of the materials listed are available from the University of Minnesota, or from state or federal agencies. The…

  5. Neural initialization of audiovisual integration in prereaders at varying risk for developmental dyslexia.

    PubMed

    Karipidis, Iliana I; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia

    2017-02-01

    Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. To date, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas and with phonological awareness in left temporal areas. Correspondingly, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short (<30 min) letter-speech sound training initializes audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017. © 2016

  6. A Basic Reference Shelf on Audio-Visual Instruction. A Series One Paper from ERIC at Stanford.

    ERIC Educational Resources Information Center

    Dale, Edgar; Trzebiatowski, Gregory

    Topics in this annotated bibliography on audiovisual instruction include the history of instructional technology, teacher-training, equipment operation, administration of media programs, production of instructional materials, language laboratories, instructional television, programed instruction, communication theory, learning theory, and…

  7. Encouraging Higher-Order Thinking in General Chemistry by Scaffolding Student Learning Using Marzano's Taxonomy

    ERIC Educational Resources Information Center

    Toledo, Santiago; Dubas, Justin M.

    2016-01-01

    An emphasis on higher-order thinking within the curriculum has been a subject of interest in the chemical and STEM literature due to its ability to promote meaningful, transferable learning in students. The systematic use of learning taxonomies could be a practical way to scaffold student learning in order to achieve this goal. This work proposes…

  8. Encouraging Higher-Order Thinking in General Chemistry by Scaffolding Student Learning Using Marzano's Taxonomy

    ERIC Educational Resources Information Center

    Toledo, Santiago; Dubas, Justin M.

    2016-01-01

    An emphasis on higher-order thinking within the curriculum has been a subject of interest in the chemical and STEM literature due to its ability to promote meaningful, transferable learning in students. The systematic use of learning taxonomies could be a practical way to scaffold student learning in order to achieve this goal. This work proposes…

  9. Assessment of Cognitive Load in Multimedia Learning with Dual-Task Methodology: Auditory Load and Modality Effects

    ERIC Educational Resources Information Center

    Brunken, Roland; Plass, Jan L.; Leutner, Detlev

    2004-01-01

    Using cognitive load theory and cognitive theory of multimedia learning as a framework, we conducted two within-subject experiments with 10 participants each in order to investigate (1) if the audiovisual presentation of verbal and pictorial learning materials would lead to a higher demand on phonological cognitive capacities than the visual-only…

  10. Learning in Order To Teach in Chicxulub Puerto, Yucatan, Mexico.

    ERIC Educational Resources Information Center

    Wilber, Cynthia J.

    2000-01-01

    Describes a community-based computer education program for the young people (and adults) of Chicxulub Puerto, a small fishing village in Yucatan, Mexico. Notes the children learn Maya, Spanish, and English in the context of learning computer and telecommunication skills. Concludes that access to the Internet has made a profound difference in a…

  11. Learning in Order To Teach in Chicxulub Puerto, Yucatan, Mexico.

    ERIC Educational Resources Information Center

    Wilber, Cynthia J.

    2000-01-01

    Describes a community-based computer education program for the young people (and adults) of Chicxulub Puerto, a small fishing village in Yucatan, Mexico. Notes the children learn Maya, Spanish, and English in the context of learning computer and telecommunication skills. Concludes that access to the Internet has made a profound difference in a…

  12. Flipping & Clicking Your Way to Higher-Order Learning

    ERIC Educational Resources Information Center

    Garver, Michael S.; Roberts, Brian A.

    2013-01-01

    This innovative system of teaching and learning includes the implementation of two effective learning technologies: podcasting ("flipping") and classroom response systems ("clicking"). Students watch lectures in podcast format before coming to class, which allows the "entire" class period to be devoted to active…

  13. Flipping & Clicking Your Way to Higher-Order Learning

    ERIC Educational Resources Information Center

    Garver, Michael S.; Roberts, Brian A.

    2013-01-01

    This innovative system of teaching and learning includes the implementation of two effective learning technologies: podcasting ("flipping") and classroom response systems ("clicking"). Students watch lectures in podcast format before coming to class, which allows the "entire" class period to be devoted to active…

  14. Bilingualism affects audiovisual phoneme identification

    PubMed Central

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience—i.e., the exposure to a double phonological code during childhood—affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically “deaf” and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech. PMID:25374551

  15. Bilingualism affects audiovisual phoneme identification.

    PubMed

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience-i.e., the exposure to a double phonological code during childhood-affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically "deaf" and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  16. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm

    ERIC Educational Resources Information Center

    Uys, P. G.

    2009-01-01

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  17. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm

    ERIC Educational Resources Information Center

    Uys, P. G.

    2009-01-01

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  18. Optimizing the Learning Order of Chinese Characters Using a Novel Topological Sort Algorithm

    PubMed Central

    Wang, Jinzhao

    2016-01-01

    We present a novel algorithm for optimizing the order in which Chinese characters are learned, one that incorporates the benefits of learning them in order of usage frequency and in order of their hierarchical structural relationships. We show that our work outperforms previously published orders and algorithms. Our algorithm is applicable to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order. PMID:27706234
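
    The record above does not reproduce the authors' algorithm; as a rough illustration of the general idea of a frequency-weighted topological sort, the sketch below runs Kahn's algorithm with a max-heap so that high-frequency characters are scheduled as early as their component-before-compound constraints allow. The example characters, edges, and frequency values are hypothetical.

```python
# Minimal sketch of a frequency-weighted topological sort (Kahn's algorithm with
# a max-heap). It illustrates the general idea -- visit high-frequency nodes as
# early as the component-before-compound constraints allow -- and is not a
# reimplementation of the paper's specific algorithm.
import heapq
from collections import defaultdict

def weighted_topo_order(nodes, edges, frequency):
    """nodes: iterable of ids; edges: (component, compound) pairs meaning the
    component must be learned before the compound; frequency: id -> importance."""
    children = defaultdict(list)
    indegree = {n: 0 for n in nodes}
    for comp, whole in edges:
        children[comp].append(whole)
        indegree[whole] += 1

    # Max-heap keyed on frequency (negate because heapq is a min-heap).
    ready = [(-frequency[n], n) for n in nodes if indegree[n] == 0]
    heapq.heapify(ready)

    order = []
    while ready:
        _, node = heapq.heappop(ready)
        order.append(node)
        for nxt in children[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                heapq.heappush(ready, (-frequency[nxt], nxt))
    return order

# Hypothetical example: 好 is composed of 女 and 子, so both components precede it.
print(weighted_topo_order(
    ["女", "子", "好"], [("女", "好"), ("子", "好")],
    {"女": 0.6, "子": 0.7, "好": 0.9}))
```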

  19. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    PubMed

    Gebru, Israel; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2017-01-05

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes, in a principled way, speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset that contains audio-visual training data, as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.
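
    As a heavily simplified illustration of the speech-to-person association step described above (not the paper's spatiotemporal Bayesian model), the sketch below scores each visually tracked person against an audio localization estimate projected onto the image plane, using an isotropic Gaussian likelihood with a uniform prior; the coordinates and the sigma parameter are made up.

```python
# Toy sketch of speech-to-person association for one time slice (heavily
# simplified, not the paper's model): score each tracked person under a Gaussian
# likelihood around the image-plane projection of the sound-source estimate.
import numpy as np

def speaker_posterior(track_positions, audio_xy, sigma=40.0):
    """track_positions: (N, 2) image coordinates of tracked persons;
    audio_xy: (2,) image-plane projection of the sound-source estimate."""
    track_positions = np.asarray(track_positions, dtype=float)
    d2 = np.sum((track_positions - np.asarray(audio_xy)) ** 2, axis=1)
    loglik = -d2 / (2.0 * sigma ** 2)          # isotropic Gaussian log-likelihood
    loglik -= loglik.max()                     # numerical stability
    post = np.exp(loglik)
    return post / post.sum()                   # uniform prior over persons

print(speaker_posterior([[120, 200], [400, 210], [640, 190]], [415, 205]))
```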

  20. Improved Computer-Aided Instruction by the Use of Interfaced Random-Access Audio-Visual Equipment. Report on Research Project No. P/24/1.

    ERIC Educational Resources Information Center

    Bryce, C. F. A.; Stewart, A. M.

    A brief review of the characteristics of computer assisted instruction and the attributes of audiovisual media introduces this report on a project designed to improve the effectiveness of computer assisted learning through the incorporation of audiovisual materials. A discussion of the implications of research findings on the design and layout of…

  1. Improved Computer-Aided Instruction by the Use of Interfaced Random-Access Audio-Visual Equipment. Report on Research Project No. P/24/1.

    ERIC Educational Resources Information Center

    Bryce, C. F. A.; Stewart, A. M.

    A brief review of the characteristics of computer assisted instruction and the attributes of audiovisual media introduces this report on a project designed to improve the effectiveness of computer assisted learning through the incorporation of audiovisual materials. A discussion of the implications of research findings on the design and layout of…

  2. Language learning: how much evidence does a child need in order to learn to speak grammatically?

    PubMed

    Page, Karen M

    2004-07-01

    In order to learn grammar from a finite amount of evidence, children must begin with in-built expectations of what is grammatical. They clearly are not born, however, with fully developed grammars. Thus early language development involves refinement of the grammar hypothesis until a target grammar is learnt. Here we address the question of how much evidence is required for this refinement process, by considering two standard learning algorithms and a third algorithm which is presumably as efficient as a child for some value of its memory capacity. We reformulate this algorithm in the context of Chomsky's 'principles and parameters' and show that it is possible to bound the amount of evidence required to almost certainly speak almost grammatically.
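
    The following sketch is only loosely inspired by this abstract and is not one of the algorithms it analyses: a memoryless, error-driven parameter-setting learner in which each example sentence is assumed to reveal one binary parameter of the target grammar, with the number of sentences consumed before convergence taken as the "amount of evidence".

```python
# Deliberately simplified sketch (not the algorithms analysed in the paper): an
# error-driven, memoryless parameter-setting learner in the spirit of triggering
# learners. Each example sentence is assumed to reveal the target value of one
# randomly chosen binary parameter; the learner repairs a parameter only on a
# mismatch, and we count the evidence needed to reach the target grammar.
import random

def evidence_to_learn(target, max_evidence=100_000, seed=0):
    rng = random.Random(seed)
    n = len(target)
    hypothesis = [rng.randint(0, 1) for _ in range(n)]
    for evidence in range(1, max_evidence + 1):
        i = rng.randrange(n)               # which parameter this sentence bears on
        if hypothesis[i] != target[i]:     # the sentence cannot be parsed
            hypothesis[i] = target[i]      # single-parameter repair
        if hypothesis == list(target):
            return evidence                # sentences consumed before convergence
    return None

print(evidence_to_learn([1, 0, 1, 1, 0, 1, 0, 1]))
```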

  3. Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-02-17

    Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects

  4. Is order the defining feature of magnitude representation? An ERP study on learning numerical magnitude and spatial order of artificial symbols.

    PubMed

    Zhao, Hui; Chen, Chuansheng; Zhang, Hongchuan; Zhou, Xinlin; Mei, Leilei; Chen, Chunhui; Chen, Lan; Cao, Zhongyu; Dong, Qi

    2012-01-01

    Using an artificial-number learning paradigm and the ERP technique, the present study investigated neural mechanisms involved in the learning of magnitude and spatial order. 54 college students were divided into 2 groups matched in age, gender, and school major. One group was asked to learn the associations between magnitude (dot patterns) and the meaningless Gibson symbols, and the other group learned the associations between spatial order (horizontal positions on the screen) and the same set of symbols. Results revealed differentiated neural mechanisms underlying the learning processes of symbolic magnitude and spatial order. Compared to magnitude learning, spatial-order learning showed a later and reversed distance effect. Furthermore, an analysis of the order-priming effect showed that order was not inherent to the learning of magnitude. Results of this study showed a dissociation between magnitude and order, which supports the numerosity code hypothesis of mental representations of magnitude.

  5. Is Order the Defining Feature of Magnitude Representation? An ERP Study on Learning Numerical Magnitude and Spatial Order of Artificial Symbols

    PubMed Central

    Zhao, Hui; Chen, Chuansheng; Zhang, Hongchuan; Zhou, Xinlin; Mei, Leilei; Chen, Chunhui; Chen, Lan; Cao, Zhongyu; Dong, Qi

    2012-01-01

    Using an artificial-number learning paradigm and the ERP technique, the present study investigated neural mechanisms involved in the learning of magnitude and spatial order. 54 college students were divided into 2 groups matched in age, gender, and school major. One group was asked to learn the associations between magnitude (dot patterns) and the meaningless Gibson symbols, and the other group learned the associations between spatial order (horizontal positions on the screen) and the same set of symbols. Results revealed differentiated neural mechanisms underlying the learning processes of symbolic magnitude and spatial order. Compared to magnitude learning, spatial-order learning showed a later and reversed distance effect. Furthermore, an analysis of the order-priming effect showed that order was not inherent to the learning of magnitude. Results of this study showed a dissociation between magnitude and order, which supports the numerosity code hypothesis of mental representations of magnitude. PMID:23185363

  6. A Guide for Audiovisual and Newer Media.

    ERIC Educational Resources Information Center

    Carr, William D.

    One of the principal values of audiovisual materials is that they permit the teacher to depart from verbal and printed symbolism, and at the same time to provide a wider real or vicarious experience for pupils. This booklet is designed to aid the teacher in using audiovisual material effectively. It covers visual displays, non-projected materials,…

  7. Solar Energy Audio-Visual Materials.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC. Office of Policy Development and Research.

    This directory presents an annotated bibliography of non-print information resources dealing with solar energy. The document is divided by type of audio-visual medium, including: (1) Films, (2) Slides and Filmstrips, and (3) Videotapes. A fourth section provides addresses and telephone numbers of audiovisual aids sources, and lists the page…

  8. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a standardized numerical scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results showed a significant difference between the conditions, but not for all of the investigated samples. When conditions (a) and (b) were compared, adding visual information significantly improved comfort ratings in only three of the seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping.

  9. AUDIO-VISUAL INSTRUCTION, AN ADMINISTRATIVE HANDBOOK.

    ERIC Educational Resources Information Center

    Missouri State Dept. of Education, Jefferson City.

    This handbook was designed for use by school administrators in developing a total audiovisual (AV) program. Attention is given to the importance of audiovisual media to effective instruction, administrative personnel requirements for an AV program, budgeting for AV instruction, proper utilization of AV materials, selection of AV equipment and…

  10. Catalog of Audiovisual Materials Related to Rehabilitation.

    ERIC Educational Resources Information Center

    Mann, Joe, Ed.; Henderson, Jim, Ed.

    An annotated listing of a variety of audiovisual formats on content related to the social-rehabilitation process is provided. The materials in the listing were selected from a collection of over 200 audiovisual catalogs. The major portion of the materials has not been screened. The materials are classified alphabetically by the following subject…

  11. Conceptual Similarity Promotes Generalization of Higher Order Fear Learning

    ERIC Educational Resources Information Center

    Dunsmoor, Joseph E.; White, Allison J.; LaBar, Kevin S.

    2011-01-01

    We tested the hypothesis that conceptual similarity promotes generalization of conditioned fear. Using a sensory preconditioning procedure, three groups of subjects learned an association between two cues that were conceptually similar, unrelated, or mismatched. Next, one of the cues was paired with a shock. The other cue was then reintroduced to…

  12. Conceptual Similarity Promotes Generalization of Higher Order Fear Learning

    ERIC Educational Resources Information Center

    Dunsmoor, Joseph E.; White, Allison J.; LaBar, Kevin S.

    2011-01-01

    We tested the hypothesis that conceptual similarity promotes generalization of conditioned fear. Using a sensory preconditioning procedure, three groups of subjects learned an association between two cues that were conceptually similar, unrelated, or mismatched. Next, one of the cues was paired with a shock. The other cue was then reintroduced to…

  13. Audiovisual Temporal Processing and Synchrony Perception in the Rat

    PubMed Central

    Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.

    2017-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately

  14. Audio-visual gender recognition

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Xu, Xun; Huang, Thomas S.

    2007-11-01

    Combining different modalities for pattern recognition tasks is a promising field. Humans routinely fuse information from different modalities to recognize objects and perform inference. Audio-visual gender recognition is one of the most common tasks in human social communication: humans can identify gender by facial appearance, by speech, and by body gait. In this sense, human gender recognition is a multi-modal data acquisition and processing procedure. However, computational multi-modal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition and to explore the improvement gained by combining the two modalities.
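
    As a minimal illustration of score-level audio-visual fusion (an assumption; the fusion scheme used in the paper is not specified in this record), the sketch below combines per-modality gender posteriors from a hypothetical face classifier and a hypothetical speech classifier with a weighted log-linear rule.

```python
# Minimal late-fusion sketch (an assumption, not the paper's method): combine
# per-modality gender posteriors from a face classifier and a speech classifier
# with a weighted log-linear (product) rule.
import numpy as np

def fuse(p_face, p_speech, w_face=0.6, w_speech=0.4, eps=1e-9):
    """p_face, p_speech: arrays of class posteriors, e.g. [P(female), P(male)]."""
    log_fused = (w_face * np.log(np.asarray(p_face) + eps)
                 + w_speech * np.log(np.asarray(p_speech) + eps))
    fused = np.exp(log_fused - log_fused.max())
    return fused / fused.sum()

# Face evidence is ambiguous, speech is confident: fusion follows the stronger cue.
print(fuse([0.55, 0.45], [0.10, 0.90]))
```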

  15. High-order feature-based mixture models of classification learning predict individual learning curves and enable personalized teaching

    PubMed Central

    Cohen, Yarden; Schneidman, Elad

    2013-01-01

    Pattern classification learning tasks are commonly used to explore learning strategies in human subjects. The universal and individual traits of learning such tasks reflect our cognitive abilities and have been of interest both psychophysically and clinically. From a computational perspective, these tasks are hard, because the number of patterns and rules one could consider even in simple cases is exponentially large. Thus, when we learn to classify we must use simplifying assumptions and generalize. Studies of human behavior in probabilistic learning tasks have focused on rules in which pattern cues are independent, and also described individual behavior in terms of simple, single-cue, feature-based models. Here, we conducted psychophysical experiments in which people learned to classify binary sequences according to deterministic rules of different complexity, including high-order, multicue-dependent rules. We show that human performance on such tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of features captures individual learning behavior surprisingly well. These models reflect the important role of subjects’ priors, and their reliance on high-order features even when learning a low-order rule. Further, we show that these models predict future individual answers to a high degree of accuracy. We then use these models to build personally optimized teaching sessions and boost learning. PMID:23269833

  16. High-order feature-based mixture models of classification learning predict individual learning curves and enable personalized teaching.

    PubMed

    Cohen, Yarden; Schneidman, Elad

    2013-01-08

    Pattern classification learning tasks are commonly used to explore learning strategies in human subjects. The universal and individual traits of learning such tasks reflect our cognitive abilities and have been of interest both psychophysically and clinically. From a computational perspective, these tasks are hard, because the number of patterns and rules one could consider even in simple cases is exponentially large. Thus, when we learn to classify we must use simplifying assumptions and generalize. Studies of human behavior in probabilistic learning tasks have focused on rules in which pattern cues are independent, and also described individual behavior in terms of simple, single-cue, feature-based models. Here, we conducted psychophysical experiments in which people learned to classify binary sequences according to deterministic rules of different complexity, including high-order, multicue-dependent rules. We show that human performance on such tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of features captures individual learning behavior surprisingly well. These models reflect the important role of subjects' priors, and their reliance on high-order features even when learning a low-order rule. Further, we show that these models predict future individual answers to a high degree of accuracy. We then use these models to build personally optimized teaching sessions and boost learning.
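
    The sketch below illustrates the general modelling idea of a reinforcement-learning-like "mixture of features" learner on a binary-sequence classification task; the particular features, update rule, and example rules are assumptions and do not reproduce the authors' fitted models.

```python
# Hedged sketch of a reinforcement-learning-like "mixture of features" learner
# for binary-sequence classification (an illustration of the modelling idea, not
# the authors' fitted model). Each feature votes for a label; a feature's weight
# is bumped up when its prediction matches the observed label, down otherwise.
import random

def make_features(seq_len):
    # Simple single-cue features: "predict the value of bit i".
    return [lambda s, i=i: s[i] for i in range(seq_len)]

def run(rule, seq_len=4, trials=500, lr=0.1, seed=1):
    rng = random.Random(seed)
    feats = make_features(seq_len)
    weights = [1.0] * len(feats)
    correct = 0
    for _ in range(trials):
        seq = [rng.randint(0, 1) for _ in range(seq_len)]
        votes = sum(w * (2 * f(seq) - 1) for w, f in zip(weights, feats))
        choice = 1 if votes > 0 else 0
        label = rule(seq)
        correct += (choice == label)
        for k, f in enumerate(feats):                 # per-feature credit assignment
            agrees = (f(seq) == label)
            weights[k] = max(0.0, weights[k] + (lr if agrees else -lr))
    return correct / trials

# Hypothetical rules: a high-order (parity) rule vs. a low-order (single-cue) rule.
print(run(lambda s: s[0] ^ s[1]))   # single-cue features struggle with parity
print(run(lambda s: s[0]))          # a low-order rule is learned easily
```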

  17. A quantitative dynamical systems approach to differential learning: self-organization principle and order parameter equations.

    PubMed

    Frank, T D; Michelbrink, M; Beckmann, H; Schöllhorn, W I

    2008-01-01

    Differential learning is a learning concept that assists subjects to find individual optimal performance patterns for given complex motor skills. To this end, training is provided in terms of noisy training sessions that feature a large variety of between-exercises differences. In several previous experimental studies it has been shown that performance improvement due to differential learning is higher than due to traditional learning and performance improvement due to differential learning occurs even during post-training periods. In this study we develop a quantitative dynamical systems approach to differential learning. Accordingly, differential learning is regarded as a self-organized process that results in the emergence of subject- and context-dependent attractors. These attractors emerge due to noise-induced bifurcations involving order parameters in terms of learning rates. In contrast, traditional learning is regarded as an externally driven process that results in the emergence of environmentally specified attractors. Performance improvement during post-training periods is explained as an hysteresis effect. An order parameter equation for differential learning involving a fourth-order polynomial potential is discussed explicitly. New predictions concerning the relationship between traditional and differential learning are derived.
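
    As a hedged numerical illustration of an order parameter moving in a generic fourth-order polynomial potential under training noise (the particular potential, coefficients, and noise level are assumptions, not the equation derived in the paper), the sketch below integrates the corresponding Langevin dynamics with the Euler-Maruyama method.

```python
# Sketch only: gradient dynamics of an order parameter xi in a generic quartic
# (fourth-order polynomial) potential V(xi) = -a*xi**2/2 + b*xi**4/4, driven by
# training noise (Euler-Maruyama). The potential, parameters, and noise level are
# illustrative assumptions, not the coefficients discussed in the paper.
import numpy as np

def simulate(a=1.0, b=1.0, noise=0.6, dt=0.01, steps=20_000, seed=2):
    rng = np.random.default_rng(seed)
    xi = 0.0                                     # start between the two attractors
    path = np.empty(steps)
    for t in range(steps):
        drift = a * xi - b * xi ** 3             # -dV/dxi for the quartic potential
        xi += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        path[t] = xi
    return path

path = simulate()
# With sufficient training noise the order parameter settles into (and can switch
# between) the attractors near +/- sqrt(a/b), a toy analogue of noise-induced
# selection of an individual performance pattern.
print(path[-5:])
```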

  18. Acquired prior knowledge modulates audiovisual integration.

    PubMed

    Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A

    2010-05-01

    Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
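
    A minimal sketch of trial-by-trial prior updating follows, assuming a simple Beta-Bernoulli model rather than the authors' specific prior-probability estimation model: repeatedly observing aligned trials pushes the expected probability of alignment upward, consistent with the block-wise reaction-time differences described above.

```python
# Hedged illustration (not the authors' model): trial-by-trial updating of the
# expected probability that an audiovisual pair is spatially aligned, using a
# Beta-Bernoulli update with a uniform Beta(1, 1) starting prior.
def update_alignment_prior(observations, a=1.0, b=1.0):
    """observations: iterable of 1 (aligned trial) / 0 (spatially disparate trial)."""
    history = []
    for obs in observations:
        a, b = a + obs, b + (1 - obs)
        history.append(a / (a + b))        # current expected P(aligned)
    return history

print(update_alignment_prior([1, 1, 1, 1, 1]))        # aligned-only block
print(update_alignment_prior([1, 0, 1, 0, 0]))        # mixed block
```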

  19. Attention rivalry under irrelevant audiovisual stimulation.

    PubMed

    Feng, Ting; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2008-06-13

    Audiovisual integration is known to enhance perception; nevertheless, another fundamental audiovisual interaction, attention rivalry, has not been well investigated. This paper studied attention rivalry under irrelevant audiovisual stimulation using event-related potential (ERP) and behavioral analyses, and tested the existence of a vision-dominated rivalry model. Participants were required to respond to the target in a bimodal or unimodal audiovisual stimulation paradigm. The enhanced amplitude of the central P300 under bimodal stimuli with a visual target indicated that vision demanded more cognitive resources, and the significant amplitude of the frontal P200 under bimodal stimuli with a non-target auditory stimulus implied that the brain largely suppressed processing of the non-target auditory information. The ERP results, together with analyses of the behavioral data and the subtraction waves, indicated a vision-dominated attention rivalry model in audiovisual interaction. Furthermore, the latencies of the P200 and P300 components implied that audiovisual attention rivalry occurred within the first 300 ms after stimulus onset: significant differences were found in P200 latencies among the three target bimodal stimuli, while no difference existed in P300 latencies. Attention shifting and redirecting might be the cause of such early audiovisual rivalry.

  20. Teacher Change in a Changing Moral Order: Learning from Durkheim

    ERIC Educational Resources Information Center

    Slonimsky, Lynne

    2016-01-01

    This paper explores a curriculum paradox that may arise for teachers in post-authoritarian regimes if a radically new curriculum, designed to prepare learners for democratic citizenship, requires them to be autonomous professionals. If teachers were originally schooled and trained under the old regime to follow the orders inscribed in syllabi and…

  1. Teacher Change in a Changing Moral Order: Learning from Durkheim

    ERIC Educational Resources Information Center

    Slonimsky, Lynne

    2016-01-01

    This paper explores a curriculum paradox that may arise for teachers in post-authoritarian regimes if a radically new curriculum, designed to prepare learners for democratic citizenship, requires them to be autonomous professionals. If teachers were originally schooled and trained under the old regime to follow the orders inscribed in syllabi and…

  2. Toddlers infer higher-order relational principles in causal learning.

    PubMed

    Walker, Caren M; Gopnik, Alison

    2014-01-01

    Children make inductive inferences about the causal properties of individual objects from a very young age. When can they infer higher-order relational properties? In three experiments, we examined 18- to 30-month-olds' relational inferences in a causal task. Results suggest that at this age, children are able to infer a higher-order relational causal principle from just a few observations and use this inference to guide their own subsequent actions and bring about a novel causal outcome. Moreover, the children passed a revised version of the relational match-to-sample task that has proven very difficult for nonhuman primates. The findings are considered in light of their implications for understanding the nature of relational and causal reasoning, and their evolutionary origins.

  3. No rapid audiovisual recalibration in adults on the autism spectrum

    PubMed Central

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as those of the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367

  4. No rapid audiovisual recalibration in adults on the autism spectrum.

    PubMed

    Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David

    2016-02-22

    Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as those of the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication.

  5. Role of audiovisual synchrony in driving head orienting responses.

    PubMed

    Ho, Cristy; Gray, Rob; Spence, Charles

    2013-06-01

    Many studies now suggest that optimal multisensory integration sometimes occurs under conditions where auditory and visual stimuli are presented asynchronously (i.e. at asynchronies of 100 ms or more). Such observations lead to the suggestion that participants' speeded orienting responses might be enhanced following the presentation of asynchronous (as compared to synchronous) peripheral audiovisual spatial cues. Here, we report a series of three experiments designed to investigate this issue. Upon establishing the effectiveness of bimodal cuing over the best of its unimodal components (Experiment 1), participants had to make speeded head-turning or steering (wheel-turning) responses toward the cued direction (Experiment 2), or an incompatible response away from the cue (Experiment 3), in response to random peripheral audiovisual stimuli presented at stimulus onset asynchronies ranging from -100 to 100 ms. Race model inequality analysis of the results (Experiment 1) revealed different mechanisms underlying the observed multisensory facilitation of participants' head-turning versus steering responses. In Experiments 2 and 3, the synchronous presentation of the component auditory and visual cues gave rise to the largest facilitation of participants' response latencies. Intriguingly, when the participants had to subjectively judge the simultaneity of the audiovisual stimuli, the point of subjective simultaneity occurred when the auditory stimulus lagged behind the visual stimulus by 22 ms. Taken together, these results appear to suggest that the maximally beneficial behavioural (head and manual) orienting responses resulting from peripherally presented audiovisual stimuli occur when the component signals are presented in synchrony. These findings suggest that while the brain uses precise temporal synchrony in order to control its orienting responses, the system that the human brain uses to consciously judge synchrony appears to be less fine tuned.

  6. Audiovisual Materials and Techniques for Teaching Foreign Languages: Recent Trends and Activities.

    ERIC Educational Resources Information Center

    Parks, Carolyn

    Recent experimentation with audio-visual (A-V) materials has provided insight into the language learning process. Researchers and teachers alike have recognized the importance of using A-V materials to achieve goals related to meaningful and relevant communication, retention and recall of language items, non-verbal aspects of communication, and…

  7. Designing between Pedagogies and Cultures: Audio-Visual Chinese Language Resources for Australian Schools

    ERIC Educational Resources Information Center

    Yuan, Yifeng; Shen, Huizhong

    2016-01-01

    This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…

  8. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    ERIC Educational Resources Information Center

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  9. Vocabulary Teaching in Foreign Language via Audiovisual Method Technique of Listening and Following Writing Scripts

    ERIC Educational Resources Information Center

    Bozavli, Ebubekir

    2017-01-01

    The objective of this study is to compare the effects of conventional and audiovisual methods on learning efficiency and success of retention with regard to vocabulary teaching in a foreign language. The research sample consists of 21 undergraduate and 7 graduate students studying at the Department of French Language Teaching, Kazim Karabekir Faculty of…

  10. Audiovisual Materials and Techniques for Teaching Foreign Languages: Recent Trends and Activities.

    ERIC Educational Resources Information Center

    Parks, Carolyn

    Recent experimentation with audio-visual (A-V) materials has provided insight into the language learning process. Researchers and teachers alike have recognized the importance of using A-V materials to achieve goals related to meaningful and relevant communication, retention and recall of language items, non-verbal aspects of communication, and…

  11. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  12. Incidental learning of abstract rules for non-dominant word orders.

    PubMed

    Francis, Andrea P; Schmidt, Gwen L; Carr, Thomas H; Clegg, Benjamin A

    2009-01-01

    One way in which adult second language learners may acquire a word order that differs from their native language word order is through exposure-based incidental learning, but little is known about that process and what constrains it. The current studies examine whether a non-dominant word order can be learned incidentally, and if so, whether the rule can be generalized to new words not previously seen in the non-dominant order. Two studies examined the incidental learning of rules underlying the order of nouns and verbs in three-word strings. The self-timed reading times of native English speakers decreased as a result of practice with a non-dominant rule (words ordered either as "verb noun noun" or "noun noun verb"). The same pattern of results was also found for new words ordered according to the previously encountered rule, suggesting that learning generalized beyond the specific instances encountered. A second experiment showed that such rule learning could also occur when the nouns were replaced with pronounceable pseudowords. Learning was therefore possible in the absence of any pre-existing relationships between the items. Theoretical and educational implications are discussed.

  13. Audio-visual simultaneity judgments.

    PubMed

    Zampini, Massimiliano; Guest, Steve; Shore, David I; Spence, Charles

    2005-04-01

    The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.

  14. An Audio-Visual Approach to Training

    ERIC Educational Resources Information Center

    Hearnshaw, Trevor

    1977-01-01

    Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)

  15. An Audio-Visual Approach to Training

    ERIC Educational Resources Information Center

    Hearnshaw, Trevor

    1977-01-01

    Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)

  16. No Solid Empirical Evidence for the SOLID (Serial Order Learning Impairment) Hypothesis of Dyslexia

    ERIC Educational Resources Information Center

    Staels, Eva; Van den Broeck, Wim

    2015-01-01

    This article reports on 2 studies that attempted to replicate the findings of a study by Szmalec, Loncke, Page, and Duyck (2011) on Hebb repetition learning in dyslexic individuals, from which these authors concluded that dyslexics suffer from a deficit in long-term learning of serial order information. In 2 experiments, 1 on adolescents (N = 59)…

  17. Beyond Course Availability: An Investigation into Order and Concurrency Effects of Undergraduate Programming Courses on Learning.

    ERIC Educational Resources Information Center

    Urbaczewski, Andrew; Urbaczewski, Lise

    The objective of this study was to find the answers to two primary research questions: "Do students learn programming languages better when they are offered in a particular order, such as 4th generation languages before 3rd generation languages?"; and "Do students learn programming languages better when they are taken in separate semesters as…

  18. Strategic Learning in Youth with Traumatic Brain Injury: Evidence for Stall in Higher-Order Cognition

    ERIC Educational Resources Information Center

    Gamino, Jacquelyn F.; Chapman, Sandra B.; Cook, Lori G.

    2009-01-01

    Little is known about strategic learning ability in preteens and adolescents with traumatic brain injury (TBI). Strategic learning is the ability to combine and synthesize details to form abstracted gist-based meanings, a higher-order cognitive skill associated with frontal lobe functions and higher classroom performance. Summarization tasks were…

  19. Strategic Learning in Youth with Traumatic Brain Injury: Evidence for Stall in Higher-Order Cognition

    ERIC Educational Resources Information Center

    Gamino, Jacquelyn F.; Chapman, Sandra B.; Cook, Lori G.

    2009-01-01

    Little is known about strategic learning ability in preteens and adolescents with traumatic brain injury (TBI). Strategic learning is the ability to combine and synthesize details to form abstracted gist-based meanings, a higher-order cognitive skill associated with frontal lobe functions and higher classroom performance. Summarization tasks were…

  20. U.S. Government Films, 1971 Supplement; A Catalog of Audiovisual Materials for Rent and Sale by the National Audiovisual Center.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.

    The first edition of the National Audiovisual Center sales catalog (LI 003875) is updated by this supplement. Changes in price and order number as well as deletions from the 1969 edition, are noted in this 1971 version. Purchase and rental information for the sound films and silent filmstrips is provided. The broad subject categories are:…

  1. The audiovisual tau effect in infancy.

    PubMed

    Kawabe, Takahiro; Shirai, Nobu; Wada, Yuji; Miura, Kayo; Kanazawa, So; Yamaguchi, Masami K

    2010-03-03

    Perceived spatial intervals between successive flashes can be distorted by varying the temporal intervals between them (the "tau effect"). A previous study showed that a tau effect for visual flashes could be induced when they were accompanied by auditory beeps with varied temporal intervals (an audiovisual tau effect). We conducted two experiments to investigate whether the audiovisual tau effect occurs in infancy. Forty-eight infants aged 5-8 months took part in this study. In Experiment 1, infants were familiarized with audiovisual stimuli consisting of three pairs of two flashes and three beeps. The onsets of the first and third pairs of flashes were respectively matched to those of the first and third beeps. The onset of the second pair of flashes was separated from that of the second beep by 150 ms. Following the familiarization phase, infants were exposed to a test stimulus composed of two vertical arrays of three static flashes with different spatial intervals. We hypothesized that if the audiovisual tau effect occurred in infancy then infants would preferentially look at the flash array with spatial intervals that would be expected to be different from the perceived spatial intervals between flashes they were exposed to in the familiarization phase. The results of Experiment 1 supported this hypothesis. In Experiment 2, the first and third beeps were removed from the familiarization stimuli, resulting in the disappearance of the audiovisual tau effect. This indicates that the modulation of temporal intervals among flashes by beeps was essential for the audiovisual tau effect to occur (Experiment 2). These results suggest that the cross-modal processing that underlies the audiovisual tau effect occurs even in early infancy. In particular, the results indicate that audiovisual modulation of temporal intervals emerges by 5-8 months of age.

  2. Multilabel image classification via high-order label correlation driven active learning.

    PubMed

    Zhang, Bang; Wang, Yang; Chen, Fang

    2014-03-01

    Supervised machine learning techniques have been applied to multilabel image classification problems with tremendous success. Despite disparate learning mechanisms, their performance relies heavily on the quality of training images. However, the acquisition of training images requires significant effort from human annotators, which hinders the application of supervised learning techniques to large-scale problems. In this paper, we propose a high-order label correlation driven active learning (HoAL) approach that allows the iterative learning algorithm itself to select the informative example-label pairs from which it learns, so as to build an accurate classifier with less annotation effort. Four crucial issues are considered by the proposed HoAL: 1) unlike binary cases, the selection granularity for multilabel active learning needs to be refined from example to example-label pair; 2) different labels are seldom independent, and label correlations provide critical information for efficient learning; 3) in addition to pairwise label correlations, high-order label correlations are also informative for multilabel active learning; and 4) since the number of label combinations increases exponentially with the number of labels, an efficient mining method is required to discover informative label correlations. The proposed approach is tested on public data sets, and the empirical results demonstrate its effectiveness.
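
    To make the pair-selection idea in this record more concrete, the following minimal Python sketch scores unannotated example-label pairs by mixing per-pair prediction uncertainty with a pairwise label-correlation bonus. It is only an illustration under stated assumptions, not the authors' HoAL algorithm: the function name select_pairs, the uncertainty surrogate, the alpha trade-off, and the omission of the higher-order correlation mining step are all choices made here for brevity.

      # Hypothetical sketch: score unannotated (example, label) pairs by mixing
      # per-pair prediction uncertainty with a pairwise label-correlation bonus.
      # The paper's higher-order correlation mining is intentionally omitted.
      import numpy as np

      def select_pairs(probs, known_mask, label_corr, n_queries=10, alpha=0.5):
          """Rank unknown (example, label) pairs for annotation.

          probs      : (n_examples, n_labels) predicted label probabilities
          known_mask : boolean array, True where a label is already annotated
          label_corr : (n_labels, n_labels) pairwise label-correlation matrix
          """
          n_examples, n_labels = probs.shape
          # Pair uncertainty: maximal where the predicted probability is 0.5.
          uncertainty = 1.0 - 2.0 * np.abs(probs - 0.5)
          # Correlation bonus: a label is more informative for an example when it
          # correlates strongly with labels already annotated for that example.
          bonus = np.zeros_like(probs)
          for i in range(n_examples):
              known = np.where(known_mask[i])[0]
              if known.size:
                  bonus[i] = np.abs(label_corr[:, known]).mean(axis=1)
          score = alpha * uncertainty + (1.0 - alpha) * bonus
          score[known_mask] = -np.inf          # never re-query annotated pairs
          top = np.argsort(score, axis=None)[::-1][:n_queries]
          return [tuple(np.unravel_index(k, score.shape)) for k in top]

    In the approach the abstract describes, correlations beyond label pairs are also mined and exploited; extending the bonus term to such combinations is exactly where the efficient mining method mentioned in point 4 would come in.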

  3. Perception of Dynamic and Static Audiovisual Sequences in 3- and 4-Month-Old Infants

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2008-01-01

    This study investigated perception of audiovisual sequences in 3- and 4-month-old infants. Infants were habituated to sequences consisting of moving/sounding or looming/sounding objects and then tested for their ability to detect changes in the order of the objects, sounds, or both. Results showed that 3-month-olds perceived the order of 3-element…

  4. Audiovisual segregation in cochlear implant users.

    PubMed

    Landry, Simon; Bacon, Benoit A; Leybaert, Jacqueline; Gagné, Jean-Pierre; Champoux, François

    2012-01-01

    It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users that are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e. speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sounds, and (iii) unaltered speech sounds. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  5. Audiovisual Segregation in Cochlear Implant Users

    PubMed Central

    Landry, Simon; Bacon, Benoit A.; Leybaert, Jacqueline; Gagné, Jean-Pierre; Champoux, François

    2012-01-01

    It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users that are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e. speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sounds, and (iii) unaltered speech sounds. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users. PMID:22427963

  6. Audiovisual preconditioning enhances the efficacy of an anatomical dissection course: A randomised study.

    PubMed

    Collins, Anne M; Quinlan, Christine S; Dolan, Roisin T; O'Neill, Shane P; Tierney, Paul; Cronin, Kevin J; Ridgway, Paul F

    2015-07-01

    The benefits of incorporating audiovisual materials into learning are well recognised. The outcome of integrating such a modality into anatomical education has not been reported previously. The aim of this randomised study was to determine whether audiovisual preconditioning is a useful adjunct to learning at an upper limb dissection course. Prior to instruction, participants completed a standardised pre-course multiple-choice questionnaire (MCQ). The intervention group was subsequently shown a video with a pre-recorded commentary. Following initial dissection, both groups completed a second MCQ. The final MCQ was completed at the conclusion of the course. Statistical analysis confirmed a significant improvement in the performance of both groups over the duration of the three MCQs. The intervention group significantly outperformed their control group counterparts immediately following audiovisual preconditioning and in the post-course MCQ. Audiovisual preconditioning is a practical and effective tool that should be incorporated into future course curricula to optimise learning. Level of evidence: This study appraises an intervention in medical education. Kirkpatrick Level 2b (modification of knowledge).

  7. Pure perceptual-based learning of second-, third-, and fourth-order sequential probabilities.

    PubMed

    Remillard, Gilbert

    2011-07-01

    There is evidence that sequence learning in the traditional serial reaction time task (SRTT), where target location is the response dimension, and sequence learning in the perceptual SRTT, where target location is not the response dimension, are handled by different mechanisms. The ability of the latter mechanism to learn sequential contingencies that can be learned by the former mechanism was examined. Prior research has established that people can learn second-, third-, and fourth-order probabilities in the traditional SRTT. The present study reveals that people can learn such probabilities in the perceptual SRTT. This suggests that the two mechanisms may have similar architectures. A possible neural basis of the two mechanisms is discussed.

  8. SUMMARY REPORT ON THE LAKE OKOBOJI AUDIOVISUAL LEADERSHIP CONFERENCE (10TH, MILFORD, IOWA, AUGUST 16-20, 1964).

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

    This is a series of working papers aimed at audiovisual specialists. The keynote address, committee reports, and conference summary concern learning space and educational media in instructional programs. Reports deal with a behavioral analysis approach to curriculum and space considerations, sources of information and research on learning space,…

  9. An Investigation of Higher-Order Thinking Skills in Smaller Learning Community Social Studies Classrooms

    ERIC Educational Resources Information Center

    Fischer, Christopher; Bol, Linda; Pribesh, Shana

    2011-01-01

    This study investigated the extent to which higher-order thinking skills are promoted in social studies classes in high schools that are implementing smaller learning communities (SLCs). Data collection in this mixed-methods study included classroom observations and in-depth interviews. Findings indicated that higher-order thinking was rarely…

  10. An Investigation of Higher-Order Thinking Skills in Smaller Learning Community Social Studies Classrooms

    ERIC Educational Resources Information Center

    Fischer, Christopher; Bol, Linda; Pribesh, Shana

    2011-01-01

    This study investigated the extent to which higher-order thinking skills are promoted in social studies classes in high schools that are implementing smaller learning communities (SLCs). Data collection in this mixed-methods study included classroom observations and in-depth interviews. Findings indicated that higher-order thinking was rarely…

  11. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  12. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    PubMed

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.

  13. Audiovisual Media Career Ladder, AFSC 231X0.

    DTIC Science & Technology

    1982-01-01

    [Garbled table excerpt from the occupational survey report; only representative task statements are recoverable, e.g., "Issue audiovisual products or equipment for unit retention pending use (RPU)," "Operate audiovisual library equipment for previews or projectionist…," "Maintain confirmation or denial of film request (AF Form 2014)," and "Inspect take-up reels for damage."]

  14. Lip movements affect infants' audiovisual speech perception.

    PubMed

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  15. Govt. Pubs: U.S. Government Produced Audiovisual Materials.

    ERIC Educational Resources Information Center

    Korman, Richard

    1981-01-01

    Describes the availability of United States government-produced audiovisual materials and discusses two audiovisual clearinghouses--the National Audiovisual Center (NAC) and the National Library of Medicine (NLM). Finding aids made available by NAC, NLM, and other government agencies are mentioned. NAC and the U.S. Government Printing Office…

  16. Guidelines for Audio-Visual Services in Academic Libraries.

    ERIC Educational Resources Information Center

    Association of Coll. and Research Libraries, Chicago, IL.

    The purpose of these guidelines, prepared by the Audio-Visual Committee of the Association of College and Research Libraries, is to supply basic assistance to those academic libraries that will assume all or a major portion of an audio-visual program. They attempt to assist librarians to recognize and develop their audio-visual responsibilities…

  17. The Effects of an Audio-Visual Training Program in Dyslexic Children

    ERIC Educational Resources Information Center

    Magnan, Annie; Ecalle, Jean; Veuillet, Evelyne; Collet, Lionel

    2004-01-01

    A research project was conducted in order to investigate the usefulness of intensive audio-visual training administered to children with dyslexia involving daily voicing exercises. In this study, the children received such voicing training (experimental group) for 30 min a day, 4 days a week, over 5 weeks. They were assessed on a reading task…

  18. Learning Partnership: Students and Faculty Learning Together to Facilitate Reflection and Higher Order Thinking in a Blended Course

    ERIC Educational Resources Information Center

    McDonald, Paige L.; Straker, Howard O.; Schlumpf, Karen S.; Plack, Margaret M.

    2014-01-01

    This article discusses a learning partnership among faculty and students to influence reflective practice in a blended course. Faculty redesigned a traditional face-to-face (FTF) introductory physician assistant course into a blended course to promote increased reflection and higher order thinking. Early student reflective writing suggested a need…

  19. Learning assignment order of instances for the constrained K-means clustering algorithm.

    PubMed

    Hong, Yi; Kwong, Sam

    2009-04-01

    The sensitivity of the constrained K-means clustering algorithm (Cop-Kmeans) to the assignment order of instances is studied, and a novel assignment order learning method for Cop-Kmeans, termed the clustering Uncertainty-based Assignment order Learning Algorithm (UALA), is proposed in this paper. The main idea of UALA is to rank all instances in the data set according to their clustering uncertainties, calculated using ensembles of multiple clustering algorithms. Experimental results on several real data sets with artificial instance-level constraints demonstrate that UALA can identify a good assignment order of instances for Cop-Kmeans. In addition, the effects of ensemble size on the performance of UALA are analyzed, and the generalization property of Cop-Kmeans is also studied.
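
    As a rough illustration of the idea summarized above, the Python sketch below estimates each instance's clustering uncertainty from disagreement across an ensemble of plain K-means runs and derives an assignment order from it. The uncertainty measure (pairwise co-assignment variance) and the helper names clustering_uncertainty and assignment_order are assumptions made for this sketch; the paper's exact definition and its integration with Cop-Kmeans are not reproduced.

      # Hypothetical sketch: estimate each instance's clustering uncertainty from
      # disagreement across an ensemble of plain K-means runs, then order the
      # instances from most to least certain before constrained assignment.
      import numpy as np
      from sklearn.cluster import KMeans

      def clustering_uncertainty(X, k, n_runs=10, seed=0):
          rng = np.random.RandomState(seed)
          runs = np.stack([
              KMeans(n_clusters=k, n_init=10,
                     random_state=rng.randint(1 << 30)).fit_predict(X)
              for _ in range(n_runs)
          ])                                    # shape: (n_runs, n_samples)
          n = X.shape[0]
          co = np.zeros((n, n))                 # pairwise co-assignment frequency
          for labels in runs:
              co += (labels[:, None] == labels[None, :]).astype(float)
          co /= n_runs
          # An instance is uncertain when its co-assignments hover around 0.5.
          return (co * (1.0 - co)).mean(axis=1)

      def assignment_order(X, k):
          """Indices of X sorted from most to least certain."""
          return np.argsort(clustering_uncertainty(X, k))

    The resulting order would then be handed to a Cop-Kmeans implementation so that, consistent with the abstract's premise that assignment order matters, the most reliably clustered instances are assigned before the ambiguous ones.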

  20. Learning to Read in Order to Learn: Building a Program for Upper-Elementary Students

    ERIC Educational Resources Information Center

    Liben, David; Liben, Meredith

    2005-01-01

    After instituting a successful K-2 reading program at the Family Academy in Harlem, the authors of this article faced a new challenge. They set out to learn everything they could about reading comprehension, which they realized was the key to expanding their older students' knowledge of the world. They developed a K-2 reading program. It was…

  1. Mental representations of magnitude and order: a dissociation by sensorimotor learning.

    PubMed

    Badets, Arnaud; Boutin, Arnaud; Heuer, Herbert

    2015-05-01

    Numbers and spatially directed actions share cognitive representations. This assertion is derived from studies that have demonstrated that the processing of small- and large-magnitude numbers facilitates motor behaviors that are directed to the left and right, respectively. However, little is known about the role of sensorimotor learning for such number-action associations. In this study, we show that sensorimotor learning in a serial reaction-time task can modify the associations between number magnitudes and spatially directed movements. Experiments 1 and 3 revealed that this effect is present only for the learned sequence and does not transfer to a novel unpracticed sequence. Experiments 2 and 4 showed that the modification of stimulus-action associations by sensorimotor learning does not occur for other sets of ordered stimuli such as letters of the alphabet. These results strongly suggest that numbers and actions share a common magnitude representation that differs from the common order representation shared by letters and spatially directed actions. Only the magnitude representation, but not the order representation, can be modified episodically by sensorimotor learning.

  2. Serial-order learning impairment and hypersensitivity-to-interference in dyscalculia.

    PubMed

    De Visscher, Alice; Szmalec, Arnaud; Van Der Linden, Lize; Noël, Marie-Pascale

    2015-11-01

    In the context of heterogeneity, the different profiles of dyscalculia are still hypothetical. This study aims to link features of mathematical difficulties to certain potential etiologies. First, we wanted to test the hypothesis of a serial-order learning deficit in adults with dyscalculia. For this purpose we used a Hebb repetition learning task. Second, we wanted to explore a recent hypothesis according to which hypersensitivity-to-interference hampers the storage of arithmetic facts and leads to a particular profile of dyscalculia. We therefore used interfering and non-interfering repeated sequences in the Hebb paradigm. A final test was used to assess the memory trace of the non-interfering sequence and the capacity to manipulate it. In line with our predictions, we observed that people with dyscalculia who show good conceptual knowledge in mathematics but impaired arithmetic fluency suffer from increased sensitivity-to-interference compared to controls. Secondly, people with dyscalculia who show a deficit in a global mathematical test suffer from a serial-order learning deficit characterized by a slow learning and a quick degradation of the memory trace of the repeated sequence. A serial-order learning impairment could be one of the explanations for a basic numerical deficit, since it is necessary for the number-word sequence acquisition. Among the different profiles of dyscalculia, this study provides new evidence and refinement for two particular profiles.

  3. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution

    PubMed Central

    Phillips, Steven; Wilson, William H.

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other “structurally related” cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture. PMID:27505411

  4. Learning and Generalization on Asynchrony and Order Tasks at Sound Offset: Implications for Underlying Neural Circuitry

    ERIC Educational Resources Information Center

    Mossbridge, Julia A.; Scissors, Beth N.; Wright, Beverly A.

    2008-01-01

    Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel…

  5. Changes in Teaching in Order to Help Students with Learning Difficulties Improve in Cypriot Primary Classes

    ERIC Educational Resources Information Center

    Loizou, Florentia

    2016-01-01

    This article aims to explore what changes two Cypriot primary school teachers brought in their teaching in order to help students with learning difficulties improve in their classes. The study was qualitative and used non-participant observation in two primary classrooms in different primary schools and semi-structured interviews with the main…

  6. Developing Higher Order Reading Comprehension Skills in the Learning Disabled Student: A Non-Basal Approach.

    ERIC Educational Resources Information Center

    Solomon, Sheila

    This practicum study evaluated a non-basal, multidisciplinary, multisensory approach to teaching higher order reading comprehension skills to eight fifth-grade learning-disabled students from low socioeconomic minority group backgrounds. The four comprehension skills were: (1) identifying the main idea; (2) determining cause and effect; (3) making…

  7. Effect of Visual Scaffolding and Animation on Students' Performance on Measures of Higher Order Learning

    ERIC Educational Resources Information Center

    Kidwai, Khusro; Munyofu, Mine; Swain, William J; Ausman, Bradley D.; Lin, Huifen; Dwyer, Francis

    2001-01-01

    Animation is being used extensively for instructional purposes; however, it has not been found to be effective on measures of higher order learning (concepts, rules, procedures) within the knowledge acquisition and knowledge integration domains. The purpose of this study was to examine the instructional effectiveness of two visual scaffolding…

  8. Changes in Teaching in Order to Help Students with Learning Difficulties Improve in Cypriot Primary Classes

    ERIC Educational Resources Information Center

    Loizou, Florentia

    2016-01-01

    This article aims to explore what changes two Cypriot primary school teachers brought in their teaching in order to help students with learning difficulties improve in their classes. The study was qualitative and used non-participant observation in two primary classrooms in different primary schools and semi-structured interviews with the main…

  9. The Instructional Effectiveness of Random, Logical and Ordering Theory Generated Learning Hierarchies.

    ERIC Educational Resources Information Center

    Partin, Ronald L.

    The instructional effectiveness of learning programs derived from Gagne-type task analysis, ordering theory analysis, and random sequenced presentation of complex intellectual skills were investigated. Fifty-seven high school students completed a self-instructional program derived from one of the three sequences. No significant differences were…

  10. Second-Order Systematicity of Associative Learning: A Paradox for Classical Compositionality and a Coalgebraic Resolution.

    PubMed

    Phillips, Steven; Wilson, William H

    2016-01-01

    Systematicity is a property of cognitive architecture whereby having certain cognitive capacities implies having certain other "structurally related" cognitive capacities. The predominant classical explanation for systematicity appeals to a notion of common syntactic/symbolic structure among the systematically related capacities. Although learning is a (second-order) cognitive capacity of central interest to cognitive science, a systematic ability to learn certain cognitive capacities, i.e., second-order systematicity, has been given almost no attention in the literature. In this paper, we introduce learned associations as an instance of second-order systematicity that poses a paradox for classical theory, because this form of systematicity involves the kinds of associative constructions that were explicitly rejected by the classical explanation. Our category theoretic explanation of systematicity resolves this problem, because both first and second-order forms of systematicity are derived from the same categorical construction: universal morphisms, which generalize the notion of compositionality of constituent representations to (categorical) compositionality of constituent processes. We derive a model of systematic associative learning based on (co)recursion, which is an instance of a universal construction. These results provide further support for a category theory foundation for cognitive architecture.

  11. Higher-Order Thinking Development through Adaptive Problem-Based Learning

    ERIC Educational Resources Information Center

    Raiyn, Jamal; Tilchin, Oleg

    2015-01-01

    In this paper we propose an approach to organizing Adaptive Problem-Based Learning (PBL) leading to the development of Higher-Order Thinking (HOT) skills and collaborative skills in students. Adaptability of PBL is expressed by changes in fixed instructor assessments caused by the dynamics of developing HOT skills needed for problem solving,…

  12. Reexamining the Literature: The Impact of Peer Tutoring on Higher Order Learning

    ERIC Educational Resources Information Center

    Morano, Stephanie; Riccomini, Paul J.

    2017-01-01

    The body of peer-tutoring intervention research targeting higher order learning (HOL) objectives for middle and high school students with disabilities is reviewed. Peer-tutoring outcomes are synthesized and studies are analyzed to examine the influence of tutoring procedures and study design features on intervention efficacy. Findings show that…

  13. Developing Learning Model Based on Local Culture and Instrument for Mathematical Higher Order Thinking Ability

    ERIC Educational Resources Information Center

    Saragih, Sahat; Napitupulu, E. Elvis; Fauzi, Amin

    2017-01-01

    This research aims to develop a student-centered learning model based on local culture and an instrument of mathematical higher order thinking for junior high school students in the frame of the 2013 Curriculum in North Sumatra, Indonesia. The subjects of the research are seventh graders, taken proportionally at random from three public…

  14. Regularized learning of linear ordered-statistic constant false alarm rate filters (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.

    2017-05-01

    The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations, such as mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which you know the output of the desired LOS. We then extend the learning process with regularization, such that a lower complexity or sparse LOS can be learned. Hence, we discuss what 'lower complexity' means in this context and how to represent that in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
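
    As a concrete illustration of the definitions in this record, the sketch below implements an LOS/OWA operator as a weighted average of a descending-sorted sample and fits its weights to training data with ridge-regularized least squares. The usual OWA weight constraints (non-negativity, summing to one), the specific sparsity-inducing regularizer the abstract alludes to, and the function names are simplifying assumptions rather than the authors' formulation.

      # Hypothetical sketch: an LOS/OWA operator as a weighted average of a
      # descending-sorted sample, with weights fit by ridge-regularized least
      # squares.  The usual OWA constraints (non-negative weights summing to one)
      # and the paper's sparsity-inducing regularizer are omitted for brevity.
      import numpy as np

      def los(sample, weights):
          """Weighted average of the rank-ordered (descending) sample."""
          return np.dot(np.sort(sample)[::-1], weights)

      def learn_los_weights(samples, targets, lam=0.1):
          """Fit LOS weights from (sample, desired output) training pairs."""
          A = -np.sort(-np.asarray(samples, dtype=float), axis=1)  # rows sorted descending
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ np.asarray(targets))

      # Familiar aggregations as special cases of the weight vector (n = 5):
      n = 5
      w_max    = np.eye(n)[0]                  # all weight on the largest value
      w_min    = np.eye(n)[-1]                 # all weight on the smallest value
      w_mean   = np.full(n, 1.0 / n)           # arithmetic mean
      w_median = np.eye(n)[n // 2]             # middle order statistic

    Replacing the ridge penalty with an L1 penalty (for example, via a lasso solver) is one route to the sparser, lower-complexity operators discussed above, and the special-case weight vectors at the end show how minimum, maximum, mean, and median all arise from the same parameterization.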

  15. Assessment Choices to Target Higher Order Learning Outcomes: The Power of Academic Empowerment

    ERIC Educational Resources Information Center

    McNeill, Margot; Gosper, Maree; Xu, Jing

    2012-01-01

    Assessment of higher order learning outcomes such as critical thinking, problem solving and creativity has remained a challenge for universities. While newer technologies such as social networking tools have the potential to support these intended outcomes, academics' assessment practice is slow to change. University mission statements and unit…

  16. Acquisition of Higher Order Intellectual Skills Through a Mastery Learning Paradigm.

    ERIC Educational Resources Information Center

    Denton, Jon J.; Seymour, Jo Ann G.

    This investigation was structured to determine if the acquisition of higher order intellectual processes is tenable for teaching candidates when the independent variables are unit pacing and different remediation strategies for mastery learning. Teaching candidates enrolled in a generic teaching methods course constituted the sample. Nearly half…

  17. Developing Student-Centered Learning Model to Improve High Order Mathematical Thinking Ability

    ERIC Educational Resources Information Center

    Saragih, Sahat; Napitupulu, Elvis

    2015-01-01

    The purpose of this research was to develop a student-centered learning model aimed at improving the higher order mathematical thinking ability of junior high school students, based on the 2013 Curriculum in North Sumatera, Indonesia. The special purpose of this research was to analyze and to formulate the purpose of mathematics lessons in high order…

  18. Learning and Generalization on Asynchrony and Order Tasks at Sound Offset: Implications for Underlying Neural Circuitry

    ERIC Educational Resources Information Center

    Mossbridge, Julia A.; Scissors, Beth N.; Wright, Beverly A.

    2008-01-01

    Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel…

  19. Generalization of temporal order detection skill learning: two experimental studies of children with dyslexia.

    PubMed

    Murphy, C F B; Schochat, E

    2010-04-01

    The objective of this study was to investigate the phenomenon of learning generalization of a specific skill of auditory temporal processing (temporal order detection) in children with dyslexia. The frequency order discrimination task was applied to children with dyslexia and its effect after training was analyzed in the same trained task and in a different task (duration order discrimination) involving temporal order discrimination too. During study 1, one group of subjects with dyslexia (N = 12; mean age = 10.9 ± 1.4 years) was trained and compared to a group of untrained dyslexic children (N = 28; mean age = 10.4 ± 2.1 years). In study 2, the performance of a trained dyslexic group (N = 18; mean age = 10.1 ± 2.1 years) was compared at three different times: 2 months before training, at the beginning of training, and at the end of training. Training was carried out for 2 months using a computer program responsible for training frequency ordering skill. In study 1, the trained group showed significant improvement after training only for the frequency ordering task compared to the untrained group (P < 0.001). In study 2, the children showed improvement in the last interval in both frequency ordering (P < 0.001) and duration ordering (P = 0.01) tasks. These results showed differences regarding the presence of learning generalization of temporal order detection, since there was generalization of learning in only one of the studies. The presence of methodological differences between the studies, as well as the relationship between trained task and evaluated tasks, are discussed.

  20. Longevity and Depreciation of Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  1. Audiovisual Instruction in Pediatric Pharmacy Practice.

    ERIC Educational Resources Information Center

    Mutchie, Kelly D.; And Others

    1981-01-01

    A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)

  2. Longevity and Depreciation of Audiovisual Equipment.

    ERIC Educational Resources Information Center

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  3. Assessing Teacher Competencies with the Audiovisual Portfolio.

    ERIC Educational Resources Information Center

    Williams, Eugene

    1979-01-01

    The audiovisual portfolio helps beginning teachers demonstrate qualities that might not show up in a traditional job interview. Included in the portfolio are a student teaching notebook, a slide narrative presentation, audiocassette tapes, and a videotape of lessons taught during student teaching. (Author)

  4. A Selection of Audiovisual Materials on Disabilities.

    ERIC Educational Resources Information Center

    Mayo, Kathleen; Rider, Sheila

    Disabled persons, family members, organizations, and libraries are often looking for materials to help inform, educate, or challenge them regarding the issues surrounding disabilities. This directory of audiovisual materials available from the State Library of Florida includes materials that present ideas and personal experiences covering a range…

  5. Audio-Visual Speech Perception Is Special

    ERIC Educational Resources Information Center

    Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.

    2005-01-01

    In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…

  6. Attention to touch weakens audiovisual speech integration.

    PubMed

    Alsius, Agnès; Navarra, Jordi; Soto-Faraco, Salvador

    2007-11-01

    One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839-843, 2005) suggest that audiovisual speech integration decreases when visual or auditory attentional resources are depleted. The present study addressed the generalization of this attention constraint by testing whether a similar decrease in multisensory integration is observed when attention demands are imposed on a sensory domain that is not involved in speech perception, such as touch. We measured the McGurk illusion in a dual task paradigm involving a difficult tactile task. The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to a tactile task. This finding is attributed to a modulatory effect on audiovisual integration of speech mediated by supramodal attention limitations. We suggest that the interactions between the attentional system and crossmodal binding mechanisms may be much more extensive and dynamic than it was advanced in previous studies.

  7. Audiovisual Facilities in Schools in Japan Today.

    ERIC Educational Resources Information Center

    Ministry of Education, Tokyo (Japan).

    This paper summarizes the findings of a national survey conducted for the Ministry of Education, Science, and Culture in 1986 to determine the kinds of audiovisual equipment available in Japanese schools, together with the rate of diffusion for the various types of equipment, the amount of teacher participation in training for their use, and the…

  8. A Selection of Audiovisual Materials on Disabilities.

    ERIC Educational Resources Information Center

    Mayo, Kathleen; Rider, Sheila

    Disabled persons, family members, organizations, and libraries are often looking for materials to help inform, educate, or challenge them regarding the issues surrounding disabilities. This directory of audiovisual materials available from the State Library of Florida includes materials that present ideas and personal experiences covering a range…

  9. Active Methodology in the Audiovisual Communication Degree

    ERIC Educational Resources Information Center

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa

    2010-01-01

    The paper describes how the active methodologies of the new European Higher Education Area have been adapted to the new Audiovisual Communication degree, from the perspective of subjects related to interactive communication in Europe. The proposed active methodologies have been experimentally implemented in the new academic…

  10. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  11. Audiovisual Instruction in Pediatric Pharmacy Practice.

    ERIC Educational Resources Information Center

    Mutchie, Kelly D.; And Others

    1981-01-01

    A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)

  14. Dissociating Verbal and Nonverbal Audiovisual Object Processing

    ERIC Educational Resources Information Center

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…

  15. An Acquired Deficit of Audiovisual Speech Processing

    ERIC Educational Resources Information Center

    Hamilton, Roy H.; Shenton, Jeffrey T.; Coslett, H. Branch

    2006-01-01

    We report a 53-year-old patient (AWF) who has an acquired deficit of audiovisual speech integration, characterized by a perceived temporal mismatch between speech sounds and the sight of moving lips. AWF was less accurate on an auditory digit span task with vision of a speaker's face as compared to a condition in which no visual information from…

  16. Linking memory and language: Evidence for a serial-order learning impairment in dyslexia.

    PubMed

    Bogaerts, Louisa; Szmalec, Arnaud; Hachmann, Wibke M; Page, Mike P A; Duyck, Wouter

    2015-01-01

    The present study investigated long-term serial-order learning impairments, operationalized as reduced Hebb repetition learning (HRL), in people with dyslexia. In a first multi-session experiment, we investigated both the persistence of a serial-order learning impairment as well as the long-term retention of serial-order representations, both in a group of Dutch-speaking adults with developmental dyslexia and in a matched control group. In a second experiment, we relied on the assumption that HRL mimics naturalistic word-form acquisition and we investigated the lexicalization of novel word-forms acquired through HRL. First, our results demonstrate that adults with dyslexia are fundamentally impaired in the long-term acquisition of serial-order information. Second, dyslexic and control participants show comparable retention of the long-term serial-order representations in memory over a period of 1 month. Third, the data suggest weaker lexicalization of newly acquired word-forms in the dyslexic group. We discuss the integration of these findings into current theoretical views of dyslexia. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Li, Junmin

    2016-07-01

    In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth-order (M ≥ 2) distributed multi-agent systems. Every follower agent has a higher-order integrator with unknown nonlinear dynamics and input disturbance. The leader's dynamics are a higher-order nonlinear system and are available only to a portion of the follower agents. With distributed initial-state learning, the unified distributed protocols, which combine time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multi-robot system are provided to demonstrate the performance of the proposed approach.
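
    The paper's adaptive fuzzy multi-agent scheme is too involved for a short example, but the basic mechanism of refining a control input across repeated runs can be shown with a plain P-type iterative learning control update for a single scalar follower. The toy dynamics, the gain GAMMA, and the sinusoidal leader trajectory below are illustrative assumptions, not the protocol from the paper.

    import numpy as np

    # Minimal P-type iterative learning control (ILC) sketch: one scalar follower
    # refines its input over repeated runs of the same finite horizon until it
    # tracks the leader trajectory. All settings are toy assumptions.
    A, B = 0.9, 0.5                 # assumed follower dynamics x[t+1] = A*x[t] + B*u[t]
    T = 50                          # samples per iteration (horizon [0, T])
    GAMMA = 1.2                     # assumed ILC learning gain
    reference = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # leader trajectory

    u = np.zeros(T)                 # control input, refined across iterations
    for iteration in range(30):
        x = np.zeros(T + 1)         # identical initial state every iteration
        for t in range(T):
            x[t + 1] = A * x[t] + B * u[t]
        error = reference[1:] - x[1:]       # tracking error over the horizon
        u = u + GAMMA * error               # P-type update in the iteration domain
        print(f"iteration {iteration:2d}  max |error| = {np.abs(error).max():.4f}")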

  18. HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud

    SciTech Connect

    An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.

    1988-01-01

    A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the "building blocks" or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.

  19. Information-Driven Active Audio-Visual Source Localization.

    PubMed

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application.
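
    As a rough, self-contained illustration of the idea described above, the sketch below tracks a sound source with a bearing-only particle filter and greedily picks the robot move whose simulated measurement shrinks the posterior entropy the most. The noise model, the motion candidates, the single-sample approximation of expected information gain, and the omission of resampling are all simplifying assumptions, not the authors' implementation.

    import numpy as np

    # Bearing-only source localization with a particle filter and a greedy,
    # entropy-based choice of the next robot move. All settings are assumed.
    rng = np.random.default_rng(0)
    N = 2000
    particles = rng.uniform(-5, 5, size=(N, 2))   # source-position hypotheses
    weights = np.full(N, 1.0 / N)
    SIGMA = 0.2                                   # assumed bearing noise (rad)
    true_source = np.array([2.0, 1.5])

    def bearing(robot, points):
        d = points - robot
        return np.arctan2(d[..., 1], d[..., 0])

    def update(robot, z, particles, weights):
        diff = (bearing(robot, particles) - z + np.pi) % (2 * np.pi) - np.pi
        w = weights * np.exp(-0.5 * (diff / SIGMA) ** 2) + 1e-300
        return w / w.sum()

    def entropy(w):
        return -np.sum(w * np.log(w + 1e-300))

    robot = np.array([0.0, 0.0])
    for step in range(10):
        z = bearing(robot, true_source) + rng.normal(0, SIGMA)   # noisy measurement
        weights = update(robot, z, particles, weights)
        # Pick the candidate move whose simulated measurement (taken from a source
        # hypothesis sampled out of the current belief) reduces entropy the most.
        # Resampling is omitted for brevity.
        best, best_gain = robot, -np.inf
        for d in ([1, 0], [-1, 0], [0, 1], [0, -1]):
            cand = robot + np.array(d, float)
            guess = particles[rng.choice(N, p=weights)]
            gain = entropy(weights) - entropy(update(cand, bearing(cand, guess), particles, weights))
            if gain > best_gain:
                best, best_gain = cand, gain
        robot = best
        estimate = np.average(particles, axis=0, weights=weights)
        print(f"step {step}: robot at {robot}, source estimate {estimate.round(2)}")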

  20. A Survey of British Research in Audio-Visual Aids, Supplement No. 2, 1974. (Including Cumulative Index 1945-1974).

    ERIC Educational Resources Information Center

    Rodwell, Susie, Comp.

    The second supplement to the new (1972) edition of the Survey of Research in Audiovisual Aids carried out in Great Britain covers the year 1974. Ten separate sections cover the areas of projected media, non-projected media, sound media, radio, moving pictures, television, teaching machines and programed learning, computer-assisted instruction,…

  1. Ordering and finding the best of K > 2 supervised learning algorithms.

    PubMed

    Yildiz, Olcay Taner; Alpaydin, Ethem

    2006-03-01

    Given a data set and a number of supervised learning algorithms, we would like to find the algorithm with the smallest expected error. Existing pairwise tests allow a comparison of two algorithms only; range tests and ANOVA check whether multiple algorithms have the same expected error and cannot be used for finding the smallest. We propose a methodology, the MultiTest algorithm, whereby we order supervised learning algorithms taking into account 1) the result of pairwise statistical tests on expected error (what the data tells us), and 2) our prior preferences, e.g., due to complexity. We define the problem in graph-theoretic terms and propose an algorithm to find the "best" learning algorithm in terms of these two criteria, or in the more general case, order learning algorithms in terms of their "goodness." Simulation results using five classification algorithms on 30 data sets indicate the utility of the method. Our proposed method can be generalized to regression and other loss functions by using a suitable pairwise test.
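
    As a loose, simplified illustration of ordering learners by pairwise tests plus a prior preference (not the MultiTest algorithm itself), the sketch below prefers the significantly better algorithm of each pair and otherwise falls back on an assumed complexity ranking; the error samples and complexity scores are synthetic.

    import itertools
    import numpy as np
    from scipy import stats

    # Toy ordering of learning algorithms from paired error samples: a pairwise
    # test decides the better algorithm when the difference is significant,
    # otherwise an assumed complexity ranking breaks the tie.
    rng = np.random.default_rng(1)
    names = ["knn", "tree", "svm"]
    complexity = {"knn": 1, "tree": 2, "svm": 3}       # assumed prior preference
    errors = {                                         # per-fold CV error (synthetic)
        "knn": rng.normal(0.20, 0.02, 30),
        "tree": rng.normal(0.18, 0.02, 30),
        "svm": rng.normal(0.18, 0.02, 30),
    }

    def prefer(a, b, alpha=0.05):
        """Return the preferred algorithm of a pair."""
        _, p = stats.ttest_rel(errors[a], errors[b])   # paired test on errors
        if p < alpha:                                  # significant difference
            return a if errors[a].mean() < errors[b].mean() else b
        return a if complexity[a] < complexity[b] else b   # tie: prefer simpler

    wins = {n: 0 for n in names}
    for a, b in itertools.combinations(names, 2):
        wins[prefer(a, b)] += 1
    ordering = sorted(names, key=lambda n: (-wins[n], complexity[n]))
    print("ordering (best first):", ordering)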

  2. Media Programmed Learning Systems

    ERIC Educational Resources Information Center

    Butler, Lucius; Inoue, Kazuo

    1972-01-01

    The science department of a Japanese technical high school developed a programmed learning system for first and second year science courses. Utilized are various machines and audiovisual equipment in a total media system. (Author/TS)

  3. Disruption of Broca's Area Alters Higher-order Chunking Processing during Perceptual Sequence Learning.

    PubMed

    Alamia, Andrea; Solopchuk, Oleg; D'Ausilio, Alessandro; Van Bever, Violette; Fadiga, Luciano; Olivier, Etienne; Zénon, Alexandre

    2016-03-01

    Because Broca's area is known to be involved in many cognitive functions, including language, music, and action processing, several attempts have been made to propose a unifying theory of its role that emphasizes a possible contribution to syntactic processing. Recently, we have postulated that Broca's area might be involved in higher-order chunk processing during implicit learning of a motor sequence. Chunking is an information-processing mechanism that consists of grouping consecutive items in a sequence and is likely to be involved in all of the aforementioned cognitive processes. Demonstrating a contribution of Broca's area to chunking during the learning of a nonmotor sequence that does not involve language could shed new light on its function. To address this issue, we used offline MRI-guided TMS in healthy volunteers to disrupt the activity of either the posterior part of Broca's area (left Brodmann's area [BA] 44) or a control site just before participants learned a perceptual sequence structured in distinct hierarchical levels. We found that disruption of the left BA 44 increased the processing time of stimuli representing the boundaries of higher-order chunks and modified the chunking strategy. The current results highlight the possible role of the left BA 44 in building up effector-independent representations of higher-order events in structured sequences. This might clarify the contribution of Broca's area in processing hierarchical structures, a key mechanism in many cognitive functions, such as language and composite actions.

  4. Neural correlates of audiovisual integration of semantic category information.

    PubMed

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-04-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction reflects: processing of acoustical features or classification of the stimuli. To investigate this question, event-related potentials were recorded during a word-categorization task with stimuli presented in the auditory-visual modality. In the experiment, the congruency of the visual and auditory stimuli was manipulated. Results showed that within the window of about 180-210 ms post-stimulus, more positive values were elicited by category-congruent audiovisual stimuli than by category-incongruent audiovisual stimuli. This indicates that the late frontal-central audiovisual interaction is related to audiovisual integration of semantic category information. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex.

    PubMed

    van Atteveldt, Nienke M; Blau, Vera C; Blomert, Leo; Goebel, Rainer

    2010-02-02

    Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network, adapted stronger to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and

  6. Automated social skills training with audiovisual information.

    PubMed

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method for acquiring appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers also take into account visual features (e.g., facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features, namely smiling ratio, yaw, and pitch. An experimental evaluation measures the difference in the effectiveness of social skills training when using audio features versus audiovisual features. Results showed that the visual features were effective in improving users' social skills.

  7. Simulated and Virtual Science Laboratory Experiments: Improving Critical Thinking and Higher-Order Learning Skills

    NASA Astrophysics Data System (ADS)

    Simon, Nicole A.

    Virtual laboratory experiments using interactive computer simulations are not being employed as viable alternatives to the laboratory science curriculum at sufficient rates within higher education. Rote, traditional lab experiments are currently the norm and do not address inquiry, Critical Thinking, and cognition throughout the laboratory experience or link with educational technologies (Pyatt & Sims, 2007, 2011; Trundle & Bell, 2010). A causal-comparative quantitative study was conducted with 150 learners enrolled at a two-year community college to determine the effects of simulation laboratory experiments on Higher-Order Learning, Critical Thinking Skills, and Cognitive Load. The treatment population used simulated experiments, while the non-treatment sections performed traditional expository experiments. A comparison was made using the Revised Two-Factor Study Process survey, the Motivated Strategies for Learning Questionnaire, and the Scientific Attitude Inventory survey, with a repeated-measures ANOVA test for treatment or non-treatment. A main effect of simulated laboratory experiments was found for both Higher-Order Learning [F(1, 148) = 30.32, p = 0.00, eta2 = 0.12] and Critical Thinking Skills [F(1, 148) = 14.64, p = 0.00, eta2 = 0.17], such that simulations showed greater increases than traditional experiments. Post-lab treatment-group self-reports indicated increased marginal means (+4.86) in Higher-Order Learning and Critical Thinking Skills, compared to the non-treatment group (+4.71). Simulations also improved scientific skills and mastery of basic scientific subject matter. It is recommended that additional research recognize that learners' Critical Thinking Skills change due to the different instructional methodologies that occur throughout a semester.
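
    For readers unfamiliar with the analysis named above, the sketch below runs a one-factor repeated-measures ANOVA in statsmodels on synthetic scores; the data, the single within-subject factor, and the built-in effect size are invented and much simpler than the study's actual design.

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # One-factor repeated-measures ANOVA on synthetic scores: each simulated
    # student contributes one score per teaching method.
    rng = np.random.default_rng(3)
    rows = []
    for student in range(30):
        base = rng.normal(70, 8)                   # student-specific baseline
        rows.append({"student": student, "method": "traditional",
                     "score": base + rng.normal(0, 4)})
        rows.append({"student": student, "method": "simulated",
                     "score": base + 5 + rng.normal(0, 4)})
    data = pd.DataFrame(rows)

    result = AnovaRM(data, depvar="score", subject="student", within=["method"]).fit()
    print(result)                                  # F test for the method factor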

  8. Stuttering and speech naturalness: audio and audiovisual judgments.

    PubMed

    Martin, R R; Haroldson, S K

    1992-06-01

    Unsophisticated raters, using 9-point interval scales, judged speech naturalness and stuttering severity of recorded stutterer and nonstutterer speech samples. Raters judged separately the audio-only and audiovisual presentations of each sample. For speech naturalness judgments of stutterer samples, raters invariably judged the audiovisual presentation more unnatural than the audio presentation of the same sample; but for the nonstutterer samples, there was no difference between audio and audiovisual naturalness ratings. Stuttering severity ratings did not differ significantly between audio and audiovisual presentations of the same samples. Rater reliability, interrater agreement, and intrarater agreement for speech naturalness judgments were assessed.

  9. Attributes of Quality in Audiovisual Materials for Health Professionals.

    ERIC Educational Resources Information Center

    Suter, Emanuel; Waddell, Wendy H.

    1981-01-01

    Defines attributes of quality in content, instructional design, technical production, and packaging of audiovisual materials used in the education of health professionals. Seven references are listed. (FM)

  11. Cortical Integration of Audio-Visual Information

    PubMed Central

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  12. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention.
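
    The "race model violation" measure mentioned above has a standard form (Miller's race model inequality). The sketch below computes it from synthetic reaction-time samples; real analyses are done per participant, per percentile, and with the study's actual cueing conditions.

    import numpy as np

    # Miller's race model inequality on synthetic reaction times: the CDF of
    # audiovisual RTs should not exceed the summed unisensory CDFs; any positive
    # difference counts as "race model violation."
    rng = np.random.default_rng(2)
    rt_a = rng.normal(420, 50, 200)        # auditory-only RTs (ms), invented
    rt_v = rng.normal(440, 50, 200)        # visual-only RTs (ms), invented
    rt_av = rng.normal(370, 45, 200)       # audiovisual RTs (ms), invented

    def ecdf(samples, t):
        """Empirical cumulative distribution of `samples` evaluated at times t."""
        return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

    t = np.linspace(250, 600, 100)                          # evaluation grid (ms)
    bound = np.clip(ecdf(rt_a, t) + ecdf(rt_v, t), 0, 1)    # race model bound
    violation = np.clip(ecdf(rt_av, t) - bound, 0, None)    # positive part only

    area = np.sum((violation[:-1] + violation[1:]) / 2 * np.diff(t))  # trapezoid rule
    print(f"max violation: {violation.max():.3f}")
    print(f"area of violation: {area:.2f} ms")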

  13. Alignment of Learning Objectives and Assessments in Therapeutics Courses to Foster Higher-Order Thinking

    PubMed Central

    Hawboldt, John; Doyle, Daniel; Genge, Terri

    2015-01-01

    Objective. To determine whether national educational outcomes, course objectives, and classroom assessments for 2 therapeutics courses were aligned for curricular content and cognitive processes, and if they included higher-order thinking. Method. Document analysis and student focus groups were used. Outcomes, objectives, and assessment tasks were matched for specific therapeutics content and cognitive processes. Anderson and Krathwohl’s Taxonomy was used to define higher-order thinking. Students discussed whether assessments tested objectives and described their thinking when responding to assessments. Results. There were 7 outcomes, 31 objectives, and 412 assessment tasks. The alignment for content and cognitive processes was not satisfactory. Twelve students participated in the focus groups. Students thought more short-answer questions than multiple choice questions matched the objectives for content and required higher-order thinking. Conclusion. The alignment analysis provided data that could be used to reveal and strengthen the enacted curriculum and improve student learning. PMID:25741026

  14. Alignment of learning objectives and assessments in therapeutics courses to foster higher-order thinking.

    PubMed

    FitzPatrick, Beverly; Hawboldt, John; Doyle, Daniel; Genge, Terri

    2015-02-17

    To determine whether national educational outcomes, course objectives, and classroom assessments for 2 therapeutics courses were aligned for curricular content and cognitive processes, and if they included higher-order thinking. Document analysis and student focus groups were used. Outcomes, objectives, and assessment tasks were matched for specific therapeutics content and cognitive processes. Anderson and Krathwohl's Taxonomy was used to define higher-order thinking. Students discussed whether assessments tested objectives and described their thinking when responding to assessments. There were 7 outcomes, 31 objectives, and 412 assessment tasks. The alignment for content and cognitive processes was not satisfactory. Twelve students participated in the focus groups. Students thought more short-answer questions than multiple choice questions matched the objectives for content and required higher-order thinking. The alignment analysis provided data that could be used to reveal and strengthen the enacted curriculum and improve student learning.

  15. Cerebellar contribution to higher and lower order rule learning and cognitive flexibility in mice.

    PubMed

    Dickson, P E; Cairns, J; Goldowitz, D; Mittleman, G

    2017-03-14

    Cognitive flexibility has traditionally been considered a frontal lobe function. However, converging evidence suggests involvement of a larger brain circuit which includes the cerebellum. Reciprocal pathways connecting the cerebellum to the prefrontal cortex provide a biological substrate through which the cerebellum may modulate higher cognitive functions, and it has been observed that cognitive inflexibility and cerebellar pathology co-occur in psychiatric disorders (e.g., autism, schizophrenia, addiction). However, the degree to which the cerebellum contributes to distinct forms of cognitive flexibility and rule learning is unknown. We tested lurcher↔wildtype aggregation chimeras which lose 0-100% of cerebellar Purkinje cells during development on a touchscreen-mediated attentional set-shifting task to assess the contribution of the cerebellum to higher and lower order rule learning and cognitive flexibility. Purkinje cells, the sole output of the cerebellar cortex, ranged from 0 to 108,390 in tested mice. Reversal learning and extradimensional set-shifting were impaired in mice with ⩾95% Purkinje cell loss. Cognitive deficits were unrelated to motor deficits in ataxic mice. Acquisition of a simple visual discrimination and an attentional-set were unrelated to Purkinje cells. A positive relationship was observed between Purkinje cells and errors when exemplars from a novel, non-relevant dimension were introduced. Collectively, these data suggest that the cerebellum contributes to higher order cognitive flexibility, lower order cognitive flexibility, and attention to novel stimuli, but not the acquisition of higher and lower order rules. These data indicate that the cerebellar pathology observed in psychiatric disorders may underlie deficits involving cognitive flexibility and attention to novel stimuli.

  16. Word sense disambiguation via high order of learning in complex networks

    NASA Astrophysics Data System (ADS)

    Silva, Thiago C.; Amancio, Diego R.

    2012-06-01

    Complex networks have been employed to model many real systems and as a modeling tool in a myriad of applications. In this paper, we apply the framework of complex networks to the problem of supervised classification in the word sense disambiguation task, which consists of deriving a function from the supervised (or labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. On the other hand, the human (animal) brain performs both low- and high-level orders of learning and readily identifies patterns according to the semantic meaning of the input data. In this paper, we apply a hybrid technique which encompasses both types of learning to the field of word sense disambiguation and show that the high-level order of learning can indeed improve the accuracy rate of the model. This evidence demonstrates that the internal structures formed by the words present patterns that, generally, cannot be correctly unveiled by traditional techniques alone. Finally, we exhibit the behavior of the model for different weights of the low- and high-level classifiers by plotting decision boundaries. This study helps one to better understand the effectiveness of the model.
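
    As a very rough, simplified illustration of mixing a "low-level" feature classifier with a "high-level" structural score (not the authors' complex-network formulation), the sketch below blends logistic-regression probabilities with the label agreement among a point's nearest training neighbors; the data, the neighborhood size, and the mixing weight ALPHA are all invented.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # Blend a "low-level" feature classifier with a crude "high-level" structural
    # score: the share of each test point's nearest training neighbors carrying
    # each label.
    X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                               random_state=0)
    X_train, y_train, X_test, y_test = X[:250], y[:250], X[250:], y[250:]

    low = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    p_low = low.predict_proba(X_test)                # low-level class probabilities

    nn = NearestNeighbors(n_neighbors=7).fit(X_train)
    _, idx = nn.kneighbors(X_test)
    neighbor_labels = y_train[idx]                   # labels of nearby training points
    p_high = np.stack([(neighbor_labels == c).mean(axis=1) for c in (0, 1)], axis=1)

    ALPHA = 0.6                                      # assumed weight on the low-level term
    pred = np.argmax(ALPHA * p_low + (1 - ALPHA) * p_high, axis=1)
    print("hybrid accuracy:", (pred == y_test).mean())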

  17. Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-10-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.

  18. Assessment of the learning curve from the California Verbal Learning Test-Children's Version with the first-order system transfer function.

    PubMed

    Stepanov, Igor I; Abramson, Charles I; Warschausky, Seth

    2011-01-01

    A mathematical model is proposed to measure the learning curve in the California Verbal Learning Test-Children's Version. The model is based on the first-order system transfer function in the form Y = B3*exp[-B2*(X-1)] + B4*{1 - exp[-B2*(X-1)]}, where X is the trial number, Y is the number of correctly recalled words, B2 is the learning rate, B3 is interpreted as readiness to learn, and B4 as the ability to learn. Children's readiness to learn and ability to learn were lower than those of adults. Modeling revealed that girls had greater readiness to learn and ability to learn than boys.
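
    The transfer-function model quoted above can be fitted directly with nonlinear least squares. The sketch below does so with scipy on invented recall counts; only the model equation comes from the abstract.

    import numpy as np
    from scipy.optimize import curve_fit

    # Fit the first-order learning-curve model
    # Y = B3*exp[-B2*(X-1)] + B4*{1 - exp[-B2*(X-1)]} to per-trial recall data.
    def learning_curve(x, b2, b3, b4):
        decay = np.exp(-b2 * (x - 1))
        return b3 * decay + b4 * (1 - decay)

    trials = np.arange(1, 6)                            # five learning trials
    recalled = np.array([5.0, 8.0, 10.0, 11.0, 12.0])   # words recalled (synthetic)

    (b2, b3, b4), _ = curve_fit(learning_curve, trials, recalled, p0=[0.5, 5.0, 12.0])
    print(f"learning rate B2      = {b2:.2f}")
    print(f"readiness to learn B3 = {b3:.2f}")
    print(f"ability to learn B4   = {b4:.2f}")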

  19. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed Central

    Lerner, Itamar; Armstrong, Blair C.; Frost, Ram

    2014-01-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding to be a core and universal principle of the reading process. Here we argue that such an approach neither captures cross-linguistic differences in transposed-letter effects nor explains them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that, in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order are also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521

  20. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed

    Lerner, Itamar; Armstrong, Blair C; Frost, Ram

    2014-11-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding to be a core and universal principle of the reading process. Here we argue that such an approach neither captures cross-linguistic differences in transposed-letter effects nor explains them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that, in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order are also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition.

  1. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals.

  2. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  3. A Step Into Service Learning Is A Step Into Higher Order Thinking

    NASA Astrophysics Data System (ADS)

    O'Connell, S.

    2010-12-01

    Students, especially beginning college students, often consider science courses to be about remembering and regurgitating, not creative, and of little social relevance. As scientists we know this isn’t true. How do we counteract this sentiment among students? Incorporating service learning, probably better called project learning, into our classes is one way. As one “non-science” student who was taking two science service-learning courses said, “If it’s a service-learning course you know it’s going to be interesting.” Service learning means that some learning takes place in the community. The community component increases understanding of the material being studied, promotes higher-order thinking, and provides a benefit for someone else. Students have confirmed that the experience shows them that their knowledge is needed by the community and, for some, reinforces their commitment to continued civic engagement. I’ll give three examples, with the community activity growing in importance in the course and in the community: a single exercise, a small project, and a project that is the focus of the class. All of the activities use reflective writing to increase analysis and synthesis. An example of a single exercise could be participating in an event related to your course, for example, a zoning board meeting or a trip to a wastewater treatment plant. Preparation for the trip should include reading. After the event, students synthesize and analyze the activity through a series of questions emphasizing reflection. A two- to four-class assignment might include expanding the single-day activity or having students familiarize themselves with a course topic, interview a person, prepare a podcast of the interview, and reflect upon the experience. The most comprehensive approach is one where the class focuses on a community project, e.g., Tim Ku’s geochemistry course (this session). Another class that lends itself easily to a comprehensive service-learning approach is Geographic Information

  4. Strategy-effects in prefrontal cortex during learning of higher-order S-R rules.

    PubMed

    Wolfensteller, Uta; von Cramon, D Yves

    2011-07-15

    All of us regularly face situations that require the integration of the available information at hand with the established rules that guide behavior in order to generate the most appropriate action. But where individuals differ from one another is most certainly in terms of the different strategies that are adopted during this process. A previous study revealed differential brain activation patterns for the implementation of well established higher-order stimulus-response (S-R) rules depending on inter-individual strategy differences (Wolfensteller and von Cramon, 2010). This raises the question of how these strategies evolve or which neurocognitive mechanisms underlie these inter-individual strategy differences. Using functional magnetic resonance imaging (fMRI), the present study revealed striking strategy-effects across regions of the lateral prefrontal cortex during the implementation of higher-order S-R rules at an early stage of learning. The left rostrolateral prefrontal cortex displayed a quantitative strategy-effect, such that activation during rule integration based on a mismatch was related to the degree to which participants continued to rely on rule integration. A quantitative strategy ceiling effect was observed for the left inferior frontal junction area. Conversely, the right inferior frontal gyrus displayed a qualitative strategy-effect such that participants who at a later point relied on an item-based strategy showed stronger activations in this region compared to those who continued with the rule integration strategy. Together, the present findings suggest that a certain amount of rule integration is mandatory when participants start to learn higher-order rules. The more efficient item-based strategy that evolves later appears to initially require the recruitment of additional cognitive resources in order to shield the currently relevant S-R association from interfering information.

  5. Interpolation-based reduced-order modelling for steady transonic flows via manifold learning

    NASA Astrophysics Data System (ADS)

    Franz, T.; Zimmermann, R.; Görtz, S.; Karcher, N.

    2014-03-01

    This paper presents a parametric reduced-order model (ROM) based on manifold learning (ML) for use in steady transonic aerodynamic applications. The main objective of this work is to derive an efficient ROM that exploits the low-dimensional nonlinear solution manifold to ensure an improved treatment of the nonlinearities involved in varying the inflow conditions to obtain an accurate prediction of shocks. The reduced-order representation of the data is derived using the Isomap ML method, which is applied to a set of sampled computational fluid dynamics (CFD) data. In order to develop a ROM that has the ability to predict approximate CFD solutions at untried parameter combinations, Isomap is coupled with an interpolation method to capture the variations in parameters like the angle of attack or the Mach number. Furthermore, an approximate local inverse mapping from the reduced-order representation to the full CFD solution space is introduced. The proposed ROM, called Isomap+I, is applied to the two-dimensional NACA 64A010 airfoil and to the 3D LANN wing. The results are compared to those obtained by proper orthogonal decomposition plus interpolation (POD+I) and to the full-order CFD model.
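
    A compact way to convey the Isomap-plus-interpolation idea is to embed synthetic "snapshots" with Isomap, interpolate the reduced coordinates over the flow parameter, and reconstruct by blending nearby training snapshots. The tanh shock profiles, the neighborhood blending used as an approximate inverse map, and all settings below are illustrative assumptions rather than the authors' method.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from sklearn.manifold import Isomap

    # Isomap + interpolation in miniature: embed snapshots, interpolate reduced
    # coordinates over a single parameter, reconstruct an unseen snapshot by
    # blending its nearest training snapshots.
    x = np.linspace(0, 1, 200)                     # spatial grid
    params = np.linspace(0.3, 0.7, 25)             # training parameter samples
    snapshots = np.array([np.tanh(60 * (x - p)) for p in params])   # stand-in "CFD" data

    iso = Isomap(n_neighbors=6, n_components=2)
    coords = iso.fit_transform(snapshots)          # reduced-order representation
    rom = RBFInterpolator(params[:, None], coords) # parameter -> reduced coordinates

    def predict(p_new, k=4):
        """Approximate the snapshot at an untried parameter value."""
        c_new = rom(np.array([[p_new]]))[0]
        dist = np.linalg.norm(coords - c_new, axis=1)
        idx = np.argsort(dist)[:k]                 # k nearest training snapshots
        w = 1.0 / (dist[idx] + 1e-12)
        return (w[:, None] * snapshots[idx]).sum(axis=0) / w.sum()

    p_test = 0.52
    error = np.abs(predict(p_test) - np.tanh(60 * (x - p_test))).max()
    print(f"max abs reconstruction error at p = {p_test}: {error:.3f}")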

  6. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    ERIC Educational Resources Information Center

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  7. Trigger Videos on the Web: Impact of Audiovisual Design

    ERIC Educational Resources Information Center

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  8. The Audio-Visual Marketing Handbook for Independent Schools.

    ERIC Educational Resources Information Center

    Griffith, Tom

    This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…

  9. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  10. Visual anticipatory information modulates multisensory interactions of artificial audiovisual stimuli.

    PubMed

    Vroomen, Jean; Stekelenburg, Jeroen J

    2010-07-01

    The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. In Experiment 1, a pure tone synchronized with a deformation of a rectangle induced a smaller auditory N1 than auditory-only presentations if the temporal occurrence of this audiovisual event was made predictable by two moving disks that touched the rectangle. Local autoregressive average source estimation indicated that this audiovisual interaction may be related to integrative processing in auditory areas. When the moving disks did not precede the audiovisual stimulus--making the onset unpredictable--there was no N1 reduction. In Experiment 2, the predictability of the leading visual signal was manipulated by introducing a temporal asynchrony between the audiovisual event and the collision of moving disks. Audiovisual events occurred either at the moment, before (too "early"), or after (too "late") the disks collided on the rectangle. When asynchronies varied from trial to trial--rendering the moving disks unreliable temporal predictors of the audiovisual event--the N1 reduction was abolished. These results demonstrate that the N1 suppression is induced by visual information that both precedes and reliably predicts audiovisual onset, without a necessary link to human action-related neural mechanisms.

  11. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  12. USING AUDIO-VISUAL MATERIALS IN THE ELEMENTARY CLASSROOM.

    ERIC Educational Resources Information Center

    FOSTER, I. OWEN

    This bulletin was prepared to promote the effective use of audiovisual materials in elementary schools. Chapters are grouped under the headings Philosophy, Resource Units, Audiovisual Materials in Subject Areas, and Effectiveness. Subject areas discussed are social studies, science, language arts, arithmetic, health and safety, and art and music.…

  13. Audiovisual Media and the Disabled. AV in Action 1.

    ERIC Educational Resources Information Center

    Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).

    Designed to provide information on public library services to the handicapped, this pamphlet contains case studies from three different countries on various aspects of the provision of audiovisual services to the disabled. The contents include: (1) "The Value of Audiovisual Materials in a Children's Hospital in Sweden" (Lis Byberg); (2)…

  14. Catalogs of Audiovisual Materials: A Guide to Government Sources.

    ERIC Educational Resources Information Center

    Dale, Doris Cruger

    This annotated bibliography lists 53 federally published catalogs and bibliographies which identify films and other audiovisual materials produced or sponsored by government agencies; some also include commercially produced audiovisual and/or print materials. Publications are listed alphabetically by government agency or department, and…

  16. Infant Perception of Audio-Visual Speech Synchrony

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2010-01-01

    Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…

  17. Neural Correlates of Audiovisual Integration of Semantic Category Information

    ERIC Educational Resources Information Center

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction reflects: processing of acoustical features or classification of the stimuli. To investigate this question, event-related potentials were recorded…

  1. Knowledge Generated by Audiovisual Narrative Action Research Loops

    ERIC Educational Resources Information Center

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  4. Guidelines for Audiovisual and Multimedia Materials in Libraries and Other Institutions. Audiovisual and Multimedia Section

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions (NJ1), 2004

    2004-01-01

    This set of guidelines, for audiovisual and multimedia materials in libraries of all kinds and other appropriate institutions, is the product of many years of consultation and collaborative effort. As early as 1972, The UNESCO (United Nations Educational, Scientific and Cultural Organization) Public Library Manifesto had stressed the need for…

  5. Multistage audiovisual integration of speech: dissociating identification and detection.

    PubMed

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  6. The impact of constructivist teaching strategies on the acquisition of higher order cognition and learning

    NASA Astrophysics Data System (ADS)

    Merrill, Alison Saricks

    The purpose of this quasi-experimental quantitative mixed design study was to compare the effectiveness of brain-based teaching strategies versus a traditional lecture format in the acquisition of higher order cognition as determined by test scores. A second purpose was to elicit student feedback about the two teaching approaches. The design was a 2 x 2 x 2 factorial design study with repeated measures on the last factor. The independent variables were type of student, teaching method, and a within group change over time. Dependent variables were a between group comparison of pre-test, post-test gain scores and a within and between group comparison of course examination scores. A convenience sample of students enrolled in medical-surgical nursing was used. One group (n=36) was made up of traditional students and the other group (n=36) consisted of second-degree students. Four learning units were included in this study. Pre- and post-tests were given on the first two units. Course examination scores from all four units were compared. In one cohort two of the units were taught via lecture format and two using constructivist activities. These methods were reversed for the other cohort. The conceptual basis for this study derives from neuroscience and cognitive psychology. Learning is defined as the growth of new dendrites. Cognitive psychologists view learning as a constructive activity in which new knowledge is built on an internal foundation of existing knowledge. Constructivist teaching strategies are designed to stimulate the brain's natural learning ability. There was a statistically significant difference based on type of teaching strategy (t = -2.078, df = 270, p = .039, d = .25) with higher mean scores on the examinations covering brain-based learning units. There was no statistical significance based on type of student. Qualitative data collection was conducted in an on-line forum at the end of the semester. Students had overall positive responses about the

  7. Lexical Learning in Bilingual Adults: The Relative Importance of Short-Term Memory for Serial Order and Phonological Knowledge

    ERIC Educational Resources Information Center

    Majerus, Steve; Poncelet, Martine; Van der Linden, Martial; Weekes, Brendan S.

    2008-01-01

    Studies of monolingual speakers have shown a strong association between lexical learning and short-term memory (STM) capacity, especially STM for serial order information. At the same time, studies of bilingual speakers suggest that phonological knowledge is the main factor that drives lexical learning. This study tested these two hypotheses…

  8. Learning to Order Words: A Connectionist Model of Heavy NP Shift and Accessibility Effects in Japanese and English

    ERIC Educational Resources Information Center

    Chang, Franklin

    2009-01-01

    Languages differ from one another and must therefore be learned. Processing biases in word order can also differ across languages. For example, heavy noun phrases tend to be shifted to late sentence positions in English, but to early positions in Japanese. Although these language differences suggest a role for learning, most accounts of these…

  9. The Natural Statistics of Audiovisual Speech

    PubMed Central

    Chandrasekaran, Chandramouli; Trubanova, Andrea; Stillittano, Sébastien; Caplier, Alice; Ghazanfar, Asif A.

    2009-01-01

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver. PMID:19609344

  10. The natural statistics of audiovisual speech.

    PubMed

    Chandrasekaran, Chandramouli; Trubanova, Andrea; Stillittano, Sébastien; Caplier, Alice; Ghazanfar, Asif A

    2009-07-01

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
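
    The analysis described in the two records above reduces to correlating a mouth-opening-area time series with the acoustic amplitude envelope inside the 2-7 Hz band. A minimal sketch of that kind of computation follows; the sampling rate, signal names, and data are hypothetical stand-ins, not the authors' pipeline.

```python
# Illustrative sketch only (not the authors' pipeline): correlate a
# mouth-opening-area time series with the acoustic amplitude envelope,
# restricted to the 2-7 Hz band discussed in the abstract. Signal names,
# sampling rate, and data are hypothetical stand-ins.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo=2.0, hi=7.0, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 100.0                              # common sampling rate in Hz (assumed)
mouth_area = np.random.rand(1000)       # stand-in for the tracked mouth area
audio = np.random.randn(1000)           # stand-in for the speech waveform

envelope = np.abs(hilbert(audio))       # amplitude envelope via Hilbert transform
r = np.corrcoef(bandpass(mouth_area, fs), bandpass(envelope, fs))[0, 1]
print(f"mouth-area vs. envelope correlation (2-7 Hz): {r:.2f}")
```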

  11. The Use of Audio-Visual Aids in Teaching: A Study in the Saudi Girls Colleges.

    ERIC Educational Resources Information Center

    Al-Sharhan, Jamal A.

    1993-01-01

    A survey of faculty in girls colleges in Riyadh, Saudi Arabia, investigated teaching experience, academic rank, importance of audiovisual aids, teacher training, availability of audiovisual centers, and reasons for not using audiovisual aids. Proposes changes to increase use of audiovisual aids: more training courses, more teacher release time,…

  12. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  13. Infant perception of audio-visual speech synchrony in familiar and unfamiliar fluent speech.

    PubMed

    Pons, Ferran; Lewkowicz, David J

    2014-06-01

    We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we tested a group of monolingual Spanish- and Catalan-learning 8-month-old infants to a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video where the audio stream now preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. Results indicated that in both experiments, infants detected a 666 and a 500 ms asynchrony. That is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared to previous results from infant studies with isolated audiovisual syllables, these results show that infants are more sensitive to A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infant response to linguistic input at this age.

  14. Cross-modal matching of audio-visual German and French fluent speech in infancy.

    PubMed

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life.

  15. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample consisted of 60 Persian-speaking 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set of 30 words presented orally by a speech-language pathologist. Audiovisual word perception scores were significantly higher than in the auditory-only condition for children with normal hearing (P<0.01) and cochlear implants (P<0.05); however, for children with hearing aids, there was no significant difference between word perception scores in the auditory-only and audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can therefore be applied as a clinical criterion to assess children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been efficient for them; i.e., if a child with hearing impairment who uses a CI or HA obtains higher scores for audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately owing to an effective CI or HA, one of the main factors in auditory habilitation.

  16. Audiovisual asynchrony detection and speech intelligibility in noise with moderate to severe sensorineural hearing impairment.

    PubMed

    Başkent, Deniz; Bazo, Danny

    2011-01-01

    The objective of this study is to explore the sensitivity to intermodal asynchrony in audiovisual speech with moderate to severe sensorineural hearing loss. Based on previous studies, two opposing expectations were an increase in sensitivity, as hearing-impaired listeners heavily rely on lipreading in daily life, and a reduction in sensitivity, as hearing-impaired listeners tend to be elderly and advanced age could potentially impair audiovisual integration. Adults with normal (N = 11, ages between 23 and 50 yrs) and impaired hearing (N = 11, ages between 54 and 81 yrs, the pure-tone average between 42 and 67 dB HL) participated in two experiments. In the first experiment, the synchrony judgments were recorded for varying intermodal time differences in audiovisual sentence recordings. In the second experiment, the intelligibility of audiovisual and audio-only speech was measured in speech-shaped noise, and correlations were explored between the synchrony window and intelligibility scores for individual listeners. Similar to previous studies, a sensitivity window on the order of a few hundred milliseconds was observed with all listeners. The average window shapes did not differ between normal-hearing and hearing-impaired groups; however, there was large individual variability. Individual windows were quantified by Gaussian curve fitting. Point of subjective simultaneity, a measure of window peak shift from the actual synchrony point, and full-width at half-maximum, a measure of window duration, were not correlated with participant's age or the degree of hearing loss. Points of subjective simultaneity were also not correlated with speech intelligibility scores. A moderate negative correlation that was significant at most conditions was observed between the full-width at half-maximum values and intelligibility scores. Contrary to either expectation per se, there was no indication of an effect of hearing impairment or age on the sensitivity to intermodal asynchrony in
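
    The individual-window analysis described above (Gaussian curve fitting, point of subjective simultaneity, full-width at half-maximum) can be illustrated with a short fit; the SOA grid and response proportions below are invented and do not reproduce the study's data.

```python
# Sketch of the individual-window analysis named above: fit a Gaussian to the
# proportion of "synchronous" responses across SOAs, then report the point of
# subjective simultaneity (peak location) and the full-width at half-maximum.
# The SOA grid and proportions are invented, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def gauss(soa, amp, pss, sigma):
    return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

# Convention assumed here: negative SOA = auditory lead, positive = visual lead.
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.90, 0.70, 0.35, 0.10])

(amp, pss, sigma), _ = curve_fit(gauss, soa, p_sync, p0=[1.0, 0.0, 150.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
print(f"PSS = {pss:.0f} ms, FWHM = {fwhm:.0f} ms")
```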

  17. Dissociating verbal and nonverbal audiovisual object processing

    PubMed Central

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled. PMID:19101025

  18. Alterations in audiovisual simultaneity perception in amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.

  19. Alterations in audiovisual simultaneity perception in amblyopia

    PubMed Central

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony. PMID:28598996

  20. Categorization of Natural Dynamic Audiovisual Scenes

    PubMed Central

    Rummukainen, Olli; Radun, Jenni; Virtanen, Toni; Pulkki, Ville

    2014-01-01

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database. PMID:24788808
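
    The two-dimensional perceptual map mentioned above is the kind of output a multidimensional scaling of the scene-by-scene dissimilarities would give. A loose sketch, assuming a precomputed dissimilarity matrix (random here, not the study's data):

```python
# Loose sketch only: a two-dimensional perceptual map from a precomputed
# scene-by-scene dissimilarity matrix via metric MDS. The matrix is random
# here; in the study it would come from the similarity categorization data.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_scenes = 19
d = rng.random((n_scenes, n_scenes))
dissim = (d + d.T) / 2.0              # symmetrize
np.fill_diagonal(dissim, 0.0)         # zero self-dissimilarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)    # one (x, y) coordinate per scene
print(coords.shape)                   # (19, 2)
```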

  1. Going Beyond a Mean-field Model for the Learning Cortex: Second-Order Statistics

    PubMed Central

    Steyn-Ross, Moira L.; Steyn-Ross, D. A.; Sleigh, J. W.

    2008-01-01

    Mean-field models of the cortex have been used successfully to interpret the origin of features on the electroencephalogram under situations such as sleep, anesthesia, and seizures. In a mean-field scheme, dynamic changes in synaptic weights can be considered through fluctuation-based Hebbian learning rules. However, because such implementations deal with population-averaged properties, they are not well suited to memory and learning applications where individual synaptic weights can be important. We demonstrate that, through an extended system of equations, the mean-field models can be developed further to look at higher-order statistics, in particular, the distribution of synaptic weights within a cortical column. This allows us to make some general conclusions on memory through a mean-field scheme. Specifically, we expect large changes in the standard deviation of the distribution of synaptic weights when fluctuation in the mean soma potentials are large, such as during the transitions between the “up” and “down” states of slow-wave sleep. Moreover, a cortex that has low structure in its neuronal connections is most likely to decrease its standard deviation in the weights of excitatory to excitatory synapses, relative to the square of the mean, whereas a cortex with strongly patterned connections is most likely to increase this measure. This suggests that fluctuations are used to condense the coding of strong (presumably useful) memories into fewer, but dynamic, neuron connections, while at the same time removing weaker (less useful) memories. PMID:19669541
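
    As a generic illustration of the idea that fluctuation-driven Hebbian updates reshape the distribution of synaptic weights, the toy simulation below applies a covariance-style rule and tracks the second-order statistic discussed above (standard deviation of the weights relative to the square of the mean). It is not the paper's mean-field model, and all parameters are arbitrary.

```python
# Toy illustration, not the paper's mean-field equations: a covariance-style
# Hebbian rule in which weight changes are driven by fluctuations of pre- and
# postsynaptic activity about their means, tracking std(w) / mean(w)**2,
# the second-order statistic discussed in the abstract. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_syn, eta, steps = 500, 1e-3, 2000
w = np.full(n_syn, 0.5)                             # initial synaptic weights

for _ in range(steps):
    pre = 1.0 + 0.2 * rng.standard_normal(n_syn)    # fluctuating presynaptic rates
    post = 1.0 + 0.2 * rng.standard_normal()        # fluctuating soma potential
    dw = eta * (pre - 1.0) * (post - 1.0)           # fluctuation (covariance) term
    w = np.clip(w + dw, 0.0, None)                  # keep weights non-negative

print(f"std(w) / mean(w)^2 = {w.std() / w.mean() ** 2:.3f}")
```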

  2. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
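
    The search procedures referred to above can be illustrated with a crude numeric minimization of a total-cost function in which per-unit recovery time follows a standard log-linear learning curve, t(n) = t1 * n^(-b). The cost function and parameters below are invented stand-ins, not the article's model:

```python
# Crude illustration of the kind of search such models require: minimize an
# invented total-cost function over the lot size Q, with per-unit recovery time
# following a standard log-linear learning curve t(n) = t1 * n**(-b). The cost
# function and every parameter are stand-ins, NOT the article's model.
import numpy as np

D, setup, hold = 1000.0, 50.0, 2.0        # annual demand, setup cost, holding cost
t1, b, labour = 0.5, 0.3, 20.0            # first-unit time (h), learning exponent, cost/h

def total_cost(Q):
    n = np.arange(1, int(Q) + 1)
    lot_time = np.sum(t1 * n ** (-b))     # production time of one lot, with learning
    return (D / Q) * (setup + labour * lot_time) + hold * Q / 2.0

candidates = np.arange(10, 1001)
best = candidates[np.argmin([total_cost(q) for q in candidates])]
print(f"approximately optimal lot size: {best}")
```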

  3. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    Noisy low-resolution (LR) images are common in real applications, but many existing image magnification algorithms cannot obtain good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm combines the advantages of regularization-based and learning-based methods. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while suppressing its noise. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries that are dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm also provides better visual quality on natural LR images.
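
    A heavily simplified sketch of such a two-step scheme is shown below: a total-variation regularization (denoising) step followed by interpolation. The learned, order-changed dictionary stage is omitted and the image is synthetic, so this only illustrates the overall structure, not the proposed algorithm.

```python
# Heavily simplified sketch of a two-step scheme in the spirit of the abstract:
# a total-variation denoising step followed by interpolation. The learned,
# order-changed dictionary stage is omitted, and the image is synthetic, so this
# only shows the overall structure, not the proposed algorithm.
import numpy as np
from scipy.ndimage import zoom
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
lr = np.clip(0.5 + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)  # noisy LR stand-in

denoised = denoise_tv_chambolle(lr, weight=0.1)   # step 1: TV regularization
hr = zoom(denoised, 2, order=3)                   # step 2 stand-in: bicubic upscaling
print(lr.shape, "->", hr.shape)                   # (64, 64) -> (128, 128)
```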

  4. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set of images that have specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, such as the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases different features (or multiview data) can be obtained, and how to utilize them efficiently is a challenge. The traditional schema of concatenating features from different views into a long vector is inappropriate, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information in a probabilistic framework. In comparison with existing strategies, our approach adopts the high-order distance obtained from a hypergraph, rather than pairwise distance, in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in exploiting the complementary information of multiview data. An alternating optimization is designed to solve the objective function of HD-MSL and obtain the combination coefficients for different views and the classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification.
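
    The core multiview idea, keeping one distance matrix per view and learning a combination weight for each, can be caricatured with the toy grid search below; it is not the hypergraph-based, probabilistic HD-MSL method itself, and all data are random.

```python
# Toy caricature of the multiview idea only (not the hypergraph-based,
# probabilistic HD-MSL method): keep one distance matrix per view and pick
# convex combination weights by minimizing leave-one-out 1-NN error on a grid.
import numpy as np

def knn_error(dists, weights, labels):
    D = sum(w * d for w, d in zip(weights, dists))   # weighted multiview distance
    np.fill_diagonal(D, np.inf)                      # exclude self-matches
    pred = labels[np.argmin(D, axis=1)]              # 1-NN prediction per sample
    return np.mean(pred != labels)

rng = np.random.default_rng(0)
n = 60
labels = rng.integers(0, 3, n)                       # random class labels (stand-in)
dists = [(m + m.T) / 2 for m in (rng.random((n, n)) for _ in range(2))]

best = min(((w, 1.0 - w) for w in np.linspace(0, 1, 11)),
           key=lambda ws: knn_error(dists, ws, labels))
print("learned view weights:", best)
```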

  5. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  6. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan

    PubMed Central

    De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T.

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918

  7. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.
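
    The fuzzy-logical model of perception (FLMP) used in the analyses above combines unimodal support values multiplicatively; for a two-alternative task the standard formulation is shown below, with made-up support values.

```python
# The standard two-alternative FLMP combination rule referenced above: unimodal
# "support" values are combined multiplicatively. The numbers are made up.
def flmp(a, v):
    """P(response | auditory support a, visual support v), with a, v in [0, 1]."""
    return (a * v) / (a * v + (1.0 - a) * (1.0 - v))

print(flmp(0.8, 0.6))   # consistent audio and visual support -> ~0.86
print(flmp(0.8, 0.2))   # conflicting visual evidence pulls the percept to 0.5
```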

  8. GRAPE - GIS Repetition Using Audio-Visual Repetition Units and its Learning Effectiveness

    NASA Astrophysics Data System (ADS)

    Niederhuber, M.; Brugger, S.

    2011-09-01

    A new audio-visual learning medium has been developed at the Department of Environmental Sciences at ETH Zurich (Switzerland), for use in geographical information sciences (GIS) courses. This new medium, presented in the form of Repetition Units, allows students to review and consolidate the most important learning concepts on an individual basis. The new material consists of: a) a short enhanced podcast (recorded and spoken slide show) with a maximum duration of 5 minutes, which focuses on only one important aspect of a lecture's theme; b) one or two relevant exercises, covering different cognitive levels of learning, with a maximum duration of 10 minutes; and c) solutions for the exercises. During a pilot phase in 2010, six Repetition Units were produced by the lecturers. Twenty more Repetition Units will be produced by our students during the fall semesters of 2011 and 2012. The project is accompanied by a 5-year study (2009 - 2013) that investigates learning success using the new material, focusing on the question of whether or not the new material helps to consolidate and refresh basic GIS knowledge. This will be analysed in longitudinal studies. Initial results indicate that the new medium helps to refresh knowledge, as the test groups scored higher than the control group. These results are encouraging and suggest that the new material, with its combination of short audio-visual podcasts and relevant exercises, helps to consolidate students' knowledge.

  9. Audiovisual Interval Size Estimation Is Associated with Early Musical Training

    PubMed Central

    Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134

  10. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    PubMed

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  11. Modeling Stock Order Flows and Learning Market-Making from Data

    DTIC Science & Technology

    2002-06-01

    and demand. In this paper, we demonstrate a novel method for modeling the market as a dynamic system and a reinforcement learning algorithm that learns...difficult dynamic system. Our reinforcement learning algorithm, based on likelihood ratios, is run on this partially-observable environment. We demonstrate learning results for two separate real stocks.
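
    As a pointer to the class of algorithm the report names, the sketch below performs a likelihood-ratio (REINFORCE-style) policy-gradient update on a toy two-action problem; it does not model order flows, the market-making environment, or the report's reward structure.

```python
# Minimal likelihood-ratio (REINFORCE-style) policy-gradient sketch on a toy
# two-action problem. It only illustrates the class of algorithm the report
# names; it does not model order flows or the market-making environment.
import numpy as np

rng = np.random.default_rng(0)
theta, lr = 0.0, 0.1                           # single policy parameter, step size

def p_action1(theta):
    return 1.0 / (1.0 + np.exp(-theta))        # logistic policy: P(action = 1)

for _ in range(2000):
    p = p_action1(theta)
    action = rng.random() < p                  # sample an action from the policy
    reward = 1.0 if action else 0.2            # toy reward: action 1 is better
    grad_logp = (1.0 - p) if action else -p    # d/dtheta of log pi(action)
    theta += lr * reward * grad_logp           # likelihood-ratio update

print(f"learned P(action = 1) = {p_action1(theta):.2f}")   # approaches 1.0
```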

  12. Community Organizational Learning: Case Studies Illustrating a Three-Dimensional Model of Levels and Orders of Change

    ERIC Educational Resources Information Center

    Perkins, Douglas D.; Bess, Kimberly D.; Cooper, Daniel G.; Jones, Diana L.; Armstead, Theresa; Speer, Paul W.

    2007-01-01

    We present a three-dimensional cube framework to help community organizational researchers and administrators think about an organization's learning and empowerment-related structures and processes in terms of first-order (incremental or ameliorative) and second-order (transformative) change at the individual, organizational, and community levels.…

  13. Effects of Higher-Order Cognitive Strategy Training on Gist-Reasoning and Fact-Learning in Adolescents

    PubMed Central

    Gamino, Jacquelyn F.; Chapman, Sandra B.; Hull, Elizabeth L.; Lyon, G. Reid

    2010-01-01

    Improving the reasoning skills of adolescents across the United States has become a major concern for educators and scientists who are dedicated to identifying evidence-based protocols to improve student outcome. This small sample randomized, control pilot study sought to determine the efficacy of higher-order cognitive training on gist-reasoning and fact-learning in an inner-city public middle school. The study compared gist-reasoning and fact-learning performances after training in a smaller sample when tested in Spanish, many of the students’ native language, versus English. The 54 eighth grade students who participated in this pilot study were enroled in an urban middle school, predominantly from lower socio-economic status families, and were primarily of minority descent. The students were randomized into one of three groups, one that learned cognitive strategies promoting abstraction of meaning, a group that learned rote memory strategies, or a control group to ascertain the impact of each program on gist-reasoning and fact-learning from text-based information. We found that the students who had cognitive strategy instruction that entailed abstraction of meaning significantly improved their gist-reasoning and fact-learning ability. The students who learned rote memory strategies significantly improved their fact-learning scores from a text but not gist-reasoning ability. The control group showed no significant change in either gist-reasoning or fact-learning ability. A trend toward significant improvement in overall reading scores for the group that learned to abstract meaning as well as a significant correlation between gist-reasoning ability and the critical thinking on a state-mandated standardized reading test was also found. There were no significant differences between English and Spanish performance of gist-reasoning and fact-learning. Our findings suggest that teaching higher-order cognitive strategies facilitates gist-reasoning ability and student

  14. Effects of higher-order cognitive strategy training on gist-reasoning and fact-learning in adolescents.

    PubMed

    Gamino, Jacquelyn F; Chapman, Sandra B; Hull, Elizabeth L; Lyon, G Reid

    2010-01-01

    Improving the reasoning skills of adolescents across the United States has become a major concern for educators and scientists who are dedicated to identifying evidence-based protocols to improve student outcome. This small sample randomized, control pilot study sought to determine the efficacy of higher-order cognitive training on gist-reasoning and fact-learning in an inner-city public middle school. The study compared gist-reasoning and fact-learning performances after training in a smaller sample when tested in Spanish, many of the students' native language, versus English. The 54 eighth grade students who participated in this pilot study were enroled in an urban middle school, predominantly from lower socio-economic status families, and were primarily of minority descent. The students were randomized into one of three groups, one that learned cognitive strategies promoting abstraction of meaning, a group that learned rote memory strategies, or a control group to ascertain the impact of each program on gist-reasoning and fact-learning from text-based information. We found that the students who had cognitive strategy instruction that entailed abstraction of meaning significantly improved their gist-reasoning and fact-learning ability. The students who learned rote memory strategies significantly improved their fact-learning scores from a text but not gist-reasoning ability. The control group showed no significant change in either gist-reasoning or fact-learning ability. A trend toward significant improvement in overall reading scores for the group that learned to abstract meaning as well as a significant correlation between gist-reasoning ability and the critical thinking on a state-mandated standardized reading test was also found. There were no significant differences between English and Spanish performance of gist-reasoning and fact-learning. Our findings suggest that teaching higher-order cognitive strategies facilitates gist-reasoning ability and student

  15. Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants

    PubMed Central

    Kopp, Franziska; Dietrich, Claudia

    2013-01-01

    Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071

  16. Sight and sound out of synch: fragmentation and renormalisation of audiovisual integration and subjective timing.

    PubMed

    Freeman, Elliot D; Ipser, Alberta; Palmbaha, Austra; Paunoiu, Diana; Brown, Peter; Lambert, Christian; Leff, Alex; Driver, Jon

    2013-01-01

    The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream-Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing

  17. National Conference on the Use of Audiovisuals in Medical Education, Proceedings (University of Alabama Medical Center, Birmingham, August 6-8, 1969).

    ERIC Educational Resources Information Center

    Alabama Univ., Birmingham. Medical Center.

    The 39 medical educators attended a 2-day conference to resolve some of the disparity which exists in the knowledge and utilization of audiovisual aids and to define the role of learning resource centers. Major presentations were: (1) "The Continuing Confusion in Communications" by J.F. Wolker, (2) "Visual Systems: Pro and Con" by R.S. Craig, (3)…

  18. The Effects of Audio-Visual Recorded and Audio Recorded Listening Tasks on the Accuracy of Iranian EFL Learners' Oral Production

    ERIC Educational Resources Information Center

    Drood, Pooya; Asl, Hanieh Davatgari

    2016-01-01

    The ways in which tasks in classrooms have developed and proceeded have received great attention in the field of language teaching and learning, in the sense that they draw learners' attention to competing features such as accuracy, fluency, and complexity. English audiovisual and audio recorded materials have been widely used by teachers and…

  19. Teleconferences and Audiovisual Materials in Earth Science Education

    NASA Astrophysics Data System (ADS)

    Cortina, L. M.

    2007-05-01

    Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoaca 04510 Mexico, MEXICO. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. However, in some cases these resources may go largely unused, and a number of factors may be cited, such as logistic problems, restricted internet and telecommunication service access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. Courses taught by teleconference require student and teacher effort without physical contact, but participants have access to multimedia that supports the presentation. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of natural phenomena integral to the Earth sciences. Cooperation with international partnerships, providing access to new materials, experiences, and field practices, will greatly add to our efforts. We will present specific examples of our experiences at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  20. The Effect of Number and Presentation Order of High-Constraint Sentences on Second Language Word Learning.

    PubMed

    Ma, Tengfei; Chen, Ran; Dunlap, Susan; Chen, Baoguo

    2016-01-01

    This paper presents the results of an experiment that investigated the effects of number and presentation order of high-constraint sentences on semantic processing of unknown second language (L2) words (pseudowords) through reading. All participants were Chinese native speakers who learned English as a foreign language. In the experiment, sentence constraint and order of different constraint sentences were manipulated in English sentences, as well as L2 proficiency level of participants. We found that the number of high-constraint sentences was supportive for L2 word learning except in the condition in which high-constraint exposure was presented first. Moreover, when the number of high-constraint sentences was the same, learning was significantly better when the first exposure was a high-constraint exposure. And no proficiency level effects were found. Our results provided direct evidence that L2 word learning benefited from high quality language input and first presentations of high quality language input.

  1. The Effect of Number and Presentation Order of High-Constraint Sentences on Second Language Word Learning

    PubMed Central

    Ma, Tengfei; Chen, Ran; Dunlap, Susan; Chen, Baoguo

    2016-01-01

    This paper presents the results of an experiment that investigated the effects of number and presentation order of high-constraint sentences on semantic processing of unknown second language (L2) words (pseudowords) through reading. All participants were Chinese native speakers who learned English as a foreign language. In the experiment, sentence constraint and order of different constraint sentences were manipulated in English sentences, as well as L2 proficiency level of participants. We found that the number of high-constraint sentences was supportive for L2 word learning except in the condition in which high-constraint exposure was presented first. Moreover, when the number of high-constraint sentences was the same, learning was significantly better when the first exposure was a high-constraint exposure. And no proficiency level effects were found. Our results provided direct evidence that L2 word learning benefited from high quality language input and first presentations of high quality language input. PMID:27695432

  2. The Audio-Visual Services in Canada and the United States. Comparative Study on the Administration of Audio-Visual Services in Advanced and Developing Countries. Part 3.

    ERIC Educational Resources Information Center

    Hyer, Anna L.

    As the third of a three-part comparative study on the administration of audiovisual services in advanced and developing countries, this UNESCO-funded report describes the educational systems of the United States and Canada, the audiovisual services at the local and state/provincial level, and the national audiovisual support services. Also…

  3. An audiovisual database of English speech sounds

    NASA Astrophysics Data System (ADS)

    Frisch, Stefan A.; Nikjeh, Dee Adams

    2003-10-01

    A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word initial, word medial, and word final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45 deg angle off center, and ultrasound video of the tongue in the mid-sagittal plane. The files contained in the database are suitable for examination by the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.

  4. When audiovisual correspondence disturbs visual processing.

    PubMed

    Hong, Sang Wook; Shim, Won Mok

    2016-05-01

    Multisensory integration is known to create a more robust and reliable perceptual representation of one's environment. Specifically, a congruent auditory input can make a visual stimulus more salient, consequently enhancing the visibility and detection of the visual target. However, it remains largely unknown whether a congruent auditory input can also impair visual processing. In the current study, we demonstrate that temporally congruent auditory input disrupts visual processing, consequently slowing down visual target detection. More importantly, this cross-modal inhibition occurs only when the contrast of visual targets is high. When the contrast of visual targets is low, enhancement of visual target detection is observed, consistent with the prediction based on the principle of inverse effectiveness (PIE) in cross-modal integration. The switch of the behavioral effect of audiovisual interaction from benefit to cost further extends the PIE to encompass the suppressive cross-modal interaction.
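
    The contrast-dependent benefit and cost described above are conventionally quantified with the multisensory enhancement index, ME = 100 * (AV - max(A, V)) / max(A, V). The detection rates below are hypothetical, chosen only to show how the sign of the index flips:

```python
# The contrast-dependent benefit and cost described above are often quantified
# with the multisensory enhancement index ME = 100 * (AV - max(A, V)) / max(A, V).
# The detection rates below are hypothetical, chosen only to show the sign flip.
def enhancement_index(av, a, v):
    best_unisensory = max(a, v)
    return 100.0 * (av - best_unisensory) / best_unisensory

print(enhancement_index(av=0.60, a=0.40, v=0.45))   # low contrast: positive (gain)
print(enhancement_index(av=0.88, a=0.95, v=0.30))   # high contrast: negative (cost)
```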

  5. Audio-visual recording in the surgery: do patients mind?

    PubMed Central

    Campbell, I. K.

    1982-01-01

    The results of questionnaires completed by 145 patients following audio-visual recording of their consultations are analysed. It is concluded that the technique is well accepted and non-intrusive. PMID:7143315

  6. Facilitating Second-Order Learning:...Speaking with Farmers in Scotland.

    ERIC Educational Resources Information Center

    Leach, G.; Leeuwis, C.

    1997-01-01

    Observations of and discussions with four Scottish farmers showed that learning processes were more effective than classic planning processes in dealing with change in farm management. Their stages of learning resembled stages of the strategic planning process. (SK)

  7. Facilitating Second-Order Learning:...Speaking with Farmers in Scotland.

    ERIC Educational Resources Information Center

    Leach, G.; Leeuwis, C.

    1997-01-01

    Observations of and discussions with four Scottish farmers showed that learning processes were more effective than classic planning processes in dealing with change in farm management. Their stages of learning resembled stages of the strategic planning process. (SK)

  8. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  9. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  10. Audio-visual assistance in co-creating transition knowledge

    NASA Astrophysics Data System (ADS)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.

    2013-04-01

    Earth system and climate impact research results point to the tremendous ecologic, economic and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes to our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition rather relies on pioneers who define new role models, on change agents who mainstream the concept of sufficiency, and on narratives that make different futures appealing. In order for the research community to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge has to be co-created by social science, natural science and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodologies, terminology and knowledge levels of those involved differ, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way, with different levels of detail that provide entry points for users with different requirements. Two examples illustrate the advantages and restrictions of the approach.

  11. The Role of Visual Learning in Improving Students' High-Order Thinking Skills

    ERIC Educational Resources Information Center

    Raiyn, Jamal

    2016-01-01

    Various concepts have been introduced to improve students' analytical thinking skills on the basis of problem-based learning (PBL). This paper introduces a new concept to increase students' analytical thinking skills through a visual learning strategy. Such a strategy has three fundamental components: a teacher, a student, and a learning process. The…

  12. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  13. The Development of Learning Model Based on Problem Solving to Construct High-Order Thinking Skill on the Learning Mathematics of 11th Grade in SMA/MA

    ERIC Educational Resources Information Center

    Syahputra, Edi; Surya, Edy

    2017-01-01

    This paper summarizes a postgraduate team study conducted with 11th-grade students. The objective of this study is to develop a problem-solving-based learning model that can build higher-order thinking in the learning of mathematics in SMA/MA. The subjects of the dissemination were 11th-grade students in SMA/MA in 3 kabupaten/kota in North Sumatera, namely:…

  14. Elevated audiovisual temporal interaction in patients with migraine without aura.

    PubMed

    Yang, Weiping; Chu, Bingqian; Yang, Jiajia; Yu, Yinghua; Wu, Jinglong; Yu, Shengyuan

    2014-06-24

    Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine.
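
    The abstract does not spell out the cumulative-distribution-function analysis, but a common way to quantify audiovisual integration from response times (not necessarily the exact procedure used in this study) is to compare the empirical CDF of multisensory response times against the race-model bound formed from the two unisensory CDFs. The sketch below illustrates that comparison with hypothetical response-time arrays.

```python
import numpy as np

def empirical_cdf(rts, grid):
    """Proportion of response times at or below each point on the time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, grid, side="right") / rts.size

def race_model_violation(rt_audio, rt_visual, rt_av, n_points=100):
    """Compare the audiovisual RT distribution with the race-model bound.

    Positive values of the returned difference mean audiovisual responses are
    faster than min(1, F_A(t) + F_V(t)) allows, which is commonly taken as
    evidence of multisensory integration rather than statistical facilitation.
    """
    all_rts = np.concatenate([rt_audio, rt_visual, rt_av])
    grid = np.linspace(all_rts.min(), all_rts.max(), n_points)
    f_a, f_v, f_av = (empirical_cdf(r, grid) for r in (rt_audio, rt_visual, rt_av))
    bound = np.minimum(1.0, f_a + f_v)
    return grid, f_av - bound

# Hypothetical response times (ms) for one participant.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(440, 70, 200)
rt_av = rng.normal(380, 55, 200)
t, violation = race_model_violation(rt_a, rt_v, rt_av)
print("maximum race-model violation:", violation.max().round(3))
```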

  15. Elevated audiovisual temporal interaction in patients with migraine without aura

    PubMed Central

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903

  16. Engineering the path to higher-order thinking in elementary education: A problem-based learning approach for STEM integration

    NASA Astrophysics Data System (ADS)

    Rehmat, Abeera Parvaiz

    As we progress into the 21st century, higher-order thinking skills and achievement in science and math are essential to meet the educational requirements of STEM careers. Educators need to think of innovative ways to engage and prepare students for current and future challenges while cultivating an interest among students in STEM disciplines. An instructional pedagogy that can capture students' attention, support interdisciplinary STEM practices, and foster higher-order thinking skills is problem-based learning. Problem-based learning, embedded in the social constructivist view of teaching and learning (Savery & Duffy, 1995), promotes self-regulated learning that is enhanced through exploration, cooperative social activity, and discourse (Fosnot, 1996). This quasi-experimental mixed methods study was conducted with 98 fourth grade students. The study utilized STEM content assessments, a standardized critical thinking test, a STEM attitude survey, a PBL questionnaire, and field notes from classroom observations to investigate the impact of problem-based learning on students' content knowledge, critical thinking, and their attitude towards STEM. Subsequently, it explored students' experiences of STEM integration in a PBL environment. The quantitative results revealed a significant difference between groups with regard to their content knowledge, critical thinking skills, and STEM attitude. From the qualitative results, three themes emerged: learning approaches, increased interaction, and design and engineering implementation. Across the overall data set, students described the PBL environment as highly interactive, prompting them to employ multiple approaches, including design and engineering, to solve the problem.

  17. Effects of the audiovisual conflict on auditory early processes.

    PubMed

    Scannella, Sébastien; Causse, Mickaël; Chauveau, Nicolas; Pastor, Josette; Dehais, Frédéric

    2013-07-01

    Auditory alarm misperception is one of the critical events that lead aircraft pilots to an erroneous flying decision. The rarity of these alarms associated with their possible unreliability may play a role in this misperception. In order to investigate this hypothesis, we manipulated both audiovisual conflict and sound rarity in a simplified landing task. Behavioral data and event related potentials (ERPs) of thirteen healthy participants were analyzed. We found that the presentation of a rare auditory signal (i.e., an alarm), incongruent with visual information, led to a smaller amplitude of the auditory N100 (i.e., less negative) compared to the condition in which both signals were congruent. Moreover, the incongruity between the visual information and the rare sound did not significantly affect reaction times, suggesting that the rare sound was neglected. We propose that the lower N100 amplitude reflects an early visual-to-auditory gating that depends on the rarity of the sound. In complex aircraft environments, this early effect might be partly responsible for auditory alarm insensitivity. Our results provide a new basis for future aeronautic studies and the development of countermeasures. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. The Application of Problem-Based Learning Strategy to Increase High Order Thinking Skills of Senior Vocational School Students

    ERIC Educational Resources Information Center

    Suprapto, Edy; Fahrizal; Priyono; Basri, K.

    2017-01-01

    This research applies and develops a problem-based learning strategy to increase the higher-order thinking skills of senior vocational school students. The research was motivated by the fact that the quality of the outputs of senior vocational schools has not met the competencies needed by stakeholders in the field, which has made…

  19. Website Analysis as a Tool for Task-Based Language Learning and Higher Order Thinking in an EFL Context

    ERIC Educational Resources Information Center

    Roy, Debopriyo

    2014-01-01

    Besides focusing on grammar, writing skills, and web-based language learning, researchers in "CALL" and second language acquisition have also argued for the importance of promoting higher-order thinking skills in ESL (English as Second Language) and EFL (English as Foreign Language) classrooms. There is solid evidence supporting the…

  20. The Impact of Learning Driven Constructs on the Perceived Higher Order Cognitive Skills Improvement: Multimedia vs. Text

    ERIC Educational Resources Information Center

    Bagarukayo, Emily; Weide, Theo; Mbarika, Victor; Kim, Min

    2012-01-01

    The study aims at determining the impact of learning driven constructs on Perceived Higher Order Cognitive Skills (HOCS) improvement when using multimedia and text materials. Perceived HOCS improvement is the attainment of HOCS based on the students' perceptions. The research experiment undertaken using a case study was conducted on 223 students…

  1. Website Analysis as a Tool for Task-Based Language Learning and Higher Order Thinking in an EFL Context

    ERIC Educational Resources Information Center

    Roy, Debopriyo

    2014-01-01

    Besides focusing on grammar, writing skills, and web-based language learning, researchers in "CALL" and second language acquisition have also argued for the importance of promoting higher-order thinking skills in ESL (English as Second Language) and EFL (English as Foreign Language) classrooms. There is solid evidence supporting the…

  2. Lessons learned from implementation of computerized provider order entry in 5 community hospitals: a qualitative study

    PubMed Central

    2013-01-01

    Background Computerized Provider Order Entry (CPOE) can improve patient safety, quality and efficiency, but hospitals face a host of barriers to adopting CPOE, ranging from resistance among physicians to the cost of the systems. In response to the incentives for meaningful use of health information technology and other market forces, hospitals in the United States are increasingly moving toward the adoption of CPOE. The purpose of this study was to characterize the experiences of hospitals that have successfully implemented CPOE. Methods We used a qualitative approach to observe clinical activities and capture the experiences of physicians, nurses, pharmacists and administrators at five community hospitals in Massachusetts (USA) that adopted CPOE in the past few years. We conducted formal, structured observations of care processes in diverse inpatient settings within each of the hospitals and completed in-depth, semi-structured interviews with clinicians and staff by telephone. After transcribing the audiorecorded interviews, we analyzed the content of the transcripts iteratively, guided by principles of the Immersion and Crystallization analytic approach. Our objective was to identify attitudes, behaviors and experiences that would constitute useful lessons for other hospitals embarking on CPOE implementation. Results Analysis of observations and interviews resulted in findings about the CPOE implementation process in five domains: governance, preparation, support, perceptions and consequences. Successful institutions implemented clear organizational decision-making mechanisms that involved clinicians (governance). They anticipated the need for education and training of a wide range of users (preparation). These hospitals deployed ample human resources for live, in-person training and support during implementation. Successful implementation hinged on the ability of clinical leaders to address and manage perceptions and the fear of change. Implementation proceeded

  3. Lessons learned from implementation of computerized provider order entry in 5 community hospitals: a qualitative study.

    PubMed

    Simon, Steven R; Keohane, Carol A; Amato, Mary; Coffey, Michael; Cadet, Bismarck; Zimlichman, Eyal; Bates, David W

    2013-06-24

    Computerized Provider Order Entry (CPOE) can improve patient safety, quality and efficiency, but hospitals face a host of barriers to adopting CPOE, ranging from resistance among physicians to the cost of the systems. In response to the incentives for meaningful use of health information technology and other market forces, hospitals in the United States are increasingly moving toward the adoption of CPOE. The purpose of this study was to characterize the experiences of hospitals that have successfully implemented CPOE. We used a qualitative approach to observe clinical activities and capture the experiences of physicians, nurses, pharmacists and administrators at five community hospitals in Massachusetts (USA) that adopted CPOE in the past few years. We conducted formal, structured observations of care processes in diverse inpatient settings within each of the hospitals and completed in-depth, semi-structured interviews with clinicians and staff by telephone. After transcribing the audiorecorded interviews, we analyzed the content of the transcripts iteratively, guided by principles of the Immersion and Crystallization analytic approach. Our objective was to identify attitudes, behaviors and experiences that would constitute useful lessons for other hospitals embarking on CPOE implementation. Analysis of observations and interviews resulted in findings about the CPOE implementation process in five domains: governance, preparation, support, perceptions and consequences. Successful institutions implemented clear organizational decision-making mechanisms that involved clinicians (governance). They anticipated the need for education and training of a wide range of users (preparation). These hospitals deployed ample human resources for live, in-person training and support during implementation. Successful implementation hinged on the ability of clinical leaders to address and manage perceptions and the fear of change. Implementation proceeded smoothly when institutions

  4. The role of order of practice in learning to handle an upper-limb prosthesis.

    PubMed

    Bouwsema, Hanneke; van der Sluis, Corry K; Bongers, Raoul M

    2008-09-01

    To determine which order of presentation of practice tasks had the greatest effect on using an upper-limb prosthetic simulator. A cohort analytic study. University laboratory. Healthy, able-bodied participants (N=72) were randomly assigned to 1 of 8 groups, each composed of 9 men and 9 women. Half of the participants (n=36) used a myoelectric simulator, and the other half (n=36) used a body-powered simulator. On day 1, participants performed 3 tasks in the acquisition phase. On day 2, participants performed a retention test and a transfer test. For each simulator, there were 4 groups of participants: group 1 practiced and was tested in random order, group 2 practiced in random order and was tested in blocked order, group 3 practiced in blocked order and was tested in random order, and group 4 practiced and was tested in blocked order. Outcome measures were initiation time, the time from the starting signal until the beginning of the movement, and movement time, the time from the beginning until the end of the movement. Movement times became faster during acquisition (P<.001). The blocked group had faster movement times (P=.009), and learning in this group extended over the complete acquisition phase (P<.001). However, this advantage disappeared in the retention and transfer tests. Compared with the myoelectric simulator, movements with the body-powered simulator were faster in the acquisition phase (P=.004) and the transfer test (P=.034). Performance in daily life with a prosthesis is indifferent to the structure in which training is set up. However, practicing in a blocked fashion leads to faster performance during practice; for novice trainees, it may therefore be advisable to practice part of the training tasks in blocks.

  5. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or presented with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Granularity and the Acquisition of Grammatical Gender: How Order-of-Acquisition Affects What Gets Learned

    ERIC Educational Resources Information Center

    Arnon, Inbal; Ramscar, Michael

    2012-01-01

    Why do adult language learners typically fail to acquire second languages with native proficiency? Does prior linguistic experience influence the size of the "units" adults attend to in learning, and if so, how does this influence what gets learned? Here, we examine these questions in relation to grammatical gender, which adult learners almost…

  7. A New World Order: Connecting Adult Developmental Theory to Learning Disabilities.

    ERIC Educational Resources Information Center

    Price, Lynda; Patton, James R.

    2003-01-01

    This article explores new connections between the current literature base on adult developmental theory and the field of learning disabilities. Emphasis is on theory and practice in self-determination and adult development. Implications for special education, vocational education, general education, and adult learning are discussed. (Contains…

  8. PBL-GIS in Secondary Geography Education: Does It Result in Higher-Order Learning Outcomes?

    ERIC Educational Resources Information Center

    Liu, Yan; Bui, Elisabeth N.; Chang, Chew-Hung; Lossman, Hans G.

    2010-01-01

    This article presents research on evaluating problem-based learning using GIS technology in a Singapore secondary school. A quasi-experimental research design was carried to test the PBL pedagogy (PBL-GIS) with an experimental group of students and compare their learning outcomes with a control group who were exposed to PBL but not GIS. The…

  9. A New World Order: Connecting Adult Developmental Theory to Learning Disabilities.

    ERIC Educational Resources Information Center

    Price, Lynda; Patton, James R.

    2003-01-01

    This article explores new connections between the current literature base on adult developmental theory and the field of learning disabilities. Emphasis is on theory and practice in self-determination and adult development. Implications for special education, vocational education, general education, and adult learning are discussed. (Contains…

  10. Granularity and the Acquisition of Grammatical Gender: How Order-of-Acquisition Affects What Gets Learned

    ERIC Educational Resources Information Center

    Arnon, Inbal; Ramscar, Michael

    2012-01-01

    Why do adult language learners typically fail to acquire second languages with native proficiency? Does prior linguistic experience influence the size of the "units" adults attend to in learning, and if so, how does this influence what gets learned? Here, we examine these questions in relation to grammatical gender, which adult learners almost…

  11. Enhancing students' higher order thinking skills through computer-based scaffolding in problem-based learning

    NASA Astrophysics Data System (ADS)

    Kim, Nam Ju

    This multiple paper dissertation addressed several issues in problem-based learning (PBL) through conceptual analysis, meta-analysis, and empirical research. PBL is characterized by ill-structured tasks, a self-directed learning process, and a combination of individual and cooperative learning activities. Students who lack content knowledge and problem-solving skills may struggle to address associated tasks that are beyond their current ability levels in PBL. This dissertation addressed a) scaffolding characteristics (i.e., scaffolding types, delivery method, customization) and their effects on students' perception of optimal challenge in PBL, b) the possibility of virtual learning environments for PBL, and c) the importance of information literacy for successful PBL learning. Specifically, this dissertation demonstrated the effectiveness of scaffolding customization (i.e., fading, adding, and fading/adding) in enhancing students' self-directed learning in PBL. Moreover, the effectiveness of scaffolding was greatest when its customization was self-selected rather than based on a fixed time interval or on student performance. This suggests that it might be important for students to take responsibility for their learning in PBL, and that individualized, just-in-time scaffolding can be one solution to address K-12 students' difficulties in improving problem-solving skills and adjusting to PBL.

  12. Granularity and the acquisition of grammatical gender: how order-of-acquisition affects what gets learned.

    PubMed

    Arnon, Inbal; Ramscar, Michael

    2012-03-01

    Why do adult language learners typically fail to acquire second languages with native proficiency? Does prior linguistic experience influence the size of the "units" adults attend to in learning, and if so, how does this influence what gets learned? Here, we examine these questions in relation to grammatical gender, which adult learners almost invariably struggle to master. We present a model of learning that predicts that exposure to smaller units (such as nouns) before exposure to larger linguistic units (such as sentences) can critically impair learning about predictive relations between units, such as that between a noun and its article. This prediction is then confirmed by a study of adult participants learning grammatical gender in an artificial language. Adults learned both nouns and their articles better when they first heard the nouns used in context with their articles prior to hearing the nouns individually, compared with learners who first heard the nouns in isolation, prior to hearing them used in context. In the light of these results, we discuss the role gender appears to play in language, the importance of meaning in artificial grammar learning, and the implications of this work for the structure of L2 training.

  13. Learning higher-order generalizations through free play: Evidence from 2- and 3-year-old children.

    PubMed

    Sim, Zi L; Xu, Fei

    2017-04-01

    Constructivist views of cognitive development often converge on 2 key points: (a) the child's goal is to build large conceptual structures for understanding the world, and (b) the child plays an active role in developing these structures. While previous research has demonstrated that young children show a precocious capacity for concept and theory building when they are provided with helpful data within training settings, and that they explore their environment in ways that may promote learning, it remains an open question whether young children are able to build larger conceptual structures using self-generated evidence, a form of active learning. In the current study, we examined whether children can learn higher-order generalizations (which form the basis for larger conceptual structures) through free play, and whether they can do so as effectively as when provided with relevant data. Results with 2- and 3-year-old children over 4 experiments indicate robust learning through free play, and generalization performance was comparable between free play and didactic conditions. Therefore, young children's self-directed learning supports the development of higher-order generalizations, laying the foundation for building larger conceptual structures and intuitive theories. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. A Bayesian model of biases in artificial language learning: the case of a word-order universal.

    PubMed

    Culbertson, Jennifer; Smolensky, Paul

    2012-01-01

    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners' inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization, Greenberg's Universal 18, which bans a particular word-order pattern relating nouns, adjectives, and numerals. Copyright © 2012 Cognitive Science Society, Inc.
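
    The model described above is hierarchical and fit to experimental data; the sketch below is only a minimal illustration (not the authors' model) of the core idea that a prior bias over word-order mixtures shifts what a learner reproduces away from the input proportions. The Beta-Binomial prior parameters and exposure counts are hypothetical.

```python
def posterior_share_of_pattern_a(n_a, n_b, prior_a, prior_b):
    """Posterior mean probability of producing word-order pattern A.

    A Beta(prior_a, prior_b) prior with prior_a > prior_b encodes a learner
    bias toward pattern A; n_a and n_b are counts of the two patterns
    observed in the input mixture.
    """
    return (prior_a + n_a) / (prior_a + prior_b + n_a + n_b)

# Two hypothetical learners see the same 70/30 input mixture (14 vs. 6 exposures)
# but hold opposite prior biases.
n_a, n_b = 14, 6
print("biased toward A:", round(posterior_share_of_pattern_a(n_a, n_b, 6, 2), 3))  # 0.714
print("biased toward B:", round(posterior_share_of_pattern_a(n_a, n_b, 2, 6), 3))  # 0.571
# The A-biased learner regularizes the mixture above the input proportion (0.7),
# while the opposite bias pulls production back toward pattern B.
```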

  15. Auditory and audiovisual inhibition of return.

    PubMed

    Spence, C; Driver, J

    1998-01-01

    Two experiments examined any inhibition-of-return (IOR) effects from auditory cues and from preceding auditory targets upon reaction times (RTs) for detecting subsequent auditory targets. Auditory RT was delayed if the preceding auditory cue was on the same side as the target, but was unaffected by the location of the auditory target from the preceding trial, suggesting that response inhibition for the cue may have produced its effects. By contrast, visual detection RT was inhibited by the ipsilateral presentation of a visual target on the preceding trial. In a third experiment, targets could be unpredictably auditory or visual, and no peripheral cues intervened. Both auditory and visual detection RTs were now delayed following an ipsilateral versus contralateral target in either modality on the preceding trial, even when eye position was monitored to ensure central fixation throughout. These data suggest that auditory target-target IOR arises only when target modality is unpredictable. They also provide the first unequivocal evidence for cross-modal IOR, since, unlike other recent studies (e.g., Reuter-Lorenz, Jha, & Rosenquist, 1996; Tassinari & Berlucchi, 1995; Tassinari & Campara, 1996), the present cross-modal effects cannot be explained in terms of response inhibition for the cue. The results are discussed in relation to neurophysiological studies and audiovisual links in saccade programming.

  16. Promoting Higher Order Thinking Skills via IPTEACES e-Learning Framework in the Learning of Information Systems Units

    ERIC Educational Resources Information Center

    Isaias, Pedro; Issa, Tomayess; Pena, Nuno

    2014-01-01

    When developing and working with various types of devices from a supercomputer to an iPod Mini, it is essential to consider the issues of Human Computer Interaction (HCI) and Usability. Developers and designers must incorporate HCI, Usability and user satisfaction in their design plans to ensure that systems are easy to learn, effective,…

  17. Promoting Higher Order Thinking Skills via IPTEACES e-Learning Framework in the Learning of Information Systems Units

    ERIC Educational Resources Information Center

    Isaias, Pedro; Issa, Tomayess; Pena, Nuno

    2014-01-01

    When developing and working with various types of devices from a supercomputer to an iPod Mini, it is essential to consider the issues of Human Computer Interaction (HCI) and Usability. Developers and designers must incorporate HCI, Usability and user satisfaction in their design plans to ensure that systems are easy to learn, effective,…

  18. The Effects of Variation on Learning Word Order Rules by Adults with and without Language-Based Learning Disabilities

    ERIC Educational Resources Information Center

    Grunow, Hope; Spaulding, Tammie J.; Gomez, Rebecca L.; Plante, Elena

    2006-01-01

    Non-adjacent dependencies characterize numerous features of English syntax, including certain verb tense structures and subject-verb agreement. This study utilized an artificial language paradigm to examine the contribution of item variability to the learning of these types of dependencies. Adult subjects with and without language-based learning…

  19. 37 CFR 202.22 - Acquisition and deposit of unpublished audio and audiovisual transmission programs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    37 CFR 202.22 (Patents, Trademarks, and Copyrights; Registration of Claims to Copyright) governs the acquisition and deposit of unpublished audio and audiovisual transmission programs, including copies of such programs acquired by the Library of Congress.

  20. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  1. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    36 CFR 1237.18 (Parks, Forests, and Public Property; National Archives and Records Administration records management regulations for audiovisual, cartographic, and related records) sets the environmental standards for audiovisual records storage; its general guidance includes keeping all film, including unscheduled audiovisual records, in cold storage.

  2. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    36 CFR 1237.18 (Parks, Forests, and Public Property; National Archives and Records Administration records management regulations for audiovisual, cartographic, and related records) sets the environmental standards for audiovisual records storage; its general guidance includes keeping all film, including unscheduled audiovisual records, in cold storage.

  3. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  4. Order of selection in vocational rehabilitation: implications for the transition from school to adult outcomes for youths with learning disabilities.

    PubMed

    Bellini, James; Royce-Davis, Joanna

    1999-01-01

    Interagency cooperation between special education and vocational rehabilitation (VR) is central to ensuring the continuity of services to young adults with disabilities who are in transition from school to adult living. However, the interface between special education and VR may be complicated by order of selection, an equally binding mandate in federal VR policy to provide priority services to individuals with the most severe disabilities. Because students with learning disabilities are typically perceived as having mild rather than severe disabilities, these youths are most at risk for falling through the cracks in the service landscape once they leave the school setting in states where the VR agency is implementing an order of selection procedure. This article identifies and discusses common impediments to collaborative transition planning for students with learning disabilities that may be intensified when the state VR agency is operating under an order of selection plan. Recommendations are provided to facilitate greater interagency cooperation among schools and VR agencies so that transition planning and implementation for students with learning disabilities is not subverted as a result of the order of selection mandate.

  5. The Black Record: A Selective Discography of Afro-Americana on Audio Discs Held by the Audio/Visual Department, John M. Olin Library.

    ERIC Educational Resources Information Center

    Dain, Bernice, Comp.; Nevin, David, Comp.

    This revised and expanded edition of the document is a cumulative, inclusive listing. A few items that are on order, either as new additions to the collection or as replacements, have been included. This discography is intended to serve primarily as a local user's guide. The call number preceding each entry is based on the Audio-Visual Department's own, unique…

  6. Audiovisual attention boosts letter-speech sound integration.

    PubMed

    Mittag, Maria; Alho, Kimmo; Takegata, Rika; Makkonen, Tommi; Kujala, Teija

    2013-10-01

    We studied attention effects on the integration of written and spoken syllables in fluent adult readers by using event-related brain potentials. Auditory consonant-vowel syllables, including consonant and frequency changes, were presented in synchrony with written syllables or their scrambled images. Participants responded to longer-duration auditory targets (auditory attention), longer-duration visual targets (visual attention), longer-duration auditory and visual targets (audiovisual attention), or counted backwards mentally. We found larger negative responses for spoken consonant changes when they were accompanied by written syllables than when they were accompanied by scrambled text. This effect occurred at an early latency (∼ 140 ms) during audiovisual attention and later (∼ 200 ms) during visual attention. Thus, audiovisual attention boosts the integration of speech sounds and letters. Copyright © 2013 Society for Psychophysiological Research.

  7. Audio-visual temporal perception in children with restored hearing.

    PubMed

    Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David

    2017-05-01

    It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.

  8. Neural correlates of audiovisual speech processing in a second language.

    PubMed

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  9. Learning to represent spatial transformations with factored higher-order Boltzmann machines.

    PubMed

    Memisevic, Roland; Hinton, Geoffrey E

    2010-06-01

    To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
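
    As a rough illustration of the factored three-way interaction described above (parameter names, shapes, and the logistic hidden units are assumptions for this sketch, not the authors' code), the snippet below infers hidden "transformation" units from an image pair: each factor projects both images through a learned filter, and the product of the two projections gates the hidden units, approximating the full interaction tensor with a sum of rank-one factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_factors, n_hidden = 64, 32, 16

# One filter per factor for each image, plus factor-to-hidden weights.
W_x = rng.normal(scale=0.1, size=(n_pixels, n_factors))  # filters applied to image 1
W_y = rng.normal(scale=0.1, size=(n_pixels, n_factors))  # filters applied to image 2
W_h = rng.normal(scale=0.1, size=(n_hidden, n_factors))  # factor-to-hidden weights

def hidden_probabilities(x, y):
    """Infer hidden 'transformation' units from an image pair (x, y).

    Each factor f contributes (x . W_x[:, f]) * (y . W_y[:, f]); summing these
    products through W_h replaces the cubic three-way interaction tensor with
    a low-rank sum of factors.
    """
    factor_x = x @ W_x                     # shape (n_factors,)
    factor_y = y @ W_y                     # shape (n_factors,)
    pre_activation = W_h @ (factor_x * factor_y)
    return 1.0 / (1.0 + np.exp(-pre_activation))  # logistic hidden units

# Hypothetical image pair: a random patch and a one-pixel shifted copy.
x = rng.normal(size=n_pixels)
y = np.roll(x, 1)
print(hidden_probabilities(x, y).round(2))
```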

  10. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute the first step towards the perspective to exploit multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate

  11. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
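
    For reference, the root mean square error quoted above can be computed as in the sketch below; the respiratory traces and the guided/unguided RMSE values are hypothetical, and the kernel density estimation predictor itself is not reproduced here.

```python
import numpy as np

def rmse(real, predicted):
    """Root mean square error between measured and predicted respiratory traces."""
    real, predicted = np.asarray(real, dtype=float), np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((real - predicted) ** 2)))

# Hypothetical 1-D abdominal-wall displacement traces sampled at 30 Hz for 30 s.
t = np.linspace(0.0, 30.0, 900)
real = np.sin(2 * np.pi * t / 4.0)                # ~4 s breathing period
predicted = np.sin(2 * np.pi * (t - 0.4) / 4.0)   # prediction lagging by 400 ms
print("RMSE:", round(rmse(real, predicted), 3))

# Relative improvement attributed to AV biofeedback (illustrative numbers only).
rmse_unguided, rmse_guided = 0.62, 0.45
print("reduction: {:.0f}%".format(100 * (rmse_unguided - rmse_guided) / rmse_unguided))
```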

  12. Prefrontal Neuronal Responses during Audiovisual Mnemonic Processing

    PubMed Central

    Hwang, Jaewon

    2015-01-01

    During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. For the nonmatch conditions, a change in the auditory component (vocalization), the visual component (face), or both components was detected. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is not only involved in the perceptual processing of faces and vocalizations but also in their mnemonic processing. PMID:25609614

  13. Audiovisual biofeedback improves motion prediction accuracy.

    PubMed

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

    The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion management techniques affected by system

  14. Temporal causal inference with stochastic audiovisual sequences.

    PubMed

    Locke, Shannon M; Landy, Michael S

    2017-01-01

    Integration of sensory information across multiple senses is most likely to occur when signals are spatiotemporally coupled. Yet, recent research on audiovisual rate discrimination indicates that random sequences of light flashes and auditory clicks are integrated optimally regardless of temporal correlation. This may be due to 1) temporal averaging rendering temporal cues less effective; 2) difficulty extracting causal-inference cues from rapidly presented stimuli; or 3) task demands prompting integration without concern for the spatiotemporal relationship between the signals. We conducted a rate-discrimination task (Exp 1), using slower, more random sequences than previous studies, and a separate causal-judgement task (Exp 2). Unisensory and multisensory rate-discrimination thresholds were measured in Exp 1 to assess the effects of temporal correlation and spatial congruence on integration. The performance of most subjects was indistinguishable from optimal for spatiotemporally coupled stimuli, and generally sub-optimal in other conditions, suggesting observers used a multisensory mechanism that is sensitive to both temporal and spatial causal-inference cues. In Exp 2, subjects reported whether temporally uncorrelated (but spatially co-located) sequences were perceived as sharing a common source. A unified percept was affected by click-flash pattern similarity and the maximum temporal offset between individual clicks and flashes, but not by the proportion of synchronous click-flash pairs. A simulation analysis revealed that the stimulus-generation algorithms of previous studies are likely responsible for the observed integration of temporally independent sequences. By combining results from Exps 1 and 2, we found better rate-discrimination performance for sequences that are more likely to be integrated than those that are not. Our results support the principle that multisensory stimuli are optimally integrated when spatiotemporally coupled, and provide insight

  15. Reward expectation influences audiovisual spatial integration.

    PubMed

    Bruns, Patrick; Maiworm, Mario; Röder, Brigitte

    2014-08-01

    In order to determine the spatial location of an object that is simultaneously seen and heard, the brain assigns higher weights to the sensory inputs that provide the most reliable information. For example, in the well-known ventriloquism effect, the perceived location of a sound is shifted toward the location of a concurrent but spatially misaligned visual stimulus. This perceptual illusion can be explained by the usually much higher spatial resolution of the visual system as compared to the auditory system. Recently, it has been demonstrated that this cross-modal binding process is not fully automatic, but can be modulated by emotional learning. Here we tested whether cross-modal binding is similarly affected by motivational factors, as exemplified by reward expectancy. Participants received a monetary reward for precise and accurate localization of brief auditory stimuli. Auditory stimuli were accompanied by task-irrelevant, spatially misaligned visual stimuli. Thus, the participants' motivational goal of maximizing their reward was put in conflict with the spatial bias of auditory localization induced by the ventriloquist situation. Crucially, the amounts of expected reward differed between the two hemifields. As compared to the hemifield associated with a low reward, the ventriloquism effect was reduced in the high-reward hemifield. This finding suggests that reward expectations modulate cross-modal binding processes, possibly mediated via cognitive control mechanisms. The motivational significance of the stimulus material, thus, constitutes an important factor that needs to be considered in the study of top-down influences on multisensory integration.

  16. Audiovisual congruency and incongruency effects on auditory intensity discrimination.

    PubMed

    Guo, Xiaoli; Li, Xuan; Ge, Xiaoli; Tong, Shanbao

    2015-01-01

    This study used an S1-S2 matching paradigm to investigate the influences of visual (size) change on auditory intensity discrimination. Behavioral results showed that subjects made more errors and spent more time discriminating changes in auditory intensity when they were accompanied by an incongruent visual change, while performance for congruent audiovisual stimuli was better, especially when the auditory stimulus changed. Event-related potential difference waves revealed that audiovisual interactions for multimodal mismatched information processing activated the right frontal and left centro-parietal cortices around 300-400 ms post S1-onset. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials, and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentations. Identification accuracy of those words produced by two talkers was also assessed. During the pretest, the accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  18. Rhesus Monkeys (Macaca Mulatta) Maintain Learning Set Despite Second-Order Stimulus-Response Spatial Discontiguity

    ERIC Educational Resources Information Center

    Beran, Michael J.; Washburn, David A.; Rumbaugh, Duane M.

    2007-01-01

    In many discrimination-learning tests, spatial separation between stimuli and response loci disrupts performance in rhesus macaques. However, monkeys are unaffected by such stimulus-response spatial discontiguity when responses occur through joystick-based computerized movement of a cursor. To examine this discrepancy, five monkeys were tested on…

  19. Authentic Role-Playing as Situated Learning: Reframing Teacher Education Methodology for Higher-Order Thinking

    ERIC Educational Resources Information Center

    Leaman, Lori Hostetler; Flanagan, Toni Michele

    2013-01-01

    This article draws from situated learning theory, teacher education research, and the authors' collaborative self-study to propose a teacher education pedagogy that may help to bridge the theory-into-practice gap for preservice teachers. First, we review the Interstate Teacher Assessment and Support Consortium standards to confirm the call for…

  20. Complementary lower-level and higher-order systems underpin imitation learning.

    PubMed

    Andrew, Matthew; Bennett, Simon J; Elliott, Digby; Hayes, Spencer J

    2016-04-01

    We examined whether the temporal representation developed during motor training with reduced-frequency knowledge of results (KR; feedback available on every other trial) was transferred to an imitation learning task. To this end, four groups first practised a three-segment motor sequence task with different KR protocols. Two experimental groups received reduced-frequency KR, one group received high-frequency KR (feedback available on every trial), and one received no-KR. Compared to the no-KR group, the groups that received KR learned the temporal goal of the movement sequence, as evidenced by increased accuracy and consistency across training. Next, all groups learned a single-segment movement that had the same temporal goal as the motor sequence task but required the imitation of biological and nonbiological motion kinematics. Kinematic data showed that whilst all groups imitated biological motion kinematics, the two experimental reduced-frequency KR groups were on average ∼ 800 ms more accurate at imitating movement time than the high-frequency KR and no-KR groups. The interplay between learning biological motion kinematics and the transfer of temporal representation indicates imitation involves distinct, but complementary lower-level sensorimotor and higher-level cognitive processing systems.

  1. Pupil Perceptions of Learning with Artists: A New Order of Experience?

    ERIC Educational Resources Information Center

    Burnard, Pamela; Swann, Mandy

    2010-01-01

    For many years schools have employed visiting artists to work with pupils on project-based activities. While there is no lack of evidence of the capacity of some artists to motivate pupils, there is little extant research that identifies how pupils describe their experience of learning with artists who champion contemporary arts practice. This…

  2. Rhesus Monkeys (Macaca Mulatta) Maintain Learning Set Despite Second-Order Stimulus-Response Spatial Discontiguity

    ERIC Educational Resources Information Center

    Beran, Michael J.; Washburn, David A.; Rumbaugh, Duane M.

    2007-01-01

    In many discrimination-learning tests, spatial separation between stimuli and response loci disrupts performance in rhesus macaques. However, monkeys are unaffected by such stimulus-response spatial discontiguity when responses occur through joystick-based computerized movement of a cursor. To examine this discrepancy, five monkeys were tested on…

  3. Assessment of Student Learning in Virtual Spaces, Using Orders of Complexity in Levels of Thinking

    ERIC Educational Resources Information Center

    Capacho, Jose

    2017-01-01

    This paper aims at showing a new methodology to assess student learning in virtual spaces supported by Information and Communications Technology-ICT. The methodology is based on the Conceptual Pedagogy Theory, and is supported both on knowledge instruments (KI) and intelectual operations (IO). KI are made up of teaching materials embedded in the…

  4. Statistical learning of an auditory sequence and reorganization of acquired knowledge: A time course of word segmentation and ordering.

    PubMed

    Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato

    2017-01-27

    Previous neural studies have supported the hypothesis that statistical learning mechanisms are used broadly across different domains such as language and music. However, these studies have only investigated a single aspect of statistical learning at a time, such as recognizing word boundaries or learning word order patterns. In this study, we neurally investigated how the two levels of statistical learning for recognizing word boundaries and word ordering could be reflected in neuromagnetic responses and how acquired statistical knowledge is reorganised when the syntactic rules are revised. Neuromagnetic responses to the Japanese-vowel sequence (a, e, i, o, and u), presented every 0.45 s, were recorded from 14 right-handed Japanese participants. The vowel order was constrained by a Markov stochastic model such that five nonsense words (aue, eao, iea, oiu, and uoi) were chained with an either-or rule: the probability of the forthcoming word was statistically defined (80% for one word; 20% for the other word) by the most recent two words. All of the word transition probabilities (80% and 20%) were switched in the middle of the sequence. In the first and second quarters of the sequence, the neuromagnetic responses to the words that appeared with higher transitional probability were significantly reduced compared with those that appeared with a lower transitional probability. After switching the word transition probabilities, the response reduction was replicated in the last quarter of the sequence. The responses to the final vowels in the words were significantly reduced compared with those to the initial vowels in the last quarter of the sequence. The results suggest that both within-word and between-word statistical learning are reflected in neural responses. The present study supports the hypothesis that listeners learn larger structures such as phrases first, and they subsequently extract smaller structures, such as words, from the learned phrases. The present
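
    As a hedged illustration of the stimulus design summarized above (not the authors' code), the sketch below chains the five nonsense words with an either-or rule so that each successor appears with 80% or 20% probability. For brevity it conditions on only the most recent word, whereas the study conditioned on the two most recent words, and the particular successor pairings are placeholder assumptions since the abstract does not list them.

    import random

    WORDS = ["aue", "eao", "iea", "oiu", "uoi"]

    # Placeholder either-or rule keyed on the previous word only; each entry maps
    # a context to (likely successor, unlikely successor). The pairings are assumptions.
    TRANSITIONS = {
        "aue": ("eao", "iea"),
        "eao": ("iea", "oiu"),
        "iea": ("oiu", "uoi"),
        "oiu": ("uoi", "aue"),
        "uoi": ("aue", "eao"),
    }

    def generate_sequence(n_words, p_high=0.8, seed=0):
        """Chain nonsense words so each successor follows with 80%/20% probability."""
        rng = random.Random(seed)
        seq = [rng.choice(WORDS)]
        for _ in range(n_words - 1):
            likely, unlikely = TRANSITIONS[seq[-1]]
            seq.append(likely if rng.random() < p_high else unlikely)
        return seq

    # The study presented one vowel every 0.45 s; here we only generate the symbol stream.
    vowel_stream = "".join(generate_sequence(400))
    print(vowel_stream[:30])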

  5. Evaluating the influence of the 'unity assumption' on the temporal perception of realistic audiovisual stimuli.

    PubMed

    Vatakis, Argiro; Spence, Charles

    2008-01-01

    Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.

  6. Higher Order Thinking Skills among Secondary School Students in Science Learning

    ERIC Educational Resources Information Center

    Saido, Gulistan Mohammed; Siraj, Saedah; Bin Nordin, Abu Bakar; Al Amedy, Omed Saadallah

    2015-01-01

    A central goal of science education is to help students to develop their higher order thinking skills to enable them to face the challenges of daily life. Enhancing students' higher order thinking skills is the main goal of the Kurdish Science Curriculum in the Iraqi-Kurdistan region. This study aimed at assessing 7th grade students' higher order…

  7. Assessment of Higher Order Thinking Skills. Current Perspectives on Cognition, Learning and Instruction

    ERIC Educational Resources Information Center

    Schraw, Gregory, Ed.; Robinson, Daniel H., Ed.

    2011-01-01

    This volume examines the assessment of higher order thinking skills from the perspectives of applied cognitive psychology and measurement theory. The volume considers a variety of higher order thinking skills, including problem solving, critical thinking, argumentation, decision making, creativity, metacognition, and self-regulation. Fourteen…

  8. Authentic Instruction for 21st Century Learning: Higher Order Thinking in an Inclusive School

    ERIC Educational Resources Information Center

    Preus, Betty

    2012-01-01

    The author studied a public junior high school identified as successfully implementing authentic instruction. Such instruction emphasizes higher order thinking, deep knowledge, substantive conversation, and value beyond school. To determine in what ways higher order thinking was fostered both for students with and without disabilities, the author…

  10. Classroom Order and Student Learning in Late Elementary School: A Multilevel Transactional Model of Achievement Trajectories

    ERIC Educational Resources Information Center

    Gaskins, Clare S.; Herres, Joanna; Kobak, Roger

    2012-01-01

    This study examines the association between classroom order in 4th and 5th grades and student achievement growth over a school year. A three level transactional model tested the effects of classroom order on students' rates of growth in math and reading during the school year controlling for starting achievement levels, student risk factors, and…

  11. Summary of Findings and Recommendations on Federal Audiovisual Activities.

    ERIC Educational Resources Information Center

    Lissit, Robert; And Others

    At the direction of President Carter, a year-long study of government audiovisual programs was conducted out of the Office of Telecommunications Policy in the Executive Office of the President. The programs in 16 departments and independent agencies, and the departments of the Army, Navy, and Air Force have been reviewed to identify the scope of…

  12. Selected Audio-Visual Materials for Consumer Education. [New Version.

    ERIC Educational Resources Information Center

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  13. Audio-Visual Equipment Depreciation. RDU-75-07.

    ERIC Educational Resources Information Center

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  14. Recent Audio-Visual Materials on the Soviet Union.

    ERIC Educational Resources Information Center

    Clarke, Edith Campbell

    1981-01-01

    Identifies and describes audio-visual materials (films, filmstrips, and audio cassette tapes) about the Soviet Union which have been produced since 1977. For each entry, information is presented on title, time required, date of release, cost (purchase and rental), and an abstract. (DB)

  15. Preference for Audiovisual Speech Congruency in Superior Temporal Cortex.

    PubMed

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Auditory speech perception can be altered by concurrent visual information. The superior temporal cortex is an important combining site for this integration process. This area was previously found to be sensitive to audiovisual congruency. However, the direction of this congruency effect (i.e., stronger or weaker activity for congruent compared to incongruent stimulation) has been more equivocal. Here, we used fMRI to look at the neural responses of human participants during the McGurk illusion--in which auditory /aba/ and visual /aga/ inputs are fused into a perceived /ada/--in a large homogeneous sample of participants who consistently experienced this illusion. This enabled us to compare the neuronal responses during congruent audiovisual stimulation with those during incongruent audiovisual stimulation leading to the McGurk illusion, while avoiding the possible confounding factor of sensory surprise that can occur when McGurk stimuli are only occasionally perceived. We found larger activity for congruent audiovisual stimuli than for incongruent (McGurk) stimuli in bilateral superior temporal cortex, extending into the primary auditory cortex. This finding suggests that superior temporal cortex responds preferentially when auditory and visual input support the same representation.

  16. Audiovisual Integration in Noise by Children and Adults

    ERIC Educational Resources Information Center

    Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G.; Innes-Brown, Hamish; Shivdasani, Mohit N.; Paolini, Antonio G.

    2010-01-01

    The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-,…

  17. Producing Slide and Tape Presentations: Readings from "Audiovisual Instruction"--4.

    ERIC Educational Resources Information Center

    Hitchens, Howard, Ed.

    Designed to serve as a reference and source of ideas on the use of slides in combination with audiocassettes for presentation design, this book of readings from Audiovisual Instruction magazine includes three papers providing basic tips on putting together a presentation, five articles describing techniques for improving the visual images, five…

  18. A Guide to the Literature on Audiovisual Instruction.

    ERIC Educational Resources Information Center

    Dale, Edgar; Belland, John

    The philosophical overview which introduces this review of the literature on audiovisual instruction concentrates on the historical background of the field and its place within the broader fields of communication, education, and communication theory and research. A selected bibliography is provided for the major areas in instructional technology:…

  19. Adaptation to audiovisual asynchrony modulates the speeded detection of sound

    PubMed Central

    Navarra, Jordi; Hartcher-O'Brien, Jessica; Piazza, Elise; Spence, Charles

    2009-01-01

    The brain adapts to asynchronous audiovisual signals by reducing the subjective temporal lag between them. However, it is currently unclear which sensory signal (visual or auditory) shifts toward the other. According to the idea that the auditory system codes temporal information more precisely than the visual system, one should expect to find some temporal shift of vision toward audition (as in the temporal ventriloquism effect) as a result of adaptation to asynchronous audiovisual signals. Given that visual information gives a more exact estimate of the time of occurrence of distal events than auditory information (due to the fact that the time of arrival of visual information regarding an external event is always closer to the time at which this event occurred), the opposite result could also be expected. Here, we demonstrate that participants' speeded reaction times (RTs) to auditory (but, critically, not visual) stimuli are altered following adaptation to asynchronous audiovisual stimuli. After receiving “baseline” exposure to synchrony, participants were exposed either to auditory-lagging asynchrony (VA group) or to auditory-leading asynchrony (AV group). The results revealed that RTs to sounds became progressively faster (in the VA group) or slower (in the AV group) as participants' exposure to asynchrony increased, thus providing empirical evidence that speeded responses to sounds are influenced by exposure to audiovisual asynchrony. PMID:19458252

  1. Summary of Findings and Recommendations on Federal Audiovisual Activities.

    ERIC Educational Resources Information Center

    Lissit, Robert; And Others

    At the direction of President Carter, a year-long study of government audiovisual programs was conducted out of the Office of Telecommunications Policy in the Executive Office of the President. The programs in 16 departments and independent agencies, and the departments of the Army, Navy, and Air Force have been reviewed to identify the scope of…

  2. Audiovisual Aids and Techniques in Managerial and Supervisory Training.

    ERIC Educational Resources Information Center

    Rigg, Robinson P.

    An attempt is made to show the importance of modern audiovisual (AV) aids and techniques to management training. The first two chapters give the background to the present situation facing the training specialist. Chapter III considers the AV aids themselves in four main groups: graphic materials, display equipment which involves projection, and…

  3. Guide to Audiovisual Terminology. Product Information Supplement, Number 6.

    ERIC Educational Resources Information Center

    Trzebiatowski, Gregory, Ed.

    1968-01-01

    The terms appearing in this glossary have been specifically selected for use by educators from a larger text, which was prepared by the Commission on Definition and Terminology of the Department of Audiovisual Instruction of the National Education Association. Specialized areas covered in the glossary include audio reproduction, audiovisual…

  4. Facilitating Personality Change with Audiovisual Self-confrontation and Interviews.

    ERIC Educational Resources Information Center

    Alker, Henry A.; And Others

    Two studies are reported, each of which achieves personality change with both audiovisual self-confrontation (AVSC) and supportive, nondirective interviews. The first study used Ericksonian identity achievement as a dependent variable. Sixty-one male subjects were measured using Anne Constantinople's inventory. The results of this study…

  5. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    ERIC Educational Resources Information Center

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  6. Neural Development of Networks for Audiovisual Speech Comprehension

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  7. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  8. Auditory Event-Related Potentials (ERPs) in Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Pilling, Michael

    2009-01-01

    Purpose: It has recently been reported (e.g., V. van Wassenhove, K. W. Grant, & D. Poeppel, 2005) that audiovisual (AV) presented speech is associated with an N1/P2 auditory event-related potential (ERP) response that is lower in peak amplitude compared with the responses associated with auditory only (AO) speech. This effect was replicated.…

  9. Audio-Visual Training in Children with Reading Disabilities

    ERIC Educational Resources Information Center

    Magnan, Annie; Ecalle, Jean

    2006-01-01

    This study tested the effectiveness of audio-visual training in the discrimination of the phonetic feature of voicing on the recognition of written words by young children deemed to at risk of dyslexia (experiment 1) as well as on dyslexic children's phonological skills (experiment 2). In addition, the third experiment studied the effectiveness of…

  10. Audio-Visual Space Reorganization Study. RDU-75-05.

    ERIC Educational Resources Information Center

    Baker, Martha

    Space layout and work flow patterns in the Audiovisual Center at Purdue University were studied with respect to effective space utilization and the need for planning space requirements in relationship to the activities being performed. Space and work areas were reorganized to facilitate the flow of work and materials between areas, and equipment…

  11. Audiovisual Holdings of the Presidential Libraries. Preliminary Draft.

    ERIC Educational Resources Information Center

    National Archives and Records Service (GSA), Washington, DC.

    The Presidential Libraries are administered by the National Archives and Records Service and contain many audiovisual materials obtained from a variety of sources--television networks, local television stations, federal agencies, radio networks, private citizens, newsreel companies, private businesses, special interest groups, news photo agencies,…

  12. PRECIS for Subject Access in a National Audiovisual Information System.

    ERIC Educational Resources Information Center

    Bidd, Donald; And Others

    1986-01-01

    This overview of PRECIS indexing system use by the National Film Board of Canada covers reasons for its choice, challenge involved in subject analysis and indexing of audiovisual documents, the methodology and software used to process PRECIS records, the resulting catalog subject indexes, and user reaction. Twenty-one references are cited. (EJS)

  14. Media Literacy and Audiovisual Languages: A Case Study from Belgium

    ERIC Educational Resources Information Center

    Van Bauwel, Sofie

    2008-01-01

    This article examines the use of media in the construction of a "new" language for children. We studied how children acquire and use media literacy skills through their engagement in an educational art project. This media literacy project is rooted in the realm of audiovisual media, within which children's sound and visual worlds are the…

  18. Design for Safety: The Audiovisual Cart Hazard Revisited.

    ERIC Educational Resources Information Center

    Sherry, Annette C.; Strojny, Allan

    1993-01-01

    Discussion of the design of carts for moving audiovisual equipment in schools emphasizes safety factors. Topics addressed include poor design of top-heavy carts that has led to deaths and injuries; cart navigation; new manufacturing standards; and an alternative, safer cart design. (Contains 13 references.) (LRW)

  20. Audiovisual vs Paper-and-Pencil Testing of Occupational Competence.

    ERIC Educational Resources Information Center

    Brittain, Clay V.; Brittain, Mary M.

    An exploratory application of audio-visual media to the occupation proficiency testing of enlisted soldiers is described. A five-response console which had been developed as a laboratory model was used. A DataQuest machine was adapted for delivery of a multiple choice test of job knowledge. In being tested, the soldier sat in front of the console…

  1. Neural Development of Networks for Audiovisual Speech Comprehension

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  2. Selected Bibliography and Audiovisual Materials for Environmental Education.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Instruction.

    This guide to resource materials on environmental education is in two sections: 1) Selected Bibliography of Printed Materials, compiled in April, 1970; and, 2) Audio-Visual materials, Films and Filmstrips, compiled in February, 1971. 99 book annotations are given with an indicator of elementary, junior or senior high school levels. Other book…

  3. A Guide to the Literature of Audiovisual Education.

    ERIC Educational Resources Information Center

    Lewis, John P., Jr.

    Although this generously annotated selective bibliography primarily emphasizes audiovisual reference works of interest to educational researchers, a secondary emphasis is on publications in various specific subject areas. In addition to their value for researchers, the latter materials are of potential interest to educators planning to use or…

  4. Effect of Audiovisual Cancer Programs on Patients and Families.

    ERIC Educational Resources Information Center

    Cassileth, Barrie R.; And Others

    1982-01-01

    Four audiovisual programs about cancer and cancer treatment were evaluated. Cancer patients, their families, and friends were asked to complete questionnaires before and after watching a program to determine the effects of the program on their knowledge of cancer, anxiety levels, and perceived ability to communicate with the staff. (Author/MLW)

  5. Bi-directional audiovisual influences on temporal modulation discrimination.

    PubMed

    Varghese, Leonard; Mathias, Samuel R; Bensussen, Seth; Chou, Kenny; Goldberg, Hannah R; Sun, Yile; Sekuler, Robert; Shinn-Cunningham, Barbara G

    2017-04-01

    Cross-modal interactions of auditory and visual temporal modulation were examined in a game-like experimental framework. Participants observed an audiovisual stimulus (an animated, sound-emitting fish) whose sound intensity and/or visual size oscillated sinusoidally at either 6 or 7 Hz. Participants made speeded judgments about the modulation rate in either the auditory or visual modality while doing their best to ignore information from the other modality. Modulation rate in the task-irrelevant modality matched the modulation rate in the task-relevant modality (congruent conditions), was at the other rate (incongruent conditions), or was absent (unmodulated conditions). Both performance accuracy and parameter estimates from drift-diffusion decision modeling indicated that (1) the presence of temporal modulation in both modalities, regardless of whether modulations were matched or mismatched in rate, resulted in audiovisual interactions; (2) congruence in audiovisual temporal modulation resulted in more reliable information processing; and (3) the effects of congruence appeared to be stronger when judging visual modulation rates (i.e., audition influencing vision) than when judging auditory modulation rates (i.e., vision influencing audition). The results demonstrate that audiovisual interactions from temporal modulations are bi-directional in nature, but with potential asymmetries in the size of the effect in each direction.
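
    The drift-diffusion modeling mentioned above can be illustrated with a basic simulation. The sketch below is a generic drift-diffusion model, not the authors' fitted model; all parameter values are illustrative assumptions, with a higher drift rate standing in for the more reliable information processing reported for congruent modulation.

    import numpy as np

    def simulate_ddm(drift, boundary, noise_sd=1.0, non_decision=0.3,
                     dt=0.002, max_t=2.0, n_trials=500, seed=0):
        """Simulate choices and response times from a basic drift-diffusion model.

        Evidence starts at 0 and accumulates with a constant drift plus Gaussian
        noise until it crosses +boundary (choice 1) or -boundary (choice 0).
        Trials that do not reach a bound within max_t are dropped.
        """
        rng = np.random.default_rng(seed)
        choices, rts = [], []
        n_steps = int(max_t / dt)
        for _ in range(n_trials):
            x = 0.0
            for step in range(n_steps):
                x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
                if abs(x) >= boundary:
                    choices.append(int(x > 0))
                    rts.append(non_decision + (step + 1) * dt)
                    break
        return np.array(choices), np.array(rts)

    # Illustrative parameters only (e.g., a larger drift for congruent modulation).
    choices, rts = simulate_ddm(drift=1.5, boundary=1.0)
    print(f"accuracy {choices.mean():.2f}, mean RT {rts.mean():.2f} s")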

  6. An Audio-Visual Lecture Course in Russian Culture

    ERIC Educational Resources Information Center

    Leighton, Lauren G.

    1977-01-01

    An audio-visual course in Russian culture is given at Northern Illinois University. A collection of 4-5,000 color slides is the basis for the course, with lectures focussed on literature, philosophy, religion, politics, art and crafts. Acquisition, classification, storage and presentation of slides, and organization of lectures are discussed. (CHK)

  7. The mediodorsal thalamus as a higher order thalamic relay nucleus important for learning and decision-making.

    PubMed

    Mitchell, Anna S

    2015-07-01

    Recent evidence from monkey models of cognition shows that the magnocellular subdivision of the mediodorsal thalamus (MDmc) is more critical for learning new information than for retention of previously acquired information. Further, consistent evidence in animal models shows the mediodorsal thalamus (MD) contributes to adaptive decision-making. It is assumed that the prefrontal cortex (PFC) and medial temporal lobes govern these cognitive processes, so this evidence suggests that the MD also contributes to them. Anatomically, the MD has extensive excitatory cortico-thalamo-cortical connections, especially with the PFC. MD also receives modulatory inputs from forebrain, midbrain and brainstem regions. It is suggested that the MD is a higher order thalamic relay of the PFC due to the dual cortico-thalamic inputs from layer V ('driver' inputs capable of transmitting a message) and layer VI ('modulator' inputs) of the PFC. Thus, the MD thalamic relay may support the transfer of information across the PFC via this indirect thalamic route. This review summarizes the current knowledge about the anatomy of MD as a higher order thalamic relay. It also reviews behavioral and electrophysiological studies in animals to consider how MD might support the transfer of information across the cortex during learning and decision-making. Current evidence suggests the MD is particularly important during rapid trial-by-trial associative learning and decision-making paradigms that involve multiple cognitive processes. Further studies need to consider the influence of the MD higher order relay to advance our knowledge about how the cortex processes higher order cognition.

  8. Cogging effect minimization in PMSM position servo system using dual high-order periodic adaptive learning compensation.

    PubMed

    Luo, Ying; Chen, Yangquan; Pi, Youguo

    2010-10-01

    The cogging effect, which can be treated as a type of position-dependent periodic disturbance, is a serious disadvantage of the permanent magnet synchronous motor (PMSM). In this paper, based on a simulation system model of PMSM position servo control, the cogging force, viscous friction, and applied load in the real PMSM control system are considered and presented. A dual high-order periodic adaptive learning compensation (DHO-PALC) method is proposed to minimize the cogging effect on the PMSM position and velocity servo system. In this DHO-PALC scheme, stored information from more than one previous period, for both the composite tracking error and the estimate of the cogging force, is used to update the control law. An asymptotic stability proof for the proposed DHO-PALC scheme is presented. Simulations on the PMSM servo system model illustrate the proposed method. When a constant speed reference is applied, the DHO-PALC can achieve a faster learning convergence speed than the first-order periodic adaptive learning compensation (FO-PALC). Moreover, when the designed reference signal changes periodically, the proposed DHO-PALC can obtain not only a faster convergence speed but also a much smaller final error bound than the FO-PALC. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
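
    To make the idea of a high-order periodic update concrete, the sketch below shows a simplified discrete-time compensation law that combines stored compensation and error values from several previous periods. It illustrates the general principle only; the weights, gain, and toy disturbance are assumptions and do not reproduce the paper's DHO-PALC control law or its stability conditions.

    import numpy as np

    def high_order_periodic_update(u_hist, e_hist, k, period, weights, gamma):
        """Simplified high-order periodic adaptive learning update (illustrative).

        The new compensation at sample k is a weighted combination of the
        compensation and tracking error stored one, two, ... periods earlier:
            u[k] = sum_j w_j * (u[k - j*period] + gamma * e[k - j*period])
        """
        u_new = 0.0
        for j, w in enumerate(weights, start=1):
            idx = k - j * period
            if idx >= 0:
                u_new += w * (u_hist[idx] + gamma * e_hist[idx])
        return u_new

    # Toy position-dependent periodic disturbance (stand-in for the cogging force).
    period, n_periods = 200, 8
    n = period * n_periods
    disturbance = 0.5 * np.sin(2 * np.pi * np.arange(n) / period)

    u = np.zeros(n)          # learned compensation
    e = np.zeros(n)          # tracking error proxy: disturbance minus compensation
    weights, gamma = (0.6, 0.4), 0.8   # assumed second-order weights and learning gain
    for k in range(n):
        u[k] = high_order_periodic_update(u, e, k, period, weights, gamma)
        e[k] = disturbance[k] - u[k]

    print("peak |error| per period:",
          np.round([np.max(np.abs(e[p * period:(p + 1) * period])) for p in range(n_periods)], 3))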

  9. The Max-Min High-Order Dynamic Bayesian Network for Learning Gene Regulatory Networks with Time-Delayed Regulations.

    PubMed

    Li, Yifeng; Chen, Haifen; Zheng, Jie; Ngom, Alioune

    2016-01-01

    Accurately reconstructing a gene regulatory network (GRN) from gene expression data is a challenging task in systems biology. Although some progress has been made, the performance of GRN reconstruction still has much room for improvement. Because many regulatory events are asynchronous, learning gene interactions with multiple time delays is an effective way to improve the accuracy of GRN reconstruction. Here, we propose a new approach, called the Max-Min high-order dynamic Bayesian network (MMHO-DBN), by extending the Max-Min hill-climbing Bayesian network technique originally devised for learning a Bayesian network's structure from static data. Our MMHO-DBN can explicitly model the time lags between regulators and targets in an efficient manner. It first uses constraint-based ideas to limit the space of potential structures, and then applies search-and-score ideas to search for an optimal HO-DBN structure. The performance of MMHO-DBN for GRN reconstruction was evaluated using both synthetic and real gene expression time-series data. Results show that MMHO-DBN is more accurate than current time-delayed GRN learning methods, and has intermediate computational cost. Furthermore, it is able to learn long time-delayed relationships between genes. We applied sensitivity analysis to our model to study how performance varies across different parameter settings; the results provide guidance on setting the parameters of MMHO-DBN.
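
    The two-stage idea described above (constraint-based restriction of candidate structures, followed by score-based search over lagged parents) can be sketched as follows. This is a simplified stand-in, not the MMHO-DBN algorithm: a lagged-correlation filter replaces the Max-Min conditional-independence tests, and a greedy BIC-style search over a linear-Gaussian model replaces the full HO-DBN scoring.

    import numpy as np
    from itertools import product

    def candidate_parents(expr, target, max_lag, corr_thresh=0.3):
        """Constraint step (simplified): keep (gene, lag) pairs whose lagged
        correlation with the target exceeds a threshold."""
        n_genes, T = expr.shape
        cands = []
        for g, lag in product(range(n_genes), range(1, max_lag + 1)):
            r = np.corrcoef(expr[g, :T - lag], expr[target, lag:])[0, 1]
            if abs(r) > corr_thresh:
                cands.append((g, lag))
        return cands

    def bic_score(X, y):
        """BIC-style score of a linear-Gaussian model y ~ X (higher is better)."""
        n = len(y)
        Xd = np.column_stack([np.ones(n), X]) if X.size else np.ones((n, 1))
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        rss = np.sum((y - Xd @ beta) ** 2)
        return -n * np.log(rss / n + 1e-12) - Xd.shape[1] * np.log(n)

    def greedy_parents(expr, target, max_lag):
        """Score step (simplified): greedily add (gene, lag) parents while the score improves."""
        T = expr.shape[1]
        y = expr[target, max_lag:]

        def design(parents):
            cols = [expr[g, max_lag - lag:T - lag] for g, lag in parents]
            return np.column_stack(cols) if cols else np.empty((len(y), 0))

        cands, parents = candidate_parents(expr, target, max_lag), []
        best, improved = bic_score(design(parents), y), True
        while improved:
            improved = False
            for c in cands:
                if c in parents:
                    continue
                s = bic_score(design(parents + [c]), y)
                if s > best:
                    best, parents, improved = s, parents + [c], True
                    break
        return parents

    # Toy data: gene 1 follows gene 0 with a two-step delay.
    rng = np.random.default_rng(1)
    g0 = rng.standard_normal(200)
    g1 = 0.9 * np.roll(g0, 2) + 0.1 * rng.standard_normal(200)
    print(greedy_parents(np.vstack([g0, g1]), target=1, max_lag=3))  # expected: [(0, 2)]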

  10. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

    A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, referred to as the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal-reconstruction and dictionary-update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
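
    The regularizer described above can be written down directly. The sketch below implements a smoothed lp pseudo-norm of the second-order difference and minimizes the resulting objective with plain gradient descent on a toy problem; the measurement matrix, smoothing constant, and step size are illustrative assumptions, and the paper's sequential conjugate-gradient solver and dictionary learning step are not reproduced.

    import numpy as np

    def second_order_diff(x):
        """Second-order difference of x, i.e. x[n] - 2*x[n-1] + x[n-2]."""
        return x[2:] - 2 * x[1:-1] + x[:-2]

    def lp2d_penalty(x, p=0.5, eps=1e-6):
        """Smoothed lp pseudo-norm of the second-order difference of x."""
        d = second_order_diff(x)
        return np.sum((d * d + eps) ** (p / 2))

    def lp2d_penalty_grad(x, p=0.5, eps=1e-6):
        """Gradient of the smoothed penalty with respect to x."""
        d = second_order_diff(x)
        w = p * (d * d + eps) ** (p / 2 - 1) * d   # derivative w.r.t. each difference
        g = np.zeros_like(x)
        g[:-2] += w
        g[1:-1] += -2 * w
        g[2:] += w
        return g

    def reconstruct(y, Phi, p=0.5, lam=0.1, step=5e-3, n_iter=4000):
        """Gradient descent on ||y - Phi x||^2 + lam * lp2d(x) (illustrative solver)."""
        x = Phi.T @ y                         # simple initial estimate
        for _ in range(n_iter):
            grad = 2 * Phi.T @ (Phi @ x - y) + lam * lp2d_penalty_grad(x, p)
            x -= step * grad
        return x

    # Toy example: recover a smooth (ECG-like) signal from random projections.
    rng = np.random.default_rng(0)
    n, m = 128, 64
    x_true = np.sin(2 * np.pi * np.arange(n) / 32)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = reconstruct(Phi @ x_true, Phi)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    print("lp(2d) penalty of estimate:", lp2d_penalty(x_hat))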

  11. Statistical Methods in Ai: Rare Event Learning Using Associative Rules and Higher-Order Statistics

    NASA Astrophysics Data System (ADS)

    Iyer, V.; Shetty, S.; Iyengar, S. S.

    2015-07-01

    Rare-event learning has received little recent research attention because of the unavailability of algorithms that deal with big samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether the streams are labelled or label-less. A sampling overlap algorithm such as Brooks-Iyengar is used for dealing with noisy sensor streams. We extend the existing noise pre-processing algorithms using Data-Cleaning trees. Pre-processing using an ensemble of trees with bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove, using Hoeffding bounds, that temporal-window-based sampling from sensor data streams converges after n samples, which can be used for fast prediction of new samples in real time. The Data-Cleaning tree model uses a nonparametric node-splitting technique that can be learned iteratively and scales linearly in memory consumption for any size of input stream. The improved task-based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show, using empirical datasets, that the explicit rule-learning computation is linear in time and depends only on the number of leaves present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields the minimum number (m) of leaves, keeping pre-processing computation to n × t log m, compared with N² for the Gram matrix. We also show that the task-based feature induction yields higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
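
    The Hoeffding bound invoked above has a simple closed form. The sketch below is a generic illustration (the abstract does not give the exact form used): for bounded samples, it returns the deviation guaranteed for a window of n samples, or conversely the window size needed for a target deviation and confidence.

    import math

    def hoeffding_epsilon(n, delta, value_range=1.0):
        """With probability >= 1 - delta, the mean of n bounded samples lies
        within epsilon of its expectation (Hoeffding, 1963)."""
        return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

    def samples_needed(epsilon, delta, value_range=1.0):
        """Smallest window size n guaranteeing deviation <= epsilon with confidence 1 - delta."""
        return math.ceil((value_range ** 2) * math.log(2.0 / delta) / (2.0 * epsilon ** 2))

    # e.g. sensor readings scaled to [0, 1]: window size for a 0.05 deviation at 95% confidence
    print(samples_needed(epsilon=0.05, delta=0.05))         # -> 738
    print(round(hoeffding_epsilon(n=738, delta=0.05), 3))   # -> 0.05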

  12. Context-specific effects of musical expertise on audiovisual integration

    PubMed Central

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  13. Neural correlates of audiovisual integration in music reading.

    PubMed

    Nichols, Emily S; Grahn, Jessica A

    2016-10-01

    Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity (MMN)) as well as later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Planning Schools for Use of Audio-Visual Materials. No. 3: The Audio-Visual Materials Center.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC. Dept. of Audiovisual Instruction.

    This manual discusses the role, organizational patterns, expected services, and space and housing needs of the audio-visual instructional materials center. In considering the housing of basic functions, photographs, floor layouts, diagrams, and specifications of equipment are presented. An appendix includes a 77-item bibliography, a 7-page list of…

  16. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advances in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduced the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the BCI user's motor decisions and strengthen the feeling of control over the robot. Our results shed light on the possibility of improving robot control by combining multisensory feedback for the BCI user. PMID:24987350

  17. An Internet Dialogue: Mandatory Student Community Service, Court-Ordered Volunteering, and Service-Learning.

    ERIC Educational Resources Information Center

    Ellis, Susan; And Others

    1998-01-01

    Excerpts from an Internet debate identify issues and opinions on mandatory community service as a graduation requirement and court-ordered volunteering. The debate ranges over such topics as quality of the service experience, freedom of choice, intended outcomes, and values conflicts. (SK)

  18. Promoting Positive Peer Interaction through Cooperative Learning, Community Building, Higher-Order Thinking and Conflict Management.

    ERIC Educational Resources Information Center

    Carlson, Kathryn R.

    Research shows that probable causes for disruptive classroom behavior are broken social bonds, violent environment, stress and conflict, and inadequate curriculum coupled with ineffective teaching methods. This report discusses a program to decrease negative peer interaction in order to improve academic achievement and interpersonal relationships.…

  19. 76 FR 15311 - Legacy Learning Systems, Inc.; Analysis of Proposed Consent Order To Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-21

    ... agreement's proposed order. The practices challenged in this case relate to the advertising of respondents... in articles, blog posts, or other online editorial copy that contained hyperlinks to respondents' Web... respondents, in connection with the advertising of any product or service, from misrepresenting the status...

  20. Problem-Based Learning and Use of Higher-Order Thinking by Emergency Medical Technicians

    ERIC Educational Resources Information Center

    Rosenberger, Paul

    2013-01-01

    Emergency Medical Technicians (EMTs) often handle chaotic life-and-death situations that require higher-order thinking skills. Improving the pass rate of EMT students depends on many factors, including the use of proven and effective teaching methods. Results from recent research about effective teaching have suggested that the instructional…

  1. Audiovisual physics reports: students' video production as a strategy for the didactic laboratory

    NASA Astrophysics Data System (ADS)

    Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.

    2012-01-01

    Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can work as a motivating aspect to make them active and reflective in their learning, intellectually engaged in a recursive process. This project was implemented in high school level physics laboratory classes resulting in 22 videos which are considered as audiovisual reports and analysed under two components: theoretical and experimental. This kind of project allows the students to spontaneously use features such as music, pictures, dramatization, animations, etc, even when the didactic laboratory may not be the place where aesthetic and cultural dimensions are generally developed. This could be due to the fact that digital media are more legitimately used as cultural tools than as teaching strategies.

  2. Voice over: Audio-visual congruency and content recall in the gallery setting.

    PubMed

    Fairhurst, Merle T; Scott, Minnie; Deroy, Ophelia

    2017-01-01

    Experimental research has shown that pairs of stimuli which are congruent and assumed to 'go together' are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.

  3. Reversal learning in C58 mice: Modeling higher order repetitive behavior.

    PubMed

    Whitehouse, Cristina M; Curry-Pochy, Lisa S; Shafer, Robin; Rudy, Joseph; Lewis, Mark H

    2017-08-14

    Restricted, repetitive behaviors are diagnostic for autism and prevalent in other neurodevelopmental disorders. These behaviors cluster as repetitive sensory-motor behaviors and behaviors reflecting resistance to change. The C58 mouse strain is a promising model for these behaviors as it emits high rates of aberrant repetitive sensory-motor behaviors. The purpose of the present study was to extend characterization of the C58 model to resistance to change. This was done by comparing C58 to C57BL/6 mice on a reversal learning task under either a 100% or 80%/20% probabilistic reinforcement schedule. In addition, the effect of environmental enrichment on performance of this task was assessed as this rearing condition markedly reduces repetitive sensory-motor behavior in C58 mice. Little difference was observed between C58 and control mice under a 100% schedule of reinforcement. The 80%/20% probabilistic schedule of reinforcement generated substantial strain differences, however. Importantly, no strain difference was observed in acquisition, but C58 mice were markedly impaired in their ability to reverse their pattern of responding from the previously high density reinforcement side. Environmental enrichment did not impact acquisition under the probabilistic reinforcement schedule, but enriched C58 mice performed significantly better than standard housed C58 mice in reversal learning. Thus, C58 mice exhibit behaviors that reflect both repetitive sensory motor behaviors as well as behavior that reflects resistance to change. Moreover, both clusters of repetitive behavior were attenuated by environmental enrichment. Such findings, along with the reported social deficits in C58 mice, increase the translational value of this mouse model to autism. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. The influence of trial order on learning from reward vs. punishment in a probabilistic categorization task: experimental and computational analyses.

    PubMed

    Moustafa, Ahmed A; Gluck, Mark A; Herzallah, Mohammad M; Myers, Catherine E

    2015-01-01

    Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials; (2) negative feedback for incorrect responses in punishment trials; and (3) no feedback for incorrect answers in reward trials and correct answers in punishment trials. Hence, trials without feedback are ambiguous, and may represent either successful avoidance of punishment or failure to obtain reward. In Experiment 1, the first group of subjects received an intermixed task in which reward and punishment trials were presented in the same block, as a standard baseline task. In Experiment 2, a second group completed the separated task, in which reward and punishment trials were presented in separate blocks. Additionally, in order to understand the mechanisms underlying performance in the experimental conditions, we fit individual data using a Q-learning model. Results from Experiment 1 show that subjects who completed the intermixed task paradoxically valued the no-feedback outcome as a reinforcer when it occurred on reinforcement-based trials, and as a punisher when it occurred on punishment-based trials. This is supported by patterns of empirical responding, where subjects showed more win-stay behavior following an explicit reward than following an omission of punishment, and more lose-shift behavior following an explicit punisher than following an omission of reward. In Experiment 2, results showed similar performance whether subjects received reward-based or punishment-based trials first. However, when the Q-learning model was applied to these data, there were differences between subjects in the reward
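
    The model fitting mentioned above relies on the standard Q-learning update, in which an action's value moves toward the received outcome in proportion to a learning rate. The sketch below is a generic, minimal illustration of that update with a softmax choice rule; the parameter values, the 80% reward contingency, and the outcome coding are assumptions, not the authors' fitted model.

```python
# Minimal Q-learning sketch for a two-alternative choice task with
# reward (+1) or no feedback (0). Illustrative only; the learning rate,
# inverse temperature, and reward contingency are generic assumptions,
# not the specific model reported in the abstract.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.2, 3.0          # learning rate, inverse temperature
Q = np.zeros(2)                 # action values for the two responses

def choose(Q):
    """Softmax choice between the two actions."""
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()
    return rng.choice(2, p=p)

for trial in range(200):
    action = choose(Q)
    # Hypothetical feedback: action 0 is "correct" and rewarded 80% of the time
    outcome = 1.0 if (action == 0 and rng.random() < 0.8) else 0.0
    Q[action] += alpha * (outcome - Q[action])   # prediction-error update

print("Learned action values:", Q)
```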

  5. Simulation of Parkinsonian gait by fusing trunk learned patterns and a lower limb first order model

    NASA Astrophysics Data System (ADS)

    Cárdenas, Luisa; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Parkinson's disease is a neurodegenerative disorder that progressively affects movement. Gait analysis is therefore crucial for determining the degree of disease as well as for orienting the diagnosis. However, gait examination is largely subjective and therefore prone to errors or misinterpretations, even with great expertise. In addition, conventional evaluation tracks only general gait variables, so subtle changes that could alter the course of treatment may be overlooked. This work presents a functional gait model that simulates the center of gravity (CoG) trajectory for different Parkinson's disease stages. The model mimics the gait trajectory by coupling two components: a double pendulum (single stance phase) and a spring-mass model (double stance). Realistic simulations for different Parkinson's disease stages are then obtained by integrating into the model a set of trunk bending patterns learned from real patients. The proposed model was compared with the CoG of real Parkinsonian gaits in stages 2, 3, and 4, achieving correlation coefficients of 0.88, 0.92, and 0.86, respectively.
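
    To make the comparison step concrete, the sketch below integrates a toy vertical spring-mass model of the CoG with forward-Euler steps and correlates it with a synthetic "measured" trace; the dynamics, parameter values, and data are illustrative assumptions and do not reproduce the coupled double-pendulum/spring-mass model of the study.

```python
# Illustrative sketch: integrate a simple vertical spring-mass model of the
# centre of gravity (CoG) and compare it with a "measured" trace via Pearson's r.
# All parameters and data are assumptions made for illustration only.
import numpy as np

m, k = 70.0, 2.0e4                 # body mass (kg), leg stiffness (N/m)
dt, n = 0.001, 500                 # time step (s), number of samples

z, v = 0.02, 0.0                   # initial displacement (m) and velocity (m/s)
sim = np.empty(n)
for i in range(n):                 # forward-Euler integration of m * z'' = -k * z
    a = -(k / m) * z
    v += a * dt
    z += v * dt
    sim[i] = z

t = np.arange(n) * dt
# Synthetic "measured" CoG: same oscillation plus a little noise
measured = 0.02 * np.cos(np.sqrt(k / m) * t) \
    + 0.001 * np.random.default_rng(1).normal(size=n)

r = np.corrcoef(sim, measured)[0, 1]
print(f"Pearson correlation between simulated and 'measured' CoG: r = {r:.2f}")
```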

  6. Dissociation of first- and second-order motion systems by perceptual learning

    PubMed Central

    Chubb, Charles

    2013-01-01

    Previous studies investigating transfer of perceptual learning between luminance-defined (LD) motion and texture-contrast-defined (CD) motion tasks have found little or no transfer from LD to CD motion tasks but nearly perfect transfer from CD to LD motion tasks. Here, we introduce a paradigm that yields a clean double dissociation: LD training yields no transfer to the CD task, but more interestingly, CD training yields no transfer to the LD task. Participants were trained in two variants of a global motion task. In one (LD) variant, motion was defined by tokens that differed from the background in mean luminance. In the other (CD) variant, motion was defined by tokens that had mean luminance equal to the background but differed from the background in texture contrast. The task was to judge whether the signal tokens were moving to the right or to the left. Task difficulty was varied by manipulating the proportion of tokens that moved coherently across the four frames of the stimulus display. Performance in each of the LD and CD variants of the task was measured as training proceeded. In each task, training produced substantial improvement in performance in the trained task; however, in neither case did this improvement show any significant transfer to the nontrained task. PMID:22477056
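
    The coherence manipulation described above can be made concrete with a short sketch of a single frame update, in which a chosen proportion of tokens steps in the signal direction while the remainder move randomly; the token count, step size, and rightward signal direction are illustrative assumptions, not the study's stimulus code.

```python
# Minimal sketch of a global-motion frame update: a given proportion of tokens
# steps coherently (here, rightward) while the rest move in random directions.
# Token count, step size, and the left/right convention are illustrative choices.
import numpy as np

rng = np.random.default_rng(42)
n_tokens, coherence, step = 100, 0.3, 1.0   # 30% coherent motion, 1-unit steps

positions = rng.uniform(0, 200, size=(n_tokens, 2))   # x, y in arbitrary units
coherent = rng.random(n_tokens) < coherence           # which tokens carry the signal

# Signal tokens move rightward (angle 0); noise tokens move in random directions.
angles = np.where(coherent, 0.0, rng.uniform(0, 2 * np.pi, n_tokens))
positions[:, 0] += step * np.cos(angles)
positions[:, 1] += step * np.sin(angles)

print(f"{coherent.sum()} of {n_tokens} tokens moved coherently this frame")
```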

  7. Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech.

    PubMed

    Alm, Magnus; Behne, Dawn

    2013-10-01

    Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.

  8. Lipreading and audiovisual speech recognition across the adult lifespan: Implications for audiovisual integration.

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Hale, Sandra; Sommers, Mitchell

    2016-06-01

    In this study of visual (V-only) and audiovisual (AV) speech recognition in adults aged 22-92 years, the rate of age-related decrease in V-only performance was more than twice that in AV performance. Both auditory-only (A-only) and V-only performance were significant predictors of AV speech recognition, but age did not account for additional (unique) variance. Blurring the visual speech signal decreased speech recognition, and in AV conditions involving stimuli associated with equivalent unimodal performance for each participant, speech recognition remained constant from 22 to 92 years of age. Finally, principal components analysis revealed separate visual and auditory factors, but no evidence of an AV integration factor. Taken together, these results suggest that the benefit that comes from being able to see as well as hear a talker remains constant throughout adulthood and that changes in this AV advantage are entirely driven by age-related changes in unimodal visual and auditory speech recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Lipreading and Audiovisual Speech Recognition across the Adult Lifespan: Implications for Audiovisual Integration

    PubMed Central

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Hale, Sandra; Sommers, Mitchell

    2016-01-01

    In this study of visual (V-only) and audiovisual (AV) speech recognition in adults aged 22-92 years, the rate of age-related decrease in V-only performance was more than twice that in AV performance. Both auditory-only (A-only) and V-only performance were significant predictors of AV speech recognition, but age did not account for additional (unique) variance. Blurring the visual speech signal decreased speech recognition, and in AV conditions involving stimuli associated with equivalent unimodal performance for each participant, speech recognition remained constant from 22 to 92 years of age. Finally, principal components analysis revealed separate visual and auditory factors, but no evidence of an AV integration factor. Taken together, these results suggest that the benefit that comes from being able to see as well as hear a talker remains constant throughout adulthood, and that changes in this AV advantage are entirely driven by age-related changes in unimodal visual and auditory speech recognition. PMID:27294718

  10. Psychometric testing of the Pecka Grading Rubric for evaluating higher-order thinking in distance learning.

    PubMed

    Pecka, Shannon; Schmid, Kendra; Pozehl, Bunny

    2014-12-01

    This article describes development of the Pecka Grading Rubric (PGR) as a strategy to facilitate and evaluate students' higher-order thinking in discussion boards. The purpose of this study was to describe the psychometric properties of the PGR. Rubric reliability was pilot tested on a discussion board assignment completed by 15 senior student registered nurse anesthetists enrolled in an Advanced Principles of Anesthesia course. Interrater and intrarater reliabilities were tested using an intraclass correlation coefficient (ICC) to evaluate absolute agreement of scoring. Raters gave each category a score, category scores were summed, and a total score was calculated for the entire rubric. Interrater (ICC = 0.939, P < .001) and intrarater (ICC = 0.902 to 0.994, P < .001) reliabilities were excellent for total point scores. A content validity index was used to evaluate content validity; raters evaluated the content validity of each cell of the PGR. The content validity index (0.8-1.0) was acceptable. Known-group validity was evaluated by comparing graduate student registered nurse anesthetists (N = 7) with undergraduate senior nursing students (N = 13). Beginning evidence indicates a valid and reliable instrument that measures higher-order thinking in the student registered nurse anesthetist.
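
    For readers unfamiliar with the statistic, the sketch below computes a two-way random-effects, absolute-agreement, single-rater ICC(2,1) from a small invented rating matrix; the exact ICC form and data used in the study may differ, so this is only a generic illustration of the computation.

```python
# Sketch: ICC(2,1) -- two-way random effects, absolute agreement, single rater.
# The rating matrix is invented; the study's exact ICC model may differ.
import numpy as np

# rows = discussion-board submissions (targets), columns = raters
ratings = np.array([
    [14, 15, 14],
    [18, 17, 18],
    [10, 11, 10],
    [16, 16, 15],
    [12, 13, 12],
], dtype=float)

n, k = ratings.shape
grand = ratings.mean()
row_means = ratings.mean(axis=1)
col_means = ratings.mean(axis=0)

ss_rows = k * ((row_means - grand) ** 2).sum()    # between-target variability
ss_cols = n * ((col_means - grand) ** 2).sum()    # between-rater variability
ss_total = ((ratings - grand) ** 2).sum()
ss_error = ss_total - ss_rows - ss_cols

msr = ss_rows / (n - 1)                  # mean square for targets
msc = ss_cols / (k - 1)                  # mean square for raters
mse = ss_error / ((n - 1) * (k - 1))     # residual mean square

icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```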

  11. Informatics in radiology: evaluation of an e-learning platform for teaching medical students competency in ordering radiologic examinations.

    PubMed

    Marshall, Nina L; Spooner, Muirne; Galvin, P Leo; Ti, Joanna P; McElvaney, N Gerald; Lee, Michael J

    2011-01-01

    A preliminary audit of orders for computed tomography was performed to evaluate the typical performance of interns ordering radiologic examinations. According to the audit, the interns showed only minimal improvement after 8 months of work experience. The online radiology ordering module (ROM) program included baseline assessment of student performance (part I), online learning with the ROM (part II), and follow-up assessment of performance with simulated ordering with the ROM (part III). A curriculum blueprint determined the content of the ROM program, with an emphasis on practical issues, including provision of logistic information, clinical details, and safety-related information. Appropriate standards were developed by a committee of experts, and detailed scoring systems were devised for assessment. The ROM program was successful in addressing practical issues in a simulated setting. In the part I assessment, the mean score for noting contraindications for contrast media was 24%; this score increased to 59% in the part III assessment (P = .004). Similarly, notification of methicillin-resistant Staphylococcus aureus status and pregnancy status and provision of referring physician contact information improved significantly. The quality of the clinical notes was stable, with good initial scores. Part III testing showed overall improvement, with the mean score increasing from 61% to 76% (P < .0001). In general, medical students lack the core knowledge that is needed for good-quality ordering of radiology services, and the experience typically afforded to interns does not address this lack of knowledge. The ROM program was a successful intervention that resulted in statistically significant improvements in the quality of radiologic examination orders, particularly with regard to logistic and radiation safety issues.

  12. Your Most Essential Audiovisual Aid--Yourself!

    ERIC Educational Resources Information Center

    Hamp-Lyons, Elizabeth

    2012-01-01

    Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…

  13. Interlibrary loan of audiovisual materials in the health sciences: how a system operates in New Jersey.

    PubMed

    Crowley, C M

    1976-10-01

    An audiovisual loan program developed by the library of the College of Medicine and Dentistry of New Jersey is described. This program, supported by an NLM grant, has circulated audiovisual software from CMDNJ to libraries since 1974. Project experiences and statistics reflect the great demand for audiovisuals by health science libraries and demonstrate that a borrowing system following the pattern of traditional interlibrary loan can operate effectively and efficiently to serve these needs.

  14. Aging, audiovisual integration, and the principle of inverse effectiveness.

    PubMed

    Tye-Murray, Nancy; Sommers, Mitchell; Spehar, Brent; Myerson, Joel; Hale, Sandra

    2010-10-01

    The purpose of this investigation was to compare the ability of young and older adults to integrate auditory and visual sentence materials under conditions of good and poor signal clarity. The principle of inverse effectiveness (PoIE), which characterizes many neuronal and behavioral phenomena related to multisensory integration, asserts that as unimodal performance declines, integration is enhanced. Thus, the PoIE predicts that both young and older adults will show enhanced integration of auditory and visual speech stimuli when these stimuli are degraded. More importantly, because older adults' unimodal speech recognition skills decline in both the auditory and visual domains, the PoIE predicts that older adults will show enhanced integration during audiovisual speech recognition relative to younger adults. This study provides a test of these predictions. Fifty-three young and 53 older adults with normal hearing completed the closed-set Build-A-Sentence test and the CUNY Sentence test in a total of eight conditions; four unimodal and four audiovisual. In the unimodal conditions, stimuli were either auditory or visual and either easier or harder to perceive; the audiovisual conditions were formed from all the combinations of the unimodal signals. The hard visual signals were created by degrading video contrast, and the hard auditory signals were created by decreasing the signal to noise ratio. Scores from the unimodal and bimodal conditions were used to compute auditory enhancement and integration enhancement measures. Contrary to the PoIE, neither the auditory enhancement nor integration enhancement measures increased when signal clarity in the auditory or visual channel of audiovisual speech stimuli was decreased, nor was either measure higher for older adults than for young adults. In audiovisual conditions with easy visual stimuli, the integration enhancement measure for older adults was equivalent to that for young adults. However, in conditions with hard

  15. Neural development of networks for audiovisual speech comprehension.

    PubMed

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L

    2010-08-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the neurobiological substrate in the child compares to the adult is unknown. In particular, developmental differences in the network for audiovisual speech comprehension could manifest through the incorporation of additional brain regions, or through different patterns of effective connectivity. In the present study we used functional magnetic resonance imaging and structural equation modeling (SEM) to characterize the developmental changes in network interactions for audiovisual speech comprehension. The brain response was recorded while children 8- to 11-years-old and adults passively listened to stories under audiovisual (AV) and auditory-only (A) conditions. Results showed that in children and adults, AV comprehension activated the same fronto-temporo-parietal network of regions known for their contribution to speech production and perception. However, the SEM network analysis revealed age-related differences in the functional interactions among these regions. In particular, the influence of the posterior inferior frontal gyrus/ventral premotor cortex on supramarginal gyrus differed across age groups during AV, but not A speech. This functional pathway might be important for relating motor and sensory information used by the listener to identify speech sounds. Further, its development might reflect changes in the mechanisms that relate visual speech information to articulatory speech representations through experience producing and perceiving speech. 2009 Elsevier Inc. All rights reserved.

  16. The role of emotion in dynamic audiovisual integration of faces and voices.

    PubMed

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  17. The role of emotion in dynamic audiovisual integration of faces and voices

    PubMed Central

    Kokinous, Jenny; Kotz, Sonja A.; Tavano, Alessandro; Schröger, Erich

    2015-01-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. PMID:25147273

  18. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., including captions and published and unpublished catalogs, inventories, indexes, and production files and... documentation identifying creators of audiovisual products, their precise relationship to the agency, and the...

  19. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., including captions and published and unpublished catalogs, inventories, indexes, and production files and... documentation identifying creators of audiovisual products, their precise relationship to the agency, and the...

  20. Primary and multisensory cortical activity is correlated with audiovisual percepts.

    PubMed

    Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven

    2010-04-01

    Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e., stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion.

  1. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    PubMed Central

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  2. Audiovisual integration for speech during mid-childhood: electrophysiological evidence.

    PubMed

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-12-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception.

  3. Audiovisual integration of speech in a patient with Broca's Aphasia

    PubMed Central

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  4. Audiovisual Delay as a Novel Cue to Visual Distance

    PubMed Central

    Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R.; Tadin, Duje

    2015-01-01

    For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance. PMID:26509795
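
    The physical basis of these delays is simple: sound travels at roughly 343 m/s in air, so the audio lag grows by about 3 ms per metre of event distance. A quick back-of-the-envelope check (approximate values only):

```python
# Back-of-the-envelope check: audio lag grows with event distance because
# sound (~343 m/s in air) travels far slower than light. Values are approximate.
SPEED_OF_SOUND = 343.0  # m/s, typical value at room temperature

for distance_m in (1, 10, 34, 100):
    delay_ms = distance_m / SPEED_OF_SOUND * 1000.0
    print(f"{distance_m:>4} m  ->  audio lags by ~{delay_ms:5.1f} ms")
```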

  5. Event Related Potentials Index Rapid Recalibration to Audiovisual Temporal Asynchrony

    PubMed Central

    Simon, David M.; Noel, Jean-Paul; Wallace, Mark T.

    2017-01-01

    Asynchronous arrival of multisensory information at the periphery is a ubiquitous property of signals in the natural environment due to differences in the propagation time of light and sound. Rapid adaptation to these asynchronies is crucial for the appropriate integration of these multisensory signals, which in turn is a fundamental neurobiological process in creating a coherent perceptual representation of our dynamic world. Indeed, multisensory temporal recalibration has been shown to occur at the single trial level, yet the mechanistic basis of this rapid adaptation is unknown. Here, we investigated the neural basis of rapid recalibration to audiovisual temporal asynchrony in human participants using a combination of psychophysics and electroencephalography (EEG). Consistent with previous reports, participants' perception of audiovisual temporal synchrony on a given trial (t) was influenced by the temporal structure of stimuli on the previous trial (t−1). When examined physiologically, event related potentials (ERPs) were found to be modulated by the temporal structure of the previous trial, manifesting as late differences (>125 ms post second-stimulus onset) in central and parietal positivity on trials with large stimulus onset asynchronies (SOAs). These findings indicate that single trial adaptation to audiovisual temporal asynchrony is reflected in modulations of late evoked components that have previously been linked to stimulus evaluation and decision-making. PMID:28381993

  6. Audiovisual integration of emotional signals from others' social interactions

    PubMed Central

    Piwek, Lukasz; Pollick, Frank; Petrini, Karin

    2015-01-01

    Audiovisual perception of emotions has been typically examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task, as in Experiment 1, while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity. PMID:26005430

  7. Infants' preference for native audiovisual speech dissociated from congruency preference.

    PubMed

    Shaw, Kathleen; Baart, Martijn; Depowski, Nicole; Bortfeld, Heather

    2015-01-01

    Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  8. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  9. Musical expertise induces audiovisual integration of abstract congruency rules.

    PubMed

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Pantev, Christo

    2012-12-12

    Perception of everyday life events relies mostly on multisensory integration. Hence, studying the neural correlates of the integration of multiple senses constitutes an important tool in understanding perception within an ecologically valid framework. The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is generated not by incongruency of the unisensory physical characteristics of the stimulation but by the violation of an abstract congruency rule. The chosen rule, "the higher the pitch of the tone, the higher the position of the circle," was comparable to musical reading. In parallel, plasticity effects due to long-term musical training on this response were investigated by comparing musicians to non-musicians. The applied paradigm was based on an appropriate modification of the multifeatured oddball paradigm incorporating, within one run, deviants based on a multisensory audiovisual incongruent condition and two unisensory mismatch conditions: an auditory and a visual one. Results indicated the presence of an audiovisual incongruency response, generated mainly in frontal regions, an auditory mismatch negativity, and a visual mismatch response. Moreover, results revealed that long-term musical training generates plastic changes in frontal, temporal, and occipital areas that affect this multisensory incongruency response as well as the unisensory auditory and visual mismatch responses.

  10. The development of the perception of audiovisual simultaneity.

    PubMed

    Chen, Yi-Chuan; Shore, David I; Lewis, Terri L; Maurer, Daphne

    2016-06-01

    We measured the typical developmental trajectory of the window of audiovisual simultaneity by testing four age groups of children (5, 7, 9, and 11 years) and adults. We presented a visual flash and an auditory noise burst at various stimulus onset asynchronies (SOAs) and asked participants to report whether the two stimuli were presented at the same time. Compared with adults, children aged 5 and 7 years made more simultaneous responses when the SOAs were beyond ± 200 ms but made fewer simultaneous responses at the 0 ms SOA. The point of subjective simultaneity was located at the visual-leading side, as in adults, by 5 years of age, the youngest age tested. However, the window of audiovisual simultaneity became narrower and response errors decreased with age, reaching adult levels by 9 years of age. Experiment 2 ruled out the possibility that the adult-like performance of 9-year-old children was caused by the testing of a wide range of SOAs. Together, the results demonstrate that the adult-like precision of perceiving audiovisual simultaneity is developed by 9 years of age, the youngest age that has been reported to date. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Audiovisual communication and therapeutic jurisprudence: Cognitive and social psychological dimensions.

    PubMed

    Feigenson, Neal

    2010-01-01

    The effects of audiovisual communications on the emotional and psychological well-being of participants in the legal system have not been previously examined. Using as a framework for analysis what Slobogin (1996) calls internal balancing (of therapeutic versus antitherapeutic effects) and external balancing (of therapeutic jurisprudence [TJ] effects versus effects on other legal values), this brief paper discusses three examples that suggest the complexity of evaluating courtroom audiovisuals in TJ terms. In each instance, audiovisual displays that are admissible based on their arguable probative or explanatory value - day-in-the-life movies, victim impact videos, and computer simulations of litigated events - might well reduce stress and thus improve the psychological well-being of personal injury plaintiffs, survivors, and jurors, respectively. In each situation, however, other emotional and cognitive effects may prove antitherapeutic for the target or other participants, and/or may undermine other important values including outcome accuracy, fairness, and even the conception of the legal decision maker as a moral actor.

  12. What Makes the Difference? Teachers Explore What Must be Taught and What Must be Learned in Order to Understand the Particulate Character of Matter

    NASA Astrophysics Data System (ADS)

    Vikström, Anna

    2014-10-01

    The concept of matter, especially its particulate nature, is acknowledged as one of the key concept areas in learning science. Within the framework of learning studies and variation theory, and with results from science education research as a starting point, six lower secondary school science teachers tried to enhance students' learning by exploring what must be learned in order to understand the concept in a specific way. Variation theory was found to be a useful guiding principle when teachers engage in pedagogical design, analysis of lessons, and evaluation of students' learning, as well as a valuable tool for translating research results into practice.

  13. Creation and validation of web-based food allergy audiovisual educational materials for caregivers.

    PubMed

    Rosen, Jamie; Albin, Stephanie; Sicherer, Scott H

    2014-01-01

    Studies reveal deficits in caregivers' ability to prevent and treat food-allergic reactions with epinephrine and a consumer preference for validated educational materials in audiovisual formats. This study was designed to create brief, validated educational videos on food allergen avoidance and emergency management of anaphylaxis for caregivers of children with food allergy. The study used a stepwise iterative process that began with a 25-item needs assessment survey administered to caregivers and food allergy experts to identify curriculum content. Preliminary videos were drafted, reviewed, and revised based on knowledge and satisfaction surveys given to another cohort of caregivers and health care professionals. The final materials were tested for validation of their educational impact and user satisfaction using pre- and postknowledge tests and satisfaction surveys administered to a convenience sample of 50 caretakers who had not participated in the development stages. The needs assessment identified topics of importance including treatment of allergic reactions and food allergen avoidance. Caregivers in the final validation included mothers (76%), fathers (22%), and other caregivers (2%). Race/ethnicity was white (66%), black (12%), Asian (12%), Hispanic (8%), and other (2%). Knowledge test scores (maximum = 18) increased from a mean of 12.4 preprogram to 16.7 postprogram (p < 0.0001). On a 7-point Likert scale, all satisfaction categories remained above a favorable mean score of 6, indicating participants were overall very satisfied, learned a lot, and found the materials to be informative, straightforward, helpful, and interesting. This web-based audiovisual curriculum on food allergy improved knowledge scores and was well received.

  14. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    PubMed

    Gerson, Sarah A; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  15. Active Drumming Experience Increases Infants’ Sensitivity to Audiovisual Synchrony during Observed Drumming Actions

    PubMed Central

    Gerson, Sarah A.; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition. PMID:26111226

  16. GLOOTT Model: A Pedagogically-Enriched Design Framework of Learning Environment to Improve Higher Order Thinking Skills

    ERIC Educational Resources Information Center

    Tan, Wee Chuen; Aris, Baharuddin; Abu, Mohd Salleh

    2006-01-01

    Learning object design currently leads the instructional technologist towards more effective instructional design, development, and delivery of learning content. There is a considerable amount of literature discussing the potential use of learning object in e-learning. However, most of the works were mainly focused on the standard forms of…

  17. Employing Transformative Learning Theory in the Design and Implementation of a Curriculum for Court-Ordered Participants in a Parent Education Class

    ERIC Educational Resources Information Center

    Taylor, Mariann B.; Hill, Lilian H.

    2016-01-01

    This study sought to analyze the experiences of participants in court-ordered parent education with the ultimate goal to identify a framework, which promotes learning that is transformative. Participants included 11 parents court ordered to attend parent education classes through the Department of Human Services. A basic qualitative design, which…

  18. Management of high-order multiple births: application of lessons learned because of participation in Vermont Oxford Network collaboratives.

    PubMed

    Kantak, Anand D; Grow, Jennifer L; Ohlinger, Judy; Adams, Heather J; Knupp, Amy M; Lavin, Justin P

    2006-11-01

    The delivery and care of sextuplets is complex. Potentially better practices developed as part of the Vermont Oxford Network improvement collaboratives were used to prepare for a sextuplet delivery at Akron Children's Hospital. The team applied potentially better practices learned from the Neonatal Intensive Care Quality Improvement Collaborative 2002 using multidisciplinary teams. There was extensive media coverage of the delivery. The goal was to use nearly all of the potentially better practices aimed at reducing nosocomial infection, chronic lung disease, radiograph use, length of stay, blood gas use, and intraventricular hemorrhage, promoting nutrition, and enriching family-centered care. Of the 97 potentially better practices set by the Neonatal Intensive Care Quality Improvement Collaborative 2002 that the center aimed to use, 96 (99%) were used. This is a blueprint that any center faced with high-order multiple births could use as a reference point to begin planning. The team created a benchmark to achieve in every birth of very low birth weight infants, not just in the special situation of high-order multiple births.

  19. Higher-Order Sensory Cortex Drives Basolateral Amygdala Activity during the Recall of Remote, but Not Recently Learned Fearful Memories.

    PubMed

    Cambiaghi, Marco; Grosso, Anna; Likhtik, Ekaterina; Mazziotti, Raffaele; Concina, Giulia; Renna, Annamaria; Sacco, Tiziana; Gordon, Joshua A; Sacchetti, Benedetto

    2016-02-03

    Negative experiences are quickly learned and long remembered. Key unresolved issues in the field of emotional memory include identifying the loci and dynamics of memory storage and retrieval. The present study examined neural activity in the higher-order auditory cortex Te2 and basolateral amygdala (BLA) and their crosstalk during the recall of recent and remote fear memories. To this end, we obtained local field potential and multiunit activity recordings in Te2 and BLA of rats that underwent recall at 24 h and 30 d after the association of an acoustic conditioned stimulus (CS, tone) and an aversive unconditioned stimulus (US, electric shock). Here we show that, during the recall of remote auditory threat memories in rats, the activity of Te2 and the BLA is highly synchronized in the theta frequency range. This functional connectivity stems from memory consolidation processes because it is present during remote, but not recent, memory retrieval. Moreover, the observed increase in synchrony is cue and region specific. A preponderant Te2-to-BLA directionality characterizes this dialogue, and the percentage of time Te2 theta leads the BLA during remote memory recall correlates with a faster latency to freeze to the auditory conditioned stimulus. The blockade of this information transfer via Te2 inhibition with muscimol prevents any retrieval-evoked neuronal activity in the BLA, and animals are unable to retrieve remote memories. We conclude that memories stored in higher-order sensory cortices drive BLA activity when distinguishing between learned threatening and neutral stimuli. How and where in the brain do we store the affective/motivational significance of sensory stimuli acquired through life experiences? Scientists have long investigated how "limbic" structures, such as the amygdala, process affective stimuli. Here we show that retrieval of well-established threat memories requires the functional interplay between higher-order components of the auditory cortex and the
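
    As a generic illustration of how theta-range synchrony between two field-potential signals is often quantified (band-pass filtering followed by a phase-locking value computed from Hilbert phases), the sketch below uses synthetic signals, an assumed 4-12 Hz band, and an assumed sampling rate; it is not the study's own synchrony or directionality analysis.

```python
# Generic sketch: theta-band phase-locking value (PLV) between two signals.
# The synthetic signals, the 4-12 Hz band, and the sampling rate are assumptions;
# the study's own synchrony and directionality measures may differ.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                     # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(7)

# Two noisy signals sharing a common 7 Hz (theta) component
theta = np.sin(2 * np.pi * 7 * t)
sig_a = theta + 0.5 * rng.normal(size=t.size)   # stand-in for one recording site
sig_b = theta + 0.5 * rng.normal(size=t.size)   # stand-in for the other site

b, a = butter(4, [4 / (fs / 2), 12 / (fs / 2)], btype="band")
phase_a = np.angle(hilbert(filtfilt(b, a, sig_a)))
phase_b = np.angle(hilbert(filtfilt(b, a, sig_b)))

plv = np.abs(np.exp(1j * (phase_a - phase_b)).mean())
print(f"Theta-band phase-locking value: {plv:.2f}")
```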

  20. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... copied as follows: (a) USIA audiovisual records prepared for dissemination abroad that NARA determines... audiovisual records prepared for dissemination abroad that NARA determines may have copyright protection or.... (c) If NARA determines that a USIA audiovisual record prepared for dissemination abroad may have...