Audio-video decision support for patients: the documentary genre as a basis for decision aids.
Volandes, Angelo E; Barry, Michael J; Wood, Fiona; Elwyn, Glyn
2013-09-01
Decision support tools are increasingly using audio-visual materials. However, disagreement exists about the use of audio-visual materials, as they may be subjective and biased. This is a literature review of the major texts of documentary film studies, undertaken to extrapolate issues of objectivity and bias from film to decision support tools. The key features of documentary films are that they attempt to portray real events and that the attempted reality is always filtered through the lens of the filmmaker. The same can be said of decision support tools that use audio-visual materials. Three concerns arising from documentary film studies, as they apply to the use of audio-visual materials in decision support tools, are whose perspective matters (stakeholder bias), how to choose among audio-visual materials (selection bias) and how to ensure objectivity (editorial bias). Decision science needs to start a debate about how audio-visual materials are to be used in decision support tools. Simply because audio-visual materials may be subjective and open to bias does not mean that we should not use them. Methods need to be found to ensure consensus around balance and editorial control, such that audio-visual materials can be used. © 2011 John Wiley & Sons Ltd.
22 CFR 61.3 - Certification and authentication criteria.
Code of Federal Regulations, 2014 CFR
2014-04-01
... AUDIO-VISUAL MATERIALS § 61.3 Certification and authentication criteria. (a) The Department shall certify or authenticate audio-visual materials submitted for review as educational, scientific and... of the material. (b) The Department will not certify or authenticate any audio-visual material...
22 CFR 61.3 - Certification and authentication criteria.
Code of Federal Regulations, 2013 CFR
2013-04-01
... AUDIO-VISUAL MATERIALS § 61.3 Certification and authentication criteria. (a) The Department shall certify or authenticate audio-visual materials submitted for review as educational, scientific and... of the material. (b) The Department will not certify or authenticate any audio-visual material...
22 CFR 61.3 - Certification and authentication criteria.
Code of Federal Regulations, 2012 CFR
2012-04-01
... AUDIO-VISUAL MATERIALS § 61.3 Certification and authentication criteria. (a) The Department shall certify or authenticate audio-visual materials submitted for review as educational, scientific and... of the material. (b) The Department will not certify or authenticate any audio-visual material...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2014 CFR
2014-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2013 CFR
2013-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2011 CFR
2011-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2010 CFR
2010-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2012 CFR
2012-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
22 CFR 61.1 - Purpose.
Code of Federal Regulations, 2014 CFR
2014-04-01
... DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.1... educational, scientific and cultural audio-visual materials between nations by providing favorable import... issuance or authentication of a certificate that the audio-visual material for which favorable treatment is...
22 CFR 61.1 - Purpose.
Code of Federal Regulations, 2012 CFR
2012-04-01
... DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.1... educational, scientific and cultural audio-visual materials between nations by providing favorable import... issuance or authentication of a certificate that the audio-visual material for which favorable treatment is...
22 CFR 61.1 - Purpose.
Code of Federal Regulations, 2013 CFR
2013-04-01
... DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.1... educational, scientific and cultural audio-visual materials between nations by providing favorable import... issuance or authentication of a certificate that the audio-visual material for which favorable treatment is...
22 CFR 61.2 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS... certification of United States produced audio-visual materials under the provisions of the Beirut Agreement... staff with authority to issue Certificates or Importation Documents. Audio-visual materials—means: (1...
22 CFR 61.2 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS... certification of United States produced audio-visual materials under the provisions of the Beirut Agreement... staff with authority to issue Certificates or Importation Documents. Audio-visual materials—means: (1...
22 CFR 61.2 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS... certification of United States produced audio-visual materials under the provisions of the Beirut Agreement... staff with authority to issue Certificates or Importation Documents. Audio-visual materials—means: (1...
ERIC Educational Resources Information Center
Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.
This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…
Selected Audio-Visual Materials for Consumer Education. [New Version].
ERIC Educational Resources Information Center
Johnston, William L.
Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…
22 CFR 61.5 - Authentication procedures-Imports.
Code of Federal Regulations, 2012 CFR
2012-04-01
... AUDIO-VISUAL MATERIALS § 61.5 Authentication procedures—Imports. (a) Applicants seeking Department authentication of foreign produced audio-visual materials shall submit to the Department a bona fide foreign...
22 CFR 61.5 - Authentication procedures-Imports.
Code of Federal Regulations, 2013 CFR
2013-04-01
... AUDIO-VISUAL MATERIALS § 61.5 Authentication procedures—Imports. (a) Applicants seeking Department authentication of foreign produced audio-visual materials shall submit to the Department a bona fide foreign...
22 CFR 61.5 - Authentication procedures-Imports.
Code of Federal Regulations, 2014 CFR
2014-04-01
... AUDIO-VISUAL MATERIALS § 61.5 Authentication procedures—Imports. (a) Applicants seeking Department authentication of foreign produced audio-visual materials shall submit to the Department a bona fide foreign...
22 CFR 61.4 - Certification procedures-Exports.
Code of Federal Regulations, 2014 CFR
2014-04-01
... AUDIO-VISUAL MATERIALS § 61.4 Certification procedures—Exports. (a) Applicants seeking certification of U.S. produced audio-visual materials shall submit to the Department a completed Application Form for...
22 CFR 61.4 - Certification procedures-Exports.
Code of Federal Regulations, 2013 CFR
2013-04-01
... AUDIO-VISUAL MATERIALS § 61.4 Certification procedures—Exports. (a) Applicants seeking certification of U.S. produced audio-visual materials shall submit to the Department a completed Application Form for...
22 CFR 61.4 - Certification procedures-Exports.
Code of Federal Regulations, 2012 CFR
2012-04-01
... AUDIO-VISUAL MATERIALS § 61.4 Certification procedures—Exports. (a) Applicants seeking certification of U.S. produced audio-visual materials shall submit to the Department a completed Application Form for...
Catalog: Wilmington College Peace Resource Center. Revised Edition.
ERIC Educational Resources Information Center
Wilmington Coll., OH. Peace Resource Center.
A bibliography of low-cost peace education resources for individuals and organizations, this catalogue lists audio-visual materials, archival materials, and books. The audio-visual materials and the books are grouped into some or all of the following categories: atomic bombings, nuclear war, the arms race, anti-war, civil defense, peace education,…
Audio-Visual Materials in Adult Consumer Education: An Annotated Bibliography.
ERIC Educational Resources Information Center
Forgue, Raymond E.; And Others
Designed to provide a quick but thorough reference for consumer educators of adults to use when choosing audio-visual materials, this annotated bibliography includes eighty-five titles from the currently available 1,500 films, slidesets, cassettes, records, and transparencies. (Materials were rejected because they were out-of-date; not relevant to…
An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.
ERIC Educational Resources Information Center
Albert, Richard N.
Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…
22 CFR 61.2 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.2 Definitions. Department—means the Department of State. Applicant— means: (1) The United States... certification of United States produced audio-visual materials under the provisions of the Beirut Agreement...
ERIC Educational Resources Information Center
San Francisco Unified School District, CA.
This is a selected bibliography of some good and some outstanding audio-visual educational materials in the library of the Educational Materials Bureau, Audio-Visual Education Section, that may be considered of particular interest in the study of black Americans. The bibliography is arranged alphabetically within these subject areas: I. African…
Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi
2015-04-01
This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.
22 CFR 61.2 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Definitions. 61.2 Section 61.2 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS... IA-862). Basic rights—means the world-wide non-restrictive ownership rights in audio-visual materials...
ERIC Educational Resources Information Center
San Francisco Unified School District, CA.
This is a selected bibliography of some good and some outstanding audio-visual educational materials in the library of the Educational Materials Bureau, Audio-Visual Education Section, that may be considered of particular interest in the study of Asians and Asian-Americans. The bibliography is arranged alphabetically within the following subject…
Spanish Heritage and Influence in the Western Hemisphere.
ERIC Educational Resources Information Center
San Francisco Unified School District, CA.
This is a selected bibliography of some good and some outstanding audio-visual educational materials in the library of the Educational Materials Bureau, Audio-Visual Educational Section, that may be considered of particular interest in the study of Spanish heritage and influence in the Western Hemisphere. The bibliography is arranged…
ERIC Educational Resources Information Center
Ellington, Henry
A sequel to the booklet "A Review of the Different Types of Instructional Materials Available to Teachers and Lecturers," this booklet begins by looking at the various ways in which linked audio and still visual materials can be used in different instructional situations, i.e., mass instruction, individualized learning, and group learning. Some of…
ERIC Educational Resources Information Center
Imani, Sahar Sadat Afshar
2013-01-01
Modular EFL Educational Program has managed to offer specialized language education in two specific fields: Audio-visual Materials Translation and Translation of Deeds and Documents. However, no explicit empirical studies can be traced on both internal and external validity measures as well as the extent of compatibility of both courses with the…
Guidelines for the Production of Audio Materials for Print Handicapped Readers.
ERIC Educational Resources Information Center
National Library of Australia, Canberra.
Procedural guidelines developed by the Audio Standards Committee of the National Library of Australia to help improve the overall quality of production of audio materials for visually handicapped readers are presented. This report covers the following areas: selection of narrators and the narration itself; copyright; recording of books, magazines,…
ERIC Educational Resources Information Center
Radel, David
This paper provides an inventory and summary of current and planned international information clearing house services in the field of population/family planning, worldwide. Special emphasis is placed on services relating to audio-visual aids, educational materials, and information/education/communication support, as these items and activities have…
ERIC Educational Resources Information Center
Kirkwood, Adrian
The first of two papers in this report, "The Present and the Future of Audio-Visual Production Centres in Distance Universities," describes changes in the Open University in Great Britain. The Open University's television and audio materials are increasingly being distributed to students on cassette. Although transmission is still…
ERIC Educational Resources Information Center
Bello, S.; Goni, Umar
2016-01-01
This is a survey study, designed to determine the relationship between audio-visual materials and environmental factors on students' academic performance in Senior Secondary Schools in Borno State: Implications for Counselling. The study set two research objectives, and tested two research hypotheses. The population of this study is 1,987 students…
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, Rohini; Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA; Chung, Theodore D.
2006-07-01
Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
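As a rough illustration of the residual-motion metric this abstract describes (the standard deviation of the respiratory signal within the gating window), the following minimal Python sketch evaluates displacement-based gating at several duty cycles on a synthetic breathing trace. The trace, sampling rate, and exhalation-window heuristic are illustrative assumptions, not the study's protocol:

```python
import numpy as np

def residual_motion(trace, duty_cycle):
    """Residual motion for displacement-based gating: the standard
    deviation of the respiratory signal within the gating window.
    Gating near exhalation is approximated by admitting the
    lowest-displacement samples until the duty cycle is reached
    (an illustrative stand-in for a clinical amplitude threshold)."""
    n_beam_on = max(1, int(round(duty_cycle * trace.size)))
    gated = np.sort(trace)[:n_beam_on]  # samples nearest exhalation
    return gated.std()

# Synthetic breathing trace: ~4 s period sinusoid plus drift and noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 60 * 25)  # 60 s sampled at 25 Hz
trace = np.sin(2 * np.pi * t / 4) + 0.05 * t / 60 + 0.05 * rng.standard_normal(t.size)

for dc in (0.2, 0.3, 0.5, 0.7):
    print(f"duty cycle {dc:.0%}: residual motion {residual_motion(trace, dc):.3f}")
```

Consistent with the abstract's conclusion, widening the gating window (a larger duty cycle) admits more of the breathing excursion and so increases residual motion.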
ERIC Educational Resources Information Center
Yuan, Yifeng; Shen, Huizhong
2016-01-01
This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…
ERIC Educational Resources Information Center
Moon, Donald K.
This document is one in a series of reports which reviews instructional materials and equipment and offers suggestions about how to select equipment. Topics discussed include: (1) the general criteria for audio-visual equipment selection such as performance, safety, comparability, sturdiness and repairability; and (2) specific equipment criteria…
Focus on Hinduism: Audio-Visual Resources for Teaching Religion. Occasional Publication No. 23.
ERIC Educational Resources Information Center
Dell, David; And Others
The guide presents annotated lists of audio and visual materials about the Hindu religion. The authors point out that Hinduism cannot be comprehended totally by reading books; thus the resources identified in this guide will enhance understanding based on reading. The guide is intended for use by high school and college students, teachers,…
ERIC Educational Resources Information Center
Cooper, William
The material presented here is the result of a review of the Technical Development Plan of the National Library of Medicine, made with the object of describing the role of audiovisual materials in medical education, research and service, and particularly in the continuing education of physicians and allied health personnel. A historical background…
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
Challenges of Using Audio-Visual Aids as Warm-Up Activity in Teaching Aviation English
ERIC Educational Resources Information Center
Sahin, Mehmet; Sule, St.; Seçer, Y. E.
2016-01-01
This study aims to find out the challenges encountered in the use of video as audio-visual material as a warm-up activity in aviation English course at high school level. This study is based on a qualitative study in which focus group interview is used as the data collection procedure. The participants of focus group are four instructors teaching…
Effects of audio-visual presentation of target words in word translation training
NASA Astrophysics Data System (ADS)
Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko
2004-05-01
Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials, and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentations. Identification accuracy of those words produced by two talkers was also assessed. During pretest, the accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translate the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]
The Best Colors for Audio-Visual Materials for More Effective Instruction.
ERIC Educational Resources Information Center
Start, Jay
A number of variables may affect the ability of students to perceive, and learn from, instructional materials. The objectives of the study presented here were to determine the projected color that provided the best visual acuity for the viewer, and the necessary minimum exposure time for achieving maximum visual acuity. Fifty…
Schwartz, Jean-Luc; Savariaux, Christophe
2014-01-01
An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call “preparatory gestures”. However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call “comodulatory gestures” providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction. PMID:25079216
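To make the abstract's notion of a temporal integration window concrete, here is a minimal Python sketch that classifies measured audio-visual asynchronies against the natural range reported above (roughly 40 ms audio lead to 200 ms audio lag). The window bounds are parameters taken from the abstract and the syllable measurements are invented example values, not the paper's data:

```python
def within_integration_window(asynchrony_ms, lead_bound=-40.0, lag_bound=200.0):
    """asynchrony_ms < 0 means audio leads vision; > 0 means audio lags.
    Bounds follow the natural range reported in the abstract and are
    illustrative parameters, not model constants."""
    return lead_bound <= asynchrony_ms <= lag_bound

# Invented syllable-level asynchronies (audio onset minus visual onset, ms).
measurements = {"pa": -20.0, "ta": 10.0, "ka": 70.0, "ba": 220.0}
for syllable, asyn in measurements.items():
    verdict = "integrated" if within_integration_window(asyn) else "outside window"
    print(f"{syllable}: {asyn:+.0f} ms -> {verdict}")
```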
ERIC Educational Resources Information Center
Drood, Pooya; Asl, Hanieh Davatgari
2016-01-01
The ways in which tasks in classrooms have developed and proceeded have received great attention in the field of language teaching and learning, in the sense that they draw learners' attention to competing features such as accuracy, fluency, and complexity. English audiovisual and audio recorded materials have been widely used by teachers and…
Grigsby, Timothy J; Unger, Jennifer B; Molina, Gregory B; Baron, Mel
2017-01-01
Dementia is a clinical syndrome characterized by progressive degeneration in cognitive ability that limits the capacity for independent living. Interventions are needed to target the medical, social, psychological, and knowledge needs of caregivers and patients. This study used a mixed methods approach to evaluate the effectiveness of a dementia novela presented in an audio-visual format in improving dementia attitudes, beliefs and knowledge. Adults from Los Angeles (N = 42, 83% female, 90% Hispanic/Latino, mean age = 42.2 years, 41.5% with less than a high school education) viewed an audio-visual novela on dementia. Participants completed surveys immediately before and after viewing the material. The novela produced significant improvements in overall knowledge (t(41) = -9.79, p < .0001) and led to positive increases in specific attitudes toward people with dementia, but not in beliefs that screening would be beneficial. Qualitative results provided concordant and discordant evidence for the quantitative findings. Results indicate that an audio-visual novela can be useful for improving attitudes and knowledge about dementia, but further work is needed to investigate the relation with health disparities in screening and treatment behaviors. Audio-visual novelas are an innovative format for health education and can change attitudes and knowledge about dementia.
Audio-visual interactions in environment assessment.
Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata
2015-08-01
The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants of the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. There was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases), when conditions (a) and (b) were compared. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people could differentiate audio-visual representations of a given place in the environment based on the sound sources' composition rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.
Desantis, Andrea; Haggard, Patrick
2016-01-01
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
ERIC Educational Resources Information Center
Beauchamp, Darrell G.; And Others
This document contains selected conference papers all relating to visual literacy. The topics include: process issues in visual literacy; interpreting visual statements; what teachers need to know; multimedia presentations; distance education materials for correctional use; visual culture; audio-visual interaction in desktop multimedia; the…
The priming function of in-car audio instruction.
Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh
2018-05-01
Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road scene instruction. Here, the relative priming power of visual, audio, and multisensory road scene instructions was assessed. In a lab-based study, participants responded to target road scene turns following visual, audio, or multisensory road turn primes that were congruent or incongruent with the target in direction, or control primes. All types of instruction (visual, audio, and multisensory) were successful in priming responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. Results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road instruction primes can be timed to co-occur.
Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap
Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya
2013-01-01
It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549
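The point of subjective synchrony discussed in this abstract is commonly estimated by fitting a curve to simultaneity judgments across stimulus onset asynchronies (SOAs); a shift of the fitted peak between adaptation conditions quantifies recalibration. A hedged Python sketch with invented response data, not the paper's own procedure or numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, peak, pss, width):
    """Proportion of 'simultaneous' responses as a function of SOA."""
    return peak * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)  # ms; + = audio lags
p_simultaneous = np.array([0.10, 0.30, 0.75, 0.90, 0.70, 0.35, 0.10])  # invented data

(peak, pss, width), _ = curve_fit(gaussian, soas, p_simultaneous, p0=(1.0, 0.0, 100.0))
print(f"PSS = {pss:.1f} ms")
# Fitting each adapted audio-visual pair separately and comparing the PSS
# values would quantify the concurrent, opposite recalibrations reported.
```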
Electrophysiological evidence for Audio-visuo-lingual speech integration.
Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc
2018-01-31
Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual, lipread, speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
Video-assisted segmentation of speech and audio track
NASA Astrophysics Data System (ADS)
Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.
1999-08-01
Video database research is commonly concerned with the storage and retrieval of visual information involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for an effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio based segmentation can be combined with video scene shot detection in order to achieve partitioning of the multimedia material into semantically significant segments.
Effects of Text Modality in Multimedia Presentations on Written and Oral Performance
ERIC Educational Resources Information Center
Broek, G. S. E.; Segers, E.; Verhoeven, L.
2014-01-01
A common assumption in multimedia design is that audio-visual materials with pictures and spoken narrations lead to better learning outcomes than visual-only materials with pictures and on-screen text. The present study questions the generalizability of this modality effect. We explored how modality effects change over time, taking into account…
Video content parsing based on combined audio and visual information
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-08-01
While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, the audio scene is categorized and indexed as one of the basic audio types while a visual shot is presented by keyframes and associate image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfying video indexing results.
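In the spirit of the segmentation scheme this abstract describes (detecting abrupt changes in audio and visual features, then merging the results into an index), here is a minimal Python sketch. The histogram-difference and RMS-energy-jump detectors and their thresholds are illustrative stand-ins, not the authors' actual features:

```python
import numpy as np

def shot_boundaries(frame_histograms, threshold=0.4):
    """frame_histograms: (n_frames, n_bins) normalized color histograms.
    A boundary is declared where consecutive histograms differ sharply."""
    diffs = np.abs(np.diff(frame_histograms, axis=0)).sum(axis=1)
    return np.nonzero(diffs > threshold)[0] + 1  # frame indices

def audio_scene_changes(samples, sr, win_s=0.5, ratio=3.0):
    """Flag window boundaries where RMS energy jumps by more than `ratio` x."""
    win = int(win_s * sr)
    n = samples.size // win
    rms = np.sqrt((samples[: n * win].reshape(n, win) ** 2).mean(axis=1)) + 1e-12
    jumps = np.nonzero((rms[1:] / rms[:-1] > ratio) | (rms[:-1] / rms[1:] > ratio))[0] + 1
    return jumps * win_s  # boundary times in seconds

# An index table would then be built by converting shot boundaries to seconds
# via the frame rate, merging them with the audio change times, and labeling
# each resulting segment with its audio category and keyframes.
```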
Effects of aging on audio-visual speech integration.
Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric
2014-10-01
This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.
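The fuzzy-logical model of perception (FLMP) invoked in this abstract combines auditory and visual evidence multiplicatively, with the response probability given by the relative support for each alternative. A two-alternative Python sketch with illustrative support values:

```python
def flmp_response_probability(a, v):
    """Two-alternative FLMP: auditory support a and visual support v
    (truth values in [0, 1]) multiply, and the response probability is
    the relative support for the alternative."""
    return (a * v) / (a * v + (1.0 - a) * (1.0 - v))

# Clear visual evidence dominates ambiguous audio:
print(flmp_response_probability(a=0.5, v=0.9))  # 0.9
# Fully ambiguous input from both modalities:
print(flmp_response_probability(a=0.5, v=0.5))  # 0.5
```

Within this framework, an "increased weight of audition" in older adults corresponds to fitted auditory support values that pull the response probability more strongly than the visual ones.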
Code of Federal Regulations, 2011 CFR
2011-07-01
..., utilities, and built-in equipment, and any necessary enclosures or structures to house the machinery... necessary furniture; printed, published and audio-visual instructional materials; telecommunications...
47 CFR 87.483 - Audio visual warning systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 5 2014-10-01 2014-10-01 false Audio visual warning systems. 87.483 Section 87... AVIATION SERVICES Stations in the Radiodetermination Service § 87.483 Audio visual warning systems. An audio visual warning system (AVWS) is a radar-based obstacle avoidance system. AVWS activates...
Bibliography of Multi-Ethnic and Sex-Fair Resource Materials.
ERIC Educational Resources Information Center
Massachusetts State Dept. of Education, Boston. Bureau of Equal Educational Opportunities.
This annotated bibliography lists both nondiscriminatory instructional materials (largely audio-visual) for classroom use and works for teachers' use that promote multi-ethnic and sex fair education. The materials listed include films, filmstrips, slide presentations and video tapes, bibliographies of curriculum materials, books, handbooks and…
[Intermodal timing cues for audio-visual speech recognition].
Hashimoto, Masahiro; Kumashiro, Masaharu
2004-06-01
The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under the audio-delay condition of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.
Rosemann, Stephanie; Thiel, Christiane M
2018-07-15
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.
Design and Usability Testing of an Audio Platform Game for Players with Visual Impairments
ERIC Educational Resources Information Center
Oren, Michael; Harding, Chris; Bonebright, Terri L.
2008-01-01
This article reports on the evaluation of a novel audio platform game that creates a spatial, interactive experience via audio cues. A pilot study with players with visual impairments, and usability testing comparing the visual and audio game versions using both sighted players and players with visual impairments, revealed that all the…
Linguistic experience and audio-visual perception of non-native fricatives.
Wang, Yue; Behne, Dawn M; Jiang, Haisheng
2008-09-01
This study examined the effects of linguistic experience on audio-visual (AV) perception of non-native (L2) speech. Canadian English natives and Mandarin Chinese natives differing in degree of English exposure [long and short length of residence (LOR) in Canada] were presented with English fricatives of three visually distinct places of articulation: interdentals nonexistent in Mandarin and labiodentals and alveolars common in both languages. Stimuli were presented in quiet and in a cafe-noise background in four ways: audio only (A), visual only (V), congruent AV (AVc), and incongruent AV (AVi). Identification results showed that overall performance was better in the AVc than in the A or V condition and better in quiet than in cafe noise. While the Mandarin long LOR group approximated the native English patterns, the short LOR group showed poorer interdental identification, more reliance on visual information, and greater AV-fusion with the AVi materials, indicating the failure of L2 visual speech category formation with the short LOR non-natives and the positive effects of linguistic experience with the long LOR non-natives. These results point to an integrated network in AV speech processing as a function of linguistic background and provide evidence to extend auditory-based L2 speech learning theories to the visual domain.
Selections from Literacy Materials in Asia and the Pacific.
ERIC Educational Resources Information Center
Asian Cultural Centre for UNESCO, Tokyo (Japan).
As part of a project in the developing nations of the Asian Pacific region to promote the use and improvement of newly acquired literacy skills, exemplary reading-related materials for this population were gathered from a number of countries. A selection of posters, booklets, audio-visual materials, games, and other printed material are presented…
Culture through Comparison: Creating Audio-Visual Listening Materials for a CLIL Course
ERIC Educational Resources Information Center
Zhyrun, Iryna
2016-01-01
Authentic listening has become a part of CLIL materials, but it can be difficult to find listening materials that perfectly match the language level, length requirements, content, and cultural context of a course. The difficulty of finding appropriate materials online, financial limitations posed by copyright fees, and necessity to produce…
ERIC Educational Resources Information Center
Carrier, Anne; And Others
1986-01-01
Provides an annotated list of instructional resources for teaching about the Holocaust in secondary schools. Included are textbooks, multi-disciplinary units, eyewitness accounts, fiction, poetry, anthologies, and audio visual materials. (JDH)
Selected Audio-Visual Materials for Consumer Education.
ERIC Educational Resources Information Center
Oppenheim, Irene
This monograph provides an annotated listing of suggested audiovisual materials which teachers should consider as they plan consumer education programs. The materials are divided into a general section on consumer education and a section on specific topics, such as credit, decision making, health, insurance, money management, and others. The…
Robot Command Interface Using an Audio-Visual Speech Recognition System
NASA Astrophysics Data System (ADS)
Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy
In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command recognition system using audio-visual information. The system is expected to control the laparoscopic robot da Vinci. The audio signal is treated using the Mel Frequency Cepstral Coefficients parametrization method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.
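A hedged sketch of the audio front end this abstract names (MFCC parametrization), using the librosa library; the file name, sampling rate, and coefficient count are illustrative assumptions, and the paper's exact analysis settings are not given:

```python
import librosa

# Hypothetical recording of a spoken command; sampling rate is an assumption.
y, sr = librosa.load("command_utterance.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
print(mfcc.shape)

# The visual stream would contribute per-frame coordinates of MPEG-4 facial
# feature points along the outer lip contour, from a separate lip tracker
# not shown here.
```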
The VTLA System of Course Delivery and Faculty Development in Materials Education
NASA Technical Reports Server (NTRS)
Berrettini, Robert; Roy, Rustum
1996-01-01
There is a national need for high-quality, upper division courses that address critical topics in materials synthesis, particularly those beyond the present expertise of the typical university department's faculty. A new project has been started to test a novel distance education and faculty development system, called Video Tape Live Audio (VTLA). This, if successful, would at once enlarge the national Materials Science and Engineering (MSE) student cohort studying materials synthesis and develop faculty expertise at the receiving sites. The mechanics of the VTLA scheme are as follows: A course is designed in the field selected for emphasis and for which there is likely to be considerable demand, in this example 'Ceramic Materials Synthesis: Theory and Case Studies'. One of the very best researcher/teachers records lectures of TV studio quality with appropriate visuals. Universities and colleges which wish to offer the course agree to offer it at the same hour at least once a week. The videotaped lectures and accompanying text, readings and visuals are shipped to the professor in charge, who has an appropriate background. The professor arranges the classroom TV presentation equipment and supervises the course. Video lectures are played during regular course hours twice a week with time for discussion by the supervising professor. Typically, the third weekly classroom period is scheduled by all sites at a common designated hour, during which the course author/presenter answers questions, provides greater depth, etc., on a live audio link to all course sites. Questions are submitted by fax and e-mail prior to the audio tutorial. The coordinating professors at various sites have separate audio teleconferences at the beginning and end of the course, dealing with the philosophical and pedagogical approach to the course, content and mechanics. Following service once or twice as an 'apprentice' to the course, the coordinating professors may then offer it without the necessity of the live audio tutorial.
ERIC Educational Resources Information Center
Di Francesco, Loretta; Smith, Philip D., Jr.
1971-01-01
This evaluation of two programs of materials used in introductory French classes tests two basic hypotheses: (1) pretests are good predictors of subsequent French achievement at the junior high school level, and (2) students in different programs will achieve to the same degree on a final French test. Results of the groups using the "audiolingual"…
Selected Mental Health Audiovisuals.
ERIC Educational Resources Information Center
National Inst. of Mental Health (DHEW), Rockville, MD.
Presented are approximately 2,300 abstracts on audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…
Code of Federal Regulations, 2010 CFR
2010-07-01
... COMMITTEE FOR PURCHASE FROM PEOPLE WHO ARE BLIND OR SEVERELY DISABLED 8-PUBLIC AVAILABILITY OF AGENCY... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable...
Code of Federal Regulations, 2011 CFR
2011-07-01
... COMMITTEE FOR PURCHASE FROM PEOPLE WHO ARE BLIND OR SEVERELY DISABLED 8-PUBLIC AVAILABILITY OF AGENCY... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable...
CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset
Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-value intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. The human recognition of intended emotion for the audio-only, visual-only, and audio-visual data is 40.9%, 58.2% and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738
Educating for Peace: A Resource Guide for Teachers and Community Workers.
ERIC Educational Resources Information Center
Educating for Peace, Ottawa (Ontario).
This resource guide provides educators and community workers with a listing of written materials, audio-visual materials, and Ottawa-Carleton (Canada) area speakers dealing with peace education. The first of three parts lists 27 books, kits, and curriculum materials. For each listing, appropriate grade level, annotation, ordering address, and…
Code of Federal Regulations, 2013 CFR
2013-01-01
... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...
Code of Federal Regulations, 2011 CFR
2011-01-01
... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...
Code of Federal Regulations, 2012 CFR
2012-01-01
... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...
7 CFR 47.14 - Prehearing conferences.
Code of Federal Regulations, 2012 CFR
2012-01-01
... determines that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent.... If the examiner determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the examiner...
Code of Federal Regulations, 2014 CFR
2014-01-01
... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...
Code of Federal Regulations, 2012 CFR
2012-01-01
... which the deposition is to be conducted (telephone, audio-visual telecommunication, or by personal...) The place of the deposition; (iii) The manner of the deposition (telephone, audio-visual... shall be conducted in the manner (telephone, audio-visual telecommunication, or personal attendance of...
Code of Federal Regulations, 2010 CFR
2010-01-01
... that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice.... If the Judge determines that a conference conducted by audio-visual telecommunication would... correspondence, the conference shall be conducted by audio-visual telecommunication unless the Judge determines...
Power saver circuit for audio/visual signal unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Right, R. W.
1985-02-12
A combined audio and visual signal unit with the audio and visual components actuated alternately and powered over a single cable pair in such a manner that only one of the audio and visual components is drawing power from the power supply at any given instant. Thus, the power supply is never called upon to provide more energy than that drawn by the one of the components having the greater power requirement. This is particularly advantageous when several combined audio and visual signal units are coupled in parallel on one cable pair. Typically, the signal unit may comprise a horn and a strobe light for a fire alarm signalling system.
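The alternation principle is easy to see in a toy simulation: because the horn and strobe never draw current at the same instant, the supply's peak load equals the larger single-component load rather than the sum. A minimal sketch follows; the wattages and 50/50 duty cycle are illustrative, not taken from the patent.

```python
# Toy model of the alternating audio/visual drive described above.
HORN_W, STROBE_W = 6.0, 4.0  # hypothetical power draws in watts

def instantaneous_draw(t_ms, period_ms=100, horn_fraction=0.5):
    """Supply load at time t: exactly one component is on at any instant."""
    phase = (t_ms % period_ms) / period_ms
    return HORN_W if phase < horn_fraction else STROBE_W

peak = max(instantaneous_draw(t) for t in range(1000))
print(peak)  # 6.0 W, never HORN_W + STROBE_W = 10.0 W
```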
Code of Federal Regulations, 2014 CFR
2014-07-01
... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable materials (e.g., magnetic tape or disk), among others. (g) The term search includes all time spent looking... time spent resolving general legal or policy issues regarding the application of exemptions. [54 FR...
Code of Federal Regulations, 2012 CFR
2012-07-01
... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable materials (e.g., magnetic tape or disk), among others. (g) The term search includes all time spent looking... time spent resolving general legal or policy issues regarding the application of exemptions. [54 FR...
Multinational Exchange Mechanisms of Educational Audio-Visual Materials. Appendixes.
ERIC Educational Resources Information Center
Center of Studies and Realizations for Permanent Education, Paris (France).
These appendixes contain detailed information about the existing audiovisual material exchanges which served as the basis for the analysis contained in the companion report. Descriptions of the objectives, structure, financing and services of the following national and international organizations are included: (1) Educational Resources Information…
Code of Federal Regulations, 2010 CFR
2010-04-01
... DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.1 Purpose. The Department of State administers the “Beirut Agreement of 1948”, a multinational treaty... Material of an Educational, Scientific and Cultural Character. This Agreement facilitates the free flow of...
Rosser, James C; Fleming, Jeffrey P; Legare, Timothy B; Choi, Katherine M; Nakagiri, Jamie; Griffith, Elliot
2017-12-22
To design and develop a distance learning (DL) system for the transference of laparoscopic surgery knowledge and skill constructed from off-the-shelf materials and commercially available software. Minimally invasive surgery offers significant benefits over traditional surgical procedures, but adoption rates for many procedures are low; skill and confidence deficits are two of the culprits. DL combined with simulation training and telementoring may address these issues at scale. The system must be built to meet the instruction requirements of a proven laparoscopic skills course (Top Gun). Thus, the rapid sharing of multimedia educational materials, secure two-way audio/visual communications, and annotation and recording capabilities are requirements for success. These requirements are more in line with telementoring missions than standard distance learning efforts. A DL system with telementor, classroom, and laboratory stations was created. The telementor station consists of a desktop computer and a headset with microphone. For the classroom station, a laptop is connected to a digital projector that displays the remote instructor and content. A tripod-mounted webcam provides classroom visualization and a Bluetooth® wireless speaker establishes audio. For the laboratory station, a laptop with a universal serial bus (USB) expander is combined with a tabletop laparoscopic skills trainer, a headset with microphone, two webcams, and a Bluetooth® speaker. The cameras are mounted on a standard tripod and an adjustable gooseneck camera mount clamp to provide internal and external views of the training area. Internet meeting software provides audio/visual communications, including transmission of educational materials. A DL system was created using off-the-shelf materials and commercially available software. It will allow investigations to evaluate the effectiveness of laparoscopic surgery knowledge and skill transfer utilizing DL techniques.
Annotated Bibliography on Apartheid.
ERIC Educational Resources Information Center
Totten, Sam, ed.
1985-01-01
This annotated listing on apartheid in South Africa cites general resources, classroom materials, fiction, poetry, audio visuals, and organizations and associations. Also included are a glossary and a brief chronology of South Africa's apartheid system. (RM)
An Audio-Visual Approach to Training
ERIC Educational Resources Information Center
Hearnshaw, Trevor
1977-01-01
Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)
Meyerhoff, Hauke S; Huff, Markus
2016-04-01
Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... (telephone, audio-visual telecommunication, or personal attendance of those who are to participate in the... that conducting the deposition by audio-visual telecommunication: (i) Is necessary to prevent prejudice... determines that a deposition conducted by audio-visual telecommunication would measurably increase the United...
9 CFR 202.112 - Rule 12: Oral hearing.
Code of Federal Regulations, 2010 CFR
2010-01-01
... hearing shall be conducted by audio-visual telecommunication unless the presiding officer determines that... hearing by audio-visual telecommunication. If the presiding officer determines that a hearing conducted by audio-visual telecommunication would measurably increase the United States Department of Agriculture's...
9 CFR 202.112 - Rule 12: Oral hearing.
Code of Federal Regulations, 2011 CFR
2011-01-01
... hearing shall be conducted by audio-visual telecommunication unless the presiding officer determines that... hearing by audio-visual telecommunication. If the presiding officer determines that a hearing conducted by audio-visual telecommunication would measurably increase the United States Department of Agriculture's...
Influences of selective adaptation on perception of audiovisual speech
Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.
2016-01-01
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781
Audio-visual integration through the parallel visual pathways.
Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Tamás Kincses, Zsigmond
2015-10-22
Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to drive preferentially the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with the white matter integrity as measured by diffusion tensor imaging. The psychophysiological data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. Copyright © 2015. Published by Elsevier B.V.
Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech.
Alm, Magnus; Behne, Dawn
2013-10-01
Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.
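A point of subjective simultaneity of the kind reported above is typically estimated by fitting a symmetric function to the proportion of "simultaneous" responses across asynchronies and taking its peak. The sketch below fits a Gaussian with SciPy; the response data are invented, and the study's actual fitting procedure may differ.

```python
# Sketch: estimating the point of subjective simultaneity (PSS).
import numpy as np
from scipy.optimize import curve_fit

# Stimulus onset asynchronies in ms (negative = audio-lead) and the
# invented proportion of "simultaneous" responses at each asynchrony.
soa = np.array([-440, -330, -220, -110, 0, 110, 220, 330, 440])
p_simultaneous = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.90, 0.65, 0.30, 0.10])

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gaussian, soa, p_simultaneous, p0=[1.0, 0.0, 150.0])
print(f"PSS = {mu:.0f} ms (positive = visual-lead)")
```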
Code of Federal Regulations, 2012 CFR
2012-01-01
... hearing to be conducted by telephone or audio-visual telecommunication; (10) Require each party to provide... prior to any deposition to be conducted by telephone or audio-visual telecommunication; (11) Require that any hearing to be conducted by telephone or audio-visual telecommunication be conducted at...
9 CFR 202.110 - Rule 10: Prehearing conference.
Code of Federal Regulations, 2013 CFR
2013-01-01
... conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice to a party; (ii) Is... presiding officer determines that a prehearing conference conducted by audio-visual telecommunication would... conducted by audio-visual telecommunication unless the presiding officer determines that conducting the...
9 CFR 202.110 - Rule 10: Prehearing conference.
Code of Federal Regulations, 2010 CFR
2010-01-01
... conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice to a party; (ii) Is... presiding officer determines that a prehearing conference conducted by audio-visual telecommunication would... conducted by audio-visual telecommunication unless the presiding officer determines that conducting the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... hearing to be conducted by telephone or audio-visual telecommunication; (10) Require each party to provide... prior to any deposition to be conducted by telephone or audio-visual telecommunication; (11) Require that any hearing to be conducted by telephone or audio-visual telecommunication be conducted at...
From "Piracy" to Payment: Audio-Visual Copyright and Teaching Practice.
ERIC Educational Resources Information Center
Anderson, Peter
1993-01-01
The changing circumstances in Australia governing the use of broadcast television and radio material in education are examined, from the uncertainty of the early 1980s to current management of copyrighted audiovisual material under the statutory licensing agreement between universities and an audiovisual copyright agency. (MSE)
Research Resources for the Study of African-American and Jewish Relations.
ERIC Educational Resources Information Center
Gubert, Betty Kaplan
1994-01-01
Discusses New York City library resources for the study of African American and Jewish American relations. Highlights include library collections, access to materials, audio and visual materials, international newspapers, clippings, archives, children's books, and acquisitions. A list of the major libraries for the study of African American and…
Hotel and Restaurant Management; A Bibliography of Books and Audio-Visual Materials.
ERIC Educational Resources Information Center
Malkames, James P.; And Others
This bibliography represents a collection of 1,300 book volumes and audiovisual materials collected by the Luzerne County Community College Library in support of the college's Hotel and Restaurant Management curriculum. It covers such diverse topics as advertising, business practices, decoration, nutrition, hotel law, insurance, landscaping, health…
Kumar, Deepesh; Verma, Sunny; Bhattacharya, Sutapa; Lahiri, Uttama
2016-06-13
Neurological disorders often manifest themselves in the form of movement deficits on the part of the patient. The conventional rehabilitation exercises used to address these deficits, though powerful, are often monotonous in nature. Adequate audio-visual stimulation can prove to be motivational. In the research presented here we indicate the applicability of audio-visual stimulation to rehabilitation exercises to address at least some of the movement deficits of the upper and lower limbs. In addition to the audio-visual stimulation, we also use Functional Electrical Stimulation (FES). We further show the applicability of FES, in conjunction with audio-visual stimulation delivered through a VR-based platform, to the grasping skills of patients with movement disorders.
News video story segmentation method using fusion of audio-visual features
NASA Astrophysics Data System (ADS)
Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang
2007-11-01
News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with it to refine the results. First, it selects silence clips as audio-feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual-feature candidate points. Then it takes the audio-feature candidates as cues and develops a fusion method that effectively uses the diverse types of visual candidates to refine the audio candidates into story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.
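A minimal sketch of that fusion idea follows, under the assumption that all candidate points are timestamps and that an audio candidate becomes a story boundary only when some visual cue falls within a small tolerance; the function and threshold are illustrative, not the paper's exact algorithm.

```python
# Sketch: refine audio (silence) boundary candidates with visual cues.
def fuse_boundaries(silence_times, shot_times, anchor_times, tol=1.0):
    """Keep audio candidates (seconds) that a visual cue supports within tol."""
    visual_cues = sorted(shot_times + anchor_times)
    return [t for t in silence_times
            if any(abs(t - v) <= tol for v in visual_cues)]

print(fuse_boundaries([12.3, 47.8, 90.1], [12.0, 60.5], [90.4]))
# -> [12.3, 90.1]; 47.8 is dropped for lack of visual support
```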
ERIC Educational Resources Information Center
Haberbosch, John F.; And Others
Readings and audiovisual materials, selected especially for educators, related to the study of Afro-American, Hispano-American, and American Indian cultures are included in this 366-item annotated bibliography covering the period from 1861 to 1968. Historical, cultural, and biographical materials are included for each of the three cultures as well…
ERIC Educational Resources Information Center
Levendowski, Jerry C.
The bibliography contains a list of 90 names and addresses of sources of audiovisual instructional materials. For each title a brief description of content, the source, purchase price, rental fee or free use for 16MM films, sound-slidefilms, tapes-records, and transparencies is given. Materials are listed separately by topics: (1) advertising and…
7 CFR 47.15 - Oral hearing before the examiner.
Code of Federal Regulations, 2010 CFR
2010-01-01
... whether the hearing will be conducted by telephone, audio-visual telecommunication, or personal attendance... audio-visual telecommunication. Any motion that the hearing be conducted by telephone or personal... conducted other than by audio-visual telecommunication. (ii) Within 10 days after the examiner issues a...
7 CFR 47.15 - Oral hearing before the examiner.
Code of Federal Regulations, 2011 CFR
2011-01-01
... whether the hearing will be conducted by telephone, audio-visual telecommunication, or personal attendance... audio-visual telecommunication. Any motion that the hearing be conducted by telephone or personal... conducted other than by audio-visual telecommunication. (ii) Within 10 days after the examiner issues a...
ERIC Educational Resources Information Center
Kies, Cosette
1975-01-01
A discussion of the way marketing expertise has employed sophisticated psychological methods in packaging a variety of products, including items stocked by libraries and media centers: books, records, periodicals, and audio-visual materials. (Author)
ERIC Educational Resources Information Center
Williams, Ora
In this bibliography of works by and about black women, books, stories, essays, poems, visual artistic works, musical compositions, and audio-visual materials are listed. Reference works and guides to collections are also included. A chronology of some significant dates in the history of American black women and selected individual bibliographies…
Crossmodal association of auditory and visual material properties in infants.
Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K
2018-06-18
The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrated for the first time a mapping between an auditory material property and visual material ("Metal" and "Wood") in the right temporal region in preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquired the audio-visual mapping for the "Metal" material later than for the "Wood" material, since infants form the visual property of the "Metal" material only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that a material's familiarity might facilitate the development of multisensory processing during the first year of life.
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library. PMID:26656189
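A minimal feature-extraction sketch following the library's documented usage is shown below; module names have shifted between releases (older versions expose audioFeatureExtraction rather than ShortTermFeatures), so adjust to the installed version, and supply any mono WAV file in place of "sample.wav".

```python
# Sketch: short-term feature extraction with pyAudioAnalysis.
from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

sampling_rate, signal = audioBasicIO.read_audio_file("sample.wav")

# 50 ms windows with a 25 ms step; returns a feature matrix (rows are
# features such as zero-crossing rate, energy, MFCCs) plus feature names.
features, feature_names = ShortTermFeatures.feature_extraction(
    signal, sampling_rate, 0.050 * sampling_rate, 0.025 * sampling_rate)

print(features.shape, feature_names[:3])
```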
42 CFR 93.506 - Authority of the Administrative Law Judge.
Code of Federal Regulations, 2010 CFR
2010-10-01
... material fact; (16) Conduct any conference or oral argument in person, by telephone, or by audio-visual communication; (17) Take action against any party for failing to follow an order or procedure or for disruptive...
Using Video Materials in English for Technical Sciences: A Case Study
ERIC Educational Resources Information Center
Milosevic, Danica
2017-01-01
In the digital era, university instructors working in English for Technical Sciences (ETS) have opportunities, some might say obligations, to use audio-visual resources to motivate students. Such materials also call on cognitive and constructivist mechanisms thought to improve uptake of the target language (Tarnopolsky, 2012). This chapter reports…
Red, White and Black (and Brown and Yellow): Minorities in America. A Bibliography.
ERIC Educational Resources Information Center
Combined Book Exhibit, Inc., Briarcliff Manor, NY.
This selection of nearly 600 paperback books, art reproductions, films, filmstrips, and records is intended for classroom, reference, and general reading purposes. The audio-visual materials complement the books. The materials included cover the following areas: art and music; African history, government, and culture; Afro-American history and…
Language Practice with Multimedia Supported Web-Based Grammar Revision Material
ERIC Educational Resources Information Center
Baturay, Meltem Huri; Daloglu, Aysegul; Yildirim, Soner
2010-01-01
The aim of this study was to investigate the perceptions of elementary-level English language learners towards web-based, multimedia-annotated grammar learning. WEBGRAM, a system designed to provide supplementary web-based grammar revision material, uses audio-visual aids to enrich the contextual presentation of grammar and allows learners to…
Effects of Audio-Visual Information on the Intelligibility of Alaryngeal Speech
ERIC Educational Resources Information Center
Evitts, Paul M.; Portugal, Lindsay; Van Dine, Ami; Holler, Aline
2010-01-01
Background: There is minimal research on the contribution of visual information on speech intelligibility for individuals with a laryngectomy (IWL). Aims: The purpose of this project was to determine the effects of mode of presentation (audio-only, audio-visual) on alaryngeal speech intelligibility. Method: Twenty-three naive listeners were…
Code of Federal Regulations, 2012 CFR
2012-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
Code of Federal Regulations, 2013 CFR
2013-01-01
... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...
Summarizing Audiovisual Contents of a Video Program
NASA Astrophysics Data System (ADS)
Gong, Yihong
2003-12-01
In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these requirements. With the proposed system, we strive to produce a video summary that (1) provides a natural visual and audio content overview, and (2) maximizes the coverage of both the audio and visual contents of the original video without having to sacrifice either of them.
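A hedged sketch of one way such a bipartite alignment can be posed: spoken sentences and face segments form the two vertex sets, edge costs are temporal offsets, and the Hungarian algorithm finds the minimum-cost matching. The cost function here is a stand-in; the paper's actual costs and alignment constraints are not reproduced.

```python
# Sketch: bipartite audio-visual alignment via minimum-cost matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

sentence_starts = np.array([2.0, 15.5, 40.0])             # audio summary (s)
face_segment_starts = np.array([1.5, 14.0, 38.5, 70.0])   # visual summary (s)

# Edge cost = absolute temporal offset between sentence and face segment.
cost = np.abs(sentence_starts[:, None] - face_segment_starts[None, :])
rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
for r, c in zip(rows, cols):
    print(f"sentence @{sentence_starts[r]}s -> face segment @{face_segment_starts[c]}s")
```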
Audio-visual affective expression recognition
NASA Astrophysics Data System (ADS)
Huang, Thomas S.; Zeng, Zhihong
2007-11-01
Automatic affective expression recognition has attracted more and more attention of researchers from different disciplines, which will significantly contribute to a new paradigm for human computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance the research in the affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables human to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
Assessment of rural soundscapes with high-speed train noise.
Lee, Pyoung Jik; Hong, Joo Young; Jeon, Jin Yong
2014-06-01
In the present study, rural soundscapes with high-speed train noise were assessed through laboratory experiments. A total of ten sites with varying landscape metrics were chosen for audio-visual recording. The acoustical characteristics of the high-speed train noise were analyzed using various noise level indices. Landscape metrics such as the percentage of natural features (NF) and Shannon's diversity index (SHDI) were adopted to evaluate the landscape features of the ten sites. Laboratory experiments were then performed with 20 well-trained listeners to investigate the perception of high-speed train noise in rural areas. The experiments consisted of three parts: 1) visual-only condition, 2) audio-only condition, and 3) combined audio-visual condition. The results showed that subjects' preference for visual images was significantly related to NF, the number of land types, and the A-weighted equivalent sound pressure level (LAeq). In addition, the visual images significantly influenced the noise annoyance, and LAeq and NF were the dominant factors affecting the annoyance from high-speed train noise in the combined audio-visual condition. In addition, Zwicker's loudness (N) was highly correlated with the annoyance from high-speed train noise in both the audio-only and audio-visual conditions. © 2013.
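For readers unfamiliar with the LAeq index used above: it is the energy average of A-weighted levels over the measurement period, not their arithmetic mean, so brief loud events such as a train pass dominate it. A short sketch with invented level samples:

```python
# Sketch: A-weighted equivalent continuous level from short-term LA values.
import numpy as np

def laeq(levels_db):
    """Energy-average equally spaced short-term A-weighted levels (dB)."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

print(laeq([55, 55, 55, 85]))  # ~79.0 dB: one brief train pass dominates
```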
Integrating Clinical Neuropsychology into the Undergraduate Curriculum.
ERIC Educational Resources Information Center
Puente, Antonio E.; And Others
1991-01-01
Claims little information exists in undergraduate education about clinical neuropsychology. Outlines an undergraduate neuropsychology course and proposes ways to integrate the subject into existing undergraduate psychology courses. Suggests developing specialized audio-visual materials for telecourses or existing courses. (NL)
Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy
Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun
2014-01-01
The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651
Research into Teleconferencing
1981-02-01
Wichman (1970) found more cooperation under conditions of audio-visual communication than conditions of audio communication alone. Laplante (1971) found… was found for audio teleconferences. These results, taken with the results concerning group performance, seem to indicate that visual communication gives…
The Practical Audio-Visual Handbook for Teachers.
ERIC Educational Resources Information Center
Scuorzo, Herbert E.
The use of audio/visual media as an aid to instruction is a common practice in today's classroom. Most teachers, however, have little or no formal training in this field and rarely a knowledgeable coordinator to help them. "The Practical Audio-Visual Handbook for Teachers" discusses the types and mechanics of many of these media forms and proposes…
ERIC Educational Resources Information Center
Aleman-Centeno, Josefina R.
1983-01-01
Discusses the development and evaluation of CAVIS, which consists of an Apple microcomputer used with audiovisual dialogs. Includes research on the effects of three conditions on short-term and long-term recall: (1) computer with audio and visual, (2) computer with audio alone, and (3) audio alone. (EKN)
Audio-visual temporal perception in children with restored hearing.
Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David
2017-05-01
It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar audio and visual thresholds and some evidence of gain in the audio-visual temporal multisensory condition. Interestingly, we found a strong correlation between the auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.
Exclusively visual analysis of classroom group interactions
NASA Astrophysics Data System (ADS)
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-12-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only, without audio, as when using both visual and audio data to code. Also, interrater reliability is high when comparing the use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used, compared with visual-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.
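Interrater reliability of the kind compared above is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with invented codes follows; the study's own coding scheme and reliability statistic may differ.

```python
# Sketch: chance-corrected agreement between two coders of the same clips.
from sklearn.metrics import cohen_kappa_score

codes_audio_visual = ["discussion", "lecture", "discussion", "silent", "discussion"]
codes_visual_only  = ["discussion", "lecture", "lecture",    "silent", "discussion"]

print(cohen_kappa_score(codes_audio_visual, codes_visual_only))
```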
van Hoesel, Richard J M
2015-04-01
One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only, and was spoken by the same talker from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode and 3 dB for audition alone. Comparison of bilateral performance for the audio-visual and audition-alone configurations showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations, and it indicates greater everyday speech benefits and a more favorable cost-benefit ratio than estimated to date.
ERIC Educational Resources Information Center
ROBINAULT, ISABEL P.
This publication lists 127 films and filmstrips related to the diagnosis and habilitation of cerebral palsied persons of varying ages, needs, and circumstances. The titles are listed alphabetically in sections: basic sciences and basic information, activities of daily living, medical aspects and therapeutic management, evaluation and…
[Ventriloquism and audio-visual integration of voice and face].
Yokosawa, Kazuhiko; Kanaya, Shoko
2012-07-01
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.
Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.
Kanaya, Shoko; Yokosawa, Kazuhiko
2011-02-01
Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.
Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults
Smayda, Kirsten E.; Van Engen, Kristin J.; Maddox, W. Todd; Chandrasekaran, Bharath
2016-01-01
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18–35) and thirty-three older adults (ages 60–90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener. PMID:27031343
Gautam, Anjali; Bhambal, Ajay; Moghe, Swapnil
2018-01-01
Children with special needs face unique challenges in day-to-day practice and depend on those close to them for everything. To improve oral hygiene in visually impaired children, dedicated training and education are required. Braille is an important language for reading and writing for the visually impaired; it helps them understand and visualize the world via touch. Audio aids are used to impart health education to the visually impaired, and tactile models help them perceive things they cannot visualize, making them an important learning tool. This study aimed to assess the improvement in oral hygiene produced by audio aids, Braille, and tactile models in visually impaired children aged 6-16 years in Bhopal city. This was a prospective study. Sixty visually impaired children aged 6-16 years were selected and randomly divided into three groups (20 children each). Group A: audio aids + Braille; Group B: audio aids + tactile models; Group C: audio aids + Braille + tactile models. Instructions were given for maintaining good oral hygiene, and brushing techniques were explained to all children. After 3 months, oral hygiene status was recorded and compared using plaque and gingival indices. The ANOVA test was used. The study showed a statistically significant decrease in the mean plaque and gingival scores at all time intervals in each group compared with baseline. The results indicate that the combination of audio aids, Braille, and tactile models is an effective way to provide oral health education and improve the oral health status of visually impaired children.
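A minimal sketch of the one-way ANOVA named above, comparing invented plaque-score reductions across the three instruction groups; the study's actual analysis also covered gingival scores and repeated time points.

```python
# Sketch: one-way ANOVA across the three oral-health instruction groups.
from scipy.stats import f_oneway

group_a = [0.8, 0.9, 0.7, 1.0]  # audio aids + Braille (invented reductions)
group_b = [0.9, 1.1, 1.0, 0.8]  # audio aids + tactile models
group_c = [1.2, 1.3, 1.1, 1.4]  # audio aids + Braille + tactile models

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```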
ERIC Educational Resources Information Center
Naturescope, 1987
1987-01-01
Contains a glossary of terms related to endangered species and lists reference books, children's books, audio-visual materials, software, and activity sources on the topics. Also identifies wildlife laws and explains what they mean. An index of issues of "Ranger Rick," which includes articles on endangered species, is included. (ML)
Code of Federal Regulations, 2014 CFR
2014-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...
Code of Federal Regulations, 2012 CFR
2012-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...
ERIC Educational Resources Information Center
Kim, Yong-Jin; Chang, Nam-Kee
2001-01-01
Investigates changes in neuronal response across four repetitions of audio-visual learning. Obtains EEG data from the prefrontal lobe (Fp1, Fp2) of 20 subjects at the 8th grade level. Concludes that habituation of the neuronal response shows up in repetitive audio-visual learning and that brain hemisphericity can be changed by…
ERIC Educational Resources Information Center
Narayan, Shankar
This discussion of the importance and scope of audiovisual aids in the educational programs and activities designed for children in developing countries includes the significance of audiovisual aids in pre-school and primary school education, types of audiovisual aids, learning from pictures, creative art materials, play materials, and problems…
A Self-Paced Physical Geology Laboratory.
ERIC Educational Resources Information Center
Watson, Donald W.
1983-01-01
Describes a self-paced geology course utilizing a diversity of instructional techniques, including maps, models, samples, audio-visual materials, and a locally developed laboratory manual. Mechanical features are laboratory exercises, followed by unit quizzes; quizzes are repeated until the desired level of competence is attained. (Author/JN)
Code of Federal Regulations, 2014 CFR
2014-04-01
... sought to further scholarly research. (h) Record means all books, papers, maps, photographs, machine... as the cost of space, heating, or lighting of the facility in which the records are stored. (d... copies can take the form of, among other things, paper copy, microfilm, audio-visual materials, or...
36 CFR § 902.82 - Fee schedule.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...
Code of Federal Regulations, 2013 CFR
2013-04-01
... sought to further scholarly research. (h) Record means all books, papers, maps, photographs, machine... as the cost of space, heating, or lighting of the facility in which the records are stored. (d... copies can take the form of, among other things, paper copy, microfilm, audio-visual materials, or...
Code of Federal Regulations, 2012 CFR
2012-04-01
... sought to further scholarly research. (h) Record means all books, papers, maps, photographs, machine... as the cost of space, heating, or lighting of the facility in which the records are stored. (d... copies can take the form of, among other things, paper copy, microfilm, audio-visual materials, or...
Code of Federal Regulations, 2011 CFR
2011-04-01
... sought to further scholarly research. (h) Record means all books, papers, maps, photographs, machine... as the cost of space, heating, or lighting of the facility in which the records are stored. (d... copies can take the form of, among other things, paper copy, microfilm, audio-visual materials, or...
Code of Federal Regulations, 2012 CFR
2012-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... FOIA request. Such copies can take the form of paper copy, microform, audio-visual materials, or... research. (n) Non-Commercial Scientific Institution refers to an institution that is not operated on a...
Code of Federal Regulations, 2014 CFR
2014-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... FOIA request. Such copies can take the form of paper copy, microform, audio-visual materials, or... research. (n) Non-Commercial Scientific Institution refers to an institution that is not operated on a...
Code of Federal Regulations, 2010 CFR
2010-04-01
... sought to further scholarly research. (h) Record means all books, papers, maps, photographs, machine... as the cost of space, heating, or lighting of the facility in which the records are stored. (d... copies can take the form of, among other things, paper copy, microfilm, audio-visual materials, or...
22 CFR 61.5 - Authentication procedures-Imports.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Authentication procedures-Imports. 61.5 Section 61.5 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.5 Authentication procedures—Imports. (a) Applicants seeking Department...
Population Education Accessions List, May-August 1999.
ERIC Educational Resources Information Center
United Nations Educational, Scientific and Cultural Organization, Bangkok (Thailand). Principal Regional Office for Asia and the Pacific.
This document comprises output from the Regional Clearinghouse on Population Education and Communication (RCPEC) computerized bibliographic database on reproductive and sexual health and geography. Entries are categorized into four parts: (1) "Population Education"; (2) "Knowledge-base Information"; (3) "Audio-Visual and IEC Materials"; and…
45 CFR 2104.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2012 CFR
2012-10-01
... FINE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE COMMISSION OF FINE ARTS § 2104.150 Program accessibility: Existing facilities. (a) General... of achieving program accessibility include— (i) Using audio-visual materials and devices to depict...
45 CFR 2104.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2010 CFR
2010-10-01
... FINE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE COMMISSION OF FINE ARTS § 2104.150 Program accessibility: Existing facilities. (a) General... of achieving program accessibility include— (i) Using audio-visual materials and devices to depict...
45 CFR 2104.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2011 CFR
2011-10-01
... FINE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE COMMISSION OF FINE ARTS § 2104.150 Program accessibility: Existing facilities. (a) General... of achieving program accessibility include— (i) Using audio-visual materials and devices to depict...
45 CFR 2104.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2013 CFR
2013-10-01
... FINE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE COMMISSION OF FINE ARTS § 2104.150 Program accessibility: Existing facilities. (a) General... of achieving program accessibility include— (i) Using audio-visual materials and devices to depict...
45 CFR 2104.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2014 CFR
2014-10-01
... FINE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE COMMISSION OF FINE ARTS § 2104.150 Program accessibility: Existing facilities. (a) General... of achieving program accessibility include— (i) Using audio-visual materials and devices to depict...
Advanced Texas Studies: Curriculum Guide.
ERIC Educational Resources Information Center
Harlandale Independent School District, San Antonio, TX. Career Education Center.
The guide is arranged in vertical columns relating curriculum concepts in Texas studies to curriculum performance objectives, career concepts and career performance objectives, suggested teaching methods, and audio-visual and resource materials. Career information is included on 24 related occupations. Space is provided for teachers' notes which…
ERIC Educational Resources Information Center
Lawrence Unified School District 497, KS.
The elementary level career education instructional materials are arranged by grade level. Separate sections are devoted to each level and include an overview of the curriculum with objectives, activities, and resources (speakers, on-site visits, audio visuals, books, and kits) for each subject area covered. Emphasizing career awareness, each…
Audio-visual presentation of information for informed consent for participation in clinical trials.
Synnot, Anneliese; Ryan, Rebecca; Prictor, Megan; Fetherstonhaugh, Deirdre; Parker, Barbara
2014-05-09
Informed consent is a critical component of clinical research. Different methods of presenting information to potential participants of clinical trials may improve the informed consent process. Audio-visual interventions (presented, for example, on the Internet or on DVD) are one such method. We updated a 2008 review of the effects of these interventions for informed consent for trial participation. To assess the effects of audio-visual information interventions regarding informed consent compared with standard information or placebo audio-visual interventions regarding informed consent for potential clinical trial participants, in terms of their understanding, satisfaction, willingness to participate, and anxiety or other psychological distress. We searched: the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library, issue 6, 2012; MEDLINE (OvidSP) (1946 to 13 June 2012); EMBASE (OvidSP) (1947 to 12 June 2012); PsycINFO (OvidSP) (1806 to June week 1 2012); CINAHL (EbscoHOST) (1981 to 27 June 2012); Current Contents (OvidSP) (1993 Week 27 to 2012 Week 26); and ERIC (Proquest) (searched 27 June 2012). We also searched reference lists of included studies and relevant review articles, and contacted study authors and experts. There were no language restrictions. We included randomised and quasi-randomised controlled trials comparing audio-visual information alone, or in conjunction with standard forms of information provision (such as written or verbal information), with standard forms of information provision or placebo audio-visual information, in the informed consent process for clinical trials. Trials involved individuals or their guardians asked to consider participating in a real or hypothetical clinical study. (The earlier version of this review included only studies evaluating informed consent interventions for real studies.) Two authors independently assessed studies for inclusion and extracted data. We synthesised the findings using meta-analysis, where possible, and narrative synthesis of results. We assessed the risk of bias of individual studies and considered the impact of the quality of the overall evidence on the strength of the results. We included 16 studies involving data from 1884 participants. Nine studies included participants considering real clinical trials, and eight included participants considering hypothetical clinical trials, with one including both. All studies were conducted in high-income countries. There is still much uncertainty about the effect of audio-visual informed consent interventions on a range of patient outcomes. However, when considered across comparisons, we found low to very low quality evidence that such interventions may slightly improve knowledge or understanding of the parent trial, but may make little or no difference to the rate of participation or willingness to participate. Audio-visual presentation of informed consent may improve participant satisfaction with the consent information provided. However, its effect on satisfaction with other aspects of the process is not clear. There is insufficient evidence to draw conclusions about anxiety arising from audio-visual informed consent. We found conflicting, very low quality evidence about whether audio-visual interventions took more or less time to administer.
No study measured researcher satisfaction with the informed consent process, nor ease of use. The evidence from real clinical trials was rated as low quality for most outcomes, and for hypothetical studies, very low. We note, however, that this was in large part due to poor study reporting, the hypothetical nature of some studies, and low participant numbers, rather than to inconsistent results between studies or confirmed poor trial quality. We do not believe that any studies were funded by organisations with a vested interest in the results. The value of audio-visual interventions as a tool for helping to enhance the informed consent process for people considering participating in clinical trials remains largely unclear, although trends are emerging with regard to improvements in knowledge and satisfaction. Many relevant outcomes have not been evaluated in randomised trials. Triallists should continue to explore innovative methods of providing information to potential trial participants during the informed consent process, mindful of the range of outcomes that the intervention should be designed to achieve, and balancing the resource implications of intervention development and delivery against the purported benefits of any intervention. More trials, adhering to CONSORT standards, and conducted in settings and populations underserved in this review (i.e., low- and middle-income countries and people with low literacy), would strengthen the results of this review and broaden its applicability. Assessing process measures, such as time taken to administer the intervention and researcher satisfaction, would inform the implementation of audio-visual consent materials.
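The review's synthesis step pools effect estimates across trials. As a purely illustrative sketch of how such pooling works, the snippet below computes an inverse-variance fixed-effect summary of log risk ratios; the trial counts are invented and this is not the review's actual data or code.

```python
# Hedged sketch of inverse-variance fixed-effect pooling of log risk
# ratios, the kind of synthesis a meta-analysis performs. The event
# counts below are invented for illustration only.
import math

# (events_treatment, n_treatment, events_control, n_control) per trial
trials = [(40, 100, 30, 100), (25, 80, 20, 80), (60, 150, 45, 150)]

weights, log_rrs = [], []
for et, nt, ec, nc in trials:
    log_rr = math.log((et / nt) / (ec / nc))   # log risk ratio
    var = 1/et - 1/nt + 1/ec - 1/nc            # delta-method variance
    weights.append(1 / var)                    # inverse-variance weight
    log_rrs.append(log_rr)

pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled RR = {math.exp(pooled):.2f}, 95% CI "
      f"[{math.exp(pooled - 1.96*se):.2f}, {math.exp(pooled + 1.96*se):.2f}]")
```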
Wilbiks, Jonathan M. P.; Dyson, Benjamin J.
2016-01-01
Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than a fast rate of presentation and when the task is of intermediate difficulty, such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790
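Capacity claims of this kind are typically derived from change-detection accuracy. As a hedged illustration (the authors' exact estimator is not given in the abstract), the sketch below computes Cowan's K, a standard capacity estimate, from hypothetical hit and false-alarm counts.

```python
# Cowan's K capacity estimate from a change-detection task:
# K = set_size * (hit_rate - false_alarm_rate).
# Whether this matches the authors' estimator is an assumption;
# the response counts are invented.
def cowan_k(set_size: int, hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# e.g. 4 audio-visual pairings per display, 70 hits / 30 misses,
# 20 false alarms / 80 correct rejections
print(cowan_k(4, 70, 30, 20, 80))  # -> 2.0 items
```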
Houtenbos, M; de Winter, J C F; Hale, A R; Wieringa, P A; Hagenzieker, M P
2017-04-01
A large proportion of road traffic crashes occur at intersections because drivers lack the necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with the audio-visual display off), each session consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible. Copyright © 2016. Published by Elsevier Ltd.
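The display logic described, beep rate scaling with the approaching car's speed and lateralized to the side of approach, can be sketched as below; the scaling constants are assumptions for illustration, not the study's parameters.

```python
# Hedged sketch of the display logic described above: beep rate scales
# with the approaching car's speed and is routed to the ear matching
# its direction. The constants are assumed, not the study's values.
def beep_schedule(speed_kmh: float, approach_side: str,
                  base_rate_hz: float = 1.0, rate_per_kmh: float = 0.05):
    """Return (beep_rate_hz, ear) for one approaching vehicle."""
    rate = base_rate_hz + rate_per_kmh * max(speed_kmh, 0.0)
    ear = "left" if approach_side == "left" else "right"
    return rate, ear

print(beep_schedule(50, "left"))   # -> (3.5, 'left')
```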
Audio aided electro-tactile perception training for finger posture biofeedback.
Vargas, Jose Gonzalez; Yu, Wenwei
2008-01-01
Visual information is a prerequisite for most biofeedback studies. The aim of this study was to explore how audio-aided training helps in the learning of dynamic electro-tactile perception without any visual feedback. In this research, electrical stimulation patterns associated with the experimenter's finger postures and motions were presented to the subjects. Along with the electrical stimulation patterns, two different types of information about finger postures and motions, verbal and audio, were presented to the verbal training subject group (group 1) and the audio training subject group (group 2), respectively. The results showed an improvement in the ability to distinguish and memorize electrical stimulation patterns corresponding to finger postures and motions without visual feedback; with the aid of audio tones, learning was faster and perception became more precise after training. Thus, this study clarified that, as a substitute for visual presentation, auditory information can effectively aid the formation of electro-tactile perception. Further research is needed to clarify the difference between visually guided and audio-aided training in terms of information compilation, post-training effects, and robustness of the perception.
Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration
Ikumi, Nara; Soto-Faraco, Salvador
2014-01-01
Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase in which two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore attention to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention on audio-then-flash or on flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this endogenous attention effect was only present toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes. PMID:25004132
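A shift in the point of subjective simultaneity (PSS) such as the one reported above is commonly estimated by fitting a psychometric function to simultaneity judgments across audio-visual asynchronies. The sketch below fits a Gaussian with SciPy; the response proportions are invented and the authors' actual fitting procedure may differ.

```python
# Estimate the PSS by fitting a Gaussian to the proportion of
# "simultaneous" responses across audio-visual SOAs; the fitted centre
# is the PSS. Data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, pss, width, peak):
    return peak * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

soas = np.array([-300, -200, -100, 0, 100, 200, 300])   # ms; audio-lead < 0
p_simult = np.array([0.10, 0.30, 0.70, 0.90, 0.80, 0.40, 0.15])

(pss, width, peak), _ = curve_fit(gaussian, soas, p_simult,
                                  p0=(0.0, 100.0, 1.0))
print(f"PSS = {pss:.1f} ms (positive = visual-lead side)")
```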
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
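N1 latency and amplitude measures like those compared above are typically read off the trial-averaged ERP within a post-stimulus search window. A minimal sketch with simulated stand-in data and an assumed 80-150 ms window:

```python
# Extract N1 amplitude and latency from an averaged ERP: find the most
# negative point in a post-stimulus search window. Sampling rate,
# window, and data are assumptions for illustration.
import numpy as np

fs = 500                                   # Hz
t = np.arange(-0.1, 0.5, 1 / fs)           # epoch time axis (s)
rng = np.random.default_rng(0)
epochs = rng.normal(0, 2.0, (60, t.size))  # 60 noise trials (stand-in data)
epochs += -4.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))  # injected N1

erp = epochs.mean(axis=0)                  # average across trials
win = (t >= 0.08) & (t <= 0.15)            # N1 search window, 80-150 ms
idx = np.flatnonzero(win)[np.argmin(erp[win])]
print(f"N1: {erp[idx]:.2f} uV at {t[idx]*1000:.0f} ms")
```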
Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi
2015-11-01
Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and the effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right-hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec, which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was increased only in response to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left-hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.
A New Program to Teach Nuclear and Radiochemistry to Undergraduates.
ERIC Educational Resources Information Center
Catchen, Gary L.; Canelos, James
1988-01-01
Follows the development of a course in nuclear and radiochemistry at Penn State. Lists specific nuclear science topics covered in the undergraduate level course. Describes audio-visual materials that have been developed for the course and includes a survey of students taking the course. (ML)
Code of Federal Regulations, 2012 CFR
2012-01-01
... included in direct costs are overhead expenses such as costs of space, and heating or lighting the facility... request. Such copies can take the form of paper, microform, audio-visual materials, or electronic records... institution of vocational education, that operates a program or programs of scholarly research. (i) The term...
Code of Federal Regulations, 2013 CFR
2013-01-01
... included in direct costs are overhead expenses such as costs of space, and heating or lighting the facility... FOIA request. Such copies can take the form of paper copy, microform, audio-visual materials, or... operates a program or programs of scholarly research. (i) The term non-commercial scientific institution...
Code of Federal Regulations, 2014 CFR
2014-01-01
... included in direct costs are overhead expenses such as costs of space, and heating or lighting the facility... FOIA request. Such copies can take the form of paper copy, microform, audio-visual materials, or... operates a program or programs of scholarly research. (i) The term non-commercial scientific institution...
Code of Federal Regulations, 2012 CFR
2012-01-01
... included in direct costs are overhead expenses such as costs of space, and heating or lighting the facility... FOIA request. Such copies can take the form of paper copy, microform, audio-visual materials, or... operates a program or programs of scholarly research. (i) The term non-commercial scientific institution...
Code of Federal Regulations, 2014 CFR
2014-01-01
... included in direct costs are overhead expenses such as costs of space, and heating or lighting the facility... request. Such copies can take the form of paper, microform, audio-visual materials, or electronic records... institution of vocational education, that operates a program or programs of scholarly research. (i) The term...
36 CFR § 1120.2 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... FOIA request. Such copies can take the form of paper copy, microform, audio-visual materials, or... research. (n) Non-Commercial Scientific Institution refers to an institution that is not operated on a...
Code of Federal Regulations, 2012 CFR
2012-01-01
... duplicating machinery. Not included in direct costs are overhead expenses such as costs of space, and heating... a FOIA request. Such copies can take the form of paper copy, microfilm, audio-visual materials, or... vocational education, which operates a program or programs of scholarly research. (7) The term non-commercial...
Code of Federal Regulations, 2013 CFR
2013-01-01
... included in direct costs are overhead expenses such as costs of space, and heating or lighting the facility... request. Such copies can take the form of paper, microform, audio-visual materials, or electronic records... institution of vocational education, that operates a program or programs of scholarly research. (i) The term...
Code of Federal Regulations, 2014 CFR
2014-01-01
... duplicating machinery. Not included in direct costs are overhead expenses such as costs of space, and heating... a FOIA request. Such copies can take the form of paper copy, microfilm, audio-visual materials, or... vocational education, which operates a program or programs of scholarly research. (7) The term non-commercial...
Code of Federal Regulations, 2013 CFR
2013-01-01
... duplicating machinery. Not included in direct costs are overhead expenses such as costs of space, and heating... a FOIA request. Such copies can take the form of paper copy, microfilm, audio-visual materials, or... vocational education, which operates a program or programs of scholarly research. (7) The term non-commercial...
22 CFR 61.4 - Certification procedures-Exports.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Certification procedures-Exports. 61.4 Section 61.4 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.4 Certification procedures—Exports. (a) Applicants seeking certification of...
45 CFR 1153.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2011 CFR
2011-10-01
... FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE NATIONAL ENDOWMENT FOR THE ARTS...), alternative methods of achieving program accessibility include— (i) Using audio-visual materials and devices...
45 CFR 1153.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2012 CFR
2012-10-01
... FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE NATIONAL ENDOWMENT FOR THE ARTS...), alternative methods of achieving program accessibility include— (i) Using audio-visual materials and devices...
45 CFR 1153.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2013 CFR
2013-10-01
... FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE NATIONAL ENDOWMENT FOR THE ARTS...), alternative methods of achieving program accessibility include— (i) Using audio-visual materials and devices...
45 CFR 1153.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2014 CFR
2014-10-01
... FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE NATIONAL ENDOWMENT FOR THE ARTS...), alternative methods of achieving program accessibility include— (i) Using audio-visual materials and devices...
45 CFR 1153.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2010 CFR
2010-10-01
... FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE ARTS ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE NATIONAL ENDOWMENT FOR THE ARTS...), alternative methods of achieving program accessibility include— (i) Using audio-visual materials and devices...
Semiotic Criteria for Evaluating Instructional HyperMedia.
ERIC Educational Resources Information Center
Tucker, Susan A.; Dempsey, John V.
This report describes hypermedia as a non-linear, interlinked representation of textual, graphic, visual, and audio material that enables students to connect large bodies of information while developing the analytical skills necessary to think critically about this information. It is noted that the use of microcomputers for hypermedia instruction…
22 CFR 61.4 - Certification procedures-Exports.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Certification procedures-Exports. 61.4 Section 61.4 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.4 Certification procedures—Exports. (a) Applicants seeking certification of...
AUDIO-VISUAL INSTRUCTION, AN ADMINISTRATIVE HANDBOOK.
ERIC Educational Resources Information Center
Missouri State Dept. of Education, Jefferson City.
This handbook was designed for use by school administrators in developing a total audiovisual (AV) program. Attention is given to the importance of audiovisual media to effective instruction, administrative personnel requirements for an AV program, budgeting for AV instruction, proper utilization of AV materials, selection of AV equipment and…
ERIC Educational Resources Information Center
Lustbader, Sara
1995-01-01
Describes a program for teaching about tropical rainforests in a concrete way using what's outside the door. This activity uses an eastern deciduous hardwood forest as an example. Step-by-step instructions include introductory activities, plus descriptions of stations in the forest to be visited. Resources include books, audio-visual materials,…
Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.
Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale
2015-10-01
Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched normal-hearing (NH) listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding, which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.
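Evidence for audio-visual integration in speeded response tasks is often tested against the race-model inequality (Miller, 1982), which bounds how fast redundant-target responses can be without integration. A sketch with invented reaction times; whether this is the exact test the authors used is an assumption.

```python
# Test the redundant-signals effect against Miller's race-model
# inequality: P(RT_av <= t) should not exceed P(RT_a <= t) + P(RT_v <= t)
# under a race account. RT samples are invented.
import numpy as np

def ecdf(samples, t):
    return np.mean(np.asarray(samples)[:, None] <= t, axis=0)

rng = np.random.default_rng(1)
rt_a = rng.normal(420, 50, 200)    # auditory-only RTs (ms)
rt_v = rng.normal(440, 50, 200)    # visual-only RTs (ms)
rt_av = rng.normal(380, 45, 200)   # audio-visual RTs (ms)

t = np.linspace(250, 500, 26)
violation = ecdf(rt_av, t) - np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
# Positive values indicate integration beyond what a race model allows.
print("max race-model violation:", violation.max().round(3))
```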
ERIC Educational Resources Information Center
Shockey, Carolyn, Ed.
A catalog of audio and visual materials for teaching courses on, or illustrating all aspects of, audiovisual instruction was developed, with broad coverage of the areas and interests pertinent to the field of instructional communications. The listings should be of value to the college instructor in the area of instructional materials, as well as…
Headphone and Head-Mounted Visual Displays for Virtual Environments
NASA Technical Reports Server (NTRS)
Begault, Duran R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.
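A toy example of the headphone spatialization such displays rely on: panning a mono signal by interaural time and level differences. Real virtual acoustic displays use measured head-related transfer functions; the constants here (Woodworth ITD model, constant-power panning) are simplifying assumptions.

```python
# Toy headphone spatialization using interaural time and level
# differences (ITD/ILD) only. Constants approximate a ~0.09 m head
# radius; this is a sketch, not a production HRTF renderer.
import numpy as np

def pan_itd_ild(mono, fs, azimuth_deg, head_radius=0.09, c=343.0):
    """Return (left, right) channels for a source at the given azimuth."""
    az = np.radians(azimuth_deg)                # +90 deg = full right
    itd = head_radius / c * (az + np.sin(az))   # Woodworth ITD model (s)
    shift = int(round(abs(itd) * fs))           # whole-sample delay
    gain_r = np.sqrt((1 + np.sin(az)) / 2)      # constant-power panning
    gain_l = np.sqrt((1 - np.sin(az)) / 2)
    delayed = np.concatenate([np.zeros(shift), mono])[:mono.size]
    near, far = mono, delayed                   # far ear gets the delay
    left, right = (far, near) if az > 0 else (near, far)
    return gain_l * left, gain_r * right

fs = 44100
tone = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)   # 1 s, 500 Hz
left, right = pan_itd_ild(tone, fs, azimuth_deg=45)
```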
Media/Device Configurations for Platoon Leader Tactical Training
1985-02-01
[Extract from device requirements tables] Inputs to the Platoon Leader: the device should simulate the real-time receipt of all tactical voice communication, audio and visual battlefield cues, and visual communication signals. Table 4 (continued), Functional Capability Categories: receipt of limited tactical voice communication, plus audio and visual battlefield cues, and visual communication signals (rated 0.8).
Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.
Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu
2018-05-01
Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, which contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.
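The speech-to-person association step can be conveyed, in greatly simplified form, as assigning each localized speech source to the nearest visually tracked person. The paper's actual method is a supervised audio-visual alignment plus semi-supervised clustering; the nearest-neighbour sketch below is only a stand-in for the idea.

```python
# Greatly simplified stand-in for the audio-visual association step:
# assign each localized speech source to the nearest visually tracked
# person. Identifiers and angles below are hypothetical.
def associate(speech_azimuths, person_tracks):
    """person_tracks: {person_id: azimuth_deg}; returns {source_idx: person_id}."""
    assignment = {}
    for i, src_az in enumerate(speech_azimuths):
        assignment[i] = min(person_tracks,
                            key=lambda pid: abs(person_tracks[pid] - src_az))
    return assignment

print(associate([-28.0, 12.5], {"p1": -30.0, "p2": 0.0, "p3": 15.0}))
# -> {0: 'p1', 1: 'p3'}
```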
Goebl, Werner
2015-01-01
Nonverbal auditory and visual communication helps ensemble musicians predict each other’s intentions and coordinate their actions. When structural characteristics of the music make predicting co-performers’ intentions difficult (e.g., following long pauses or during ritardandi), reliance on incoming auditory and visual signals may change. This study tested whether attention to visual cues during piano–piano and piano–violin duet performance increases in such situations. Pianists performed the secondo part to three duets, synchronizing with recordings of violinists or pianists playing the primo parts. Secondos’ access to incoming audio and visual signals and to their own auditory feedback was manipulated. Synchronization was most successful when primo audio was available, deteriorating when primo audio was removed and only cues from primo visual signals were available. Visual cues were used effectively following long pauses in the music, however, even in the absence of primo audio. Synchronization was unaffected by the removal of secondos’ own auditory feedback. Differences were observed in how successfully piano–piano and piano–violin duos synchronized, but these effects of instrument pairing were not consistent across pieces. Pianists’ success at synchronizing with violinists and other pianists is likely moderated by piece characteristics and individual differences in the clarity of cueing gestures used. PMID:26279610
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Definitions. 1401.3 Section 1401.3 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY PUBLIC AVAILABILITY OF INFORMATION § 1401.3 Definitions. For the... paper, microform, audio-visual materials, or machine-readable documentation. ONDCP will provide a copy...
22 CFR 61.9 - General information.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...
22 CFR 61.9 - General information.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...
22 CFR 61.9 - General information.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...
22 CFR 61.6 - Consultation with subject matter specialists.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Consultation with subject matter specialists. 61.6 Section 61.6 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.6 Consultation with subject matter specialists. (a) The...
22 CFR 61.6 - Consultation with subject matter specialists.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Consultation with subject matter specialists. 61.6 Section 61.6 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.6 Consultation with subject matter specialists. (a) The...
22 CFR 61.6 - Consultation with subject matter specialists.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Consultation with subject matter specialists. 61.6 Section 61.6 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.6 Consultation with subject matter specialists. (a) The...
ERIC Educational Resources Information Center
Lin, Huifen
2012-01-01
For the past few decades, instructional materials enriched with multimedia elements have enjoyed increasing popularity. Multimedia-based instruction incorporating stimulating visuals, authentic audios, and interactive animated graphs of different kinds all provide additional and valuable opportunities for students to learn beyond what conventional…
ERIC Educational Resources Information Center
Raman, Madhavi Gayathri; Vijaya
2016-01-01
This paper captures the design of a comprehensive curriculum incorporating the four skills based exclusively on the use of parallel audio-visual and written texts. We discuss the use of authentic materials to teach English to Indian undergraduates aged 18 to 20 years. Specifically, we talk about the use of parallel reading (screen-play) and…
Code of Federal Regulations, 2012 CFR
2012-04-01
... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... overhead expenses such as costs of space and heating or lighting of the facility in which the records are... of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g., magnetic...
Code of Federal Regulations, 2011 CFR
2011-07-01
... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... machinery. Not included in direct costs are overhead expenses such as costs of space and heating or lighting... request. Such copies can take the form of paper copy, microfilm, audio-visual materials, or machine...
Code of Federal Regulations, 2013 CFR
2013-07-01
... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... machinery. Not included in direct costs are overhead expenses such as costs of space and heating or lighting... request. Such copies can take the form of paper copy, microfilm, audio-visual materials, or machine...
Code of Federal Regulations, 2013 CFR
2013-04-01
... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... overhead expenses such as costs of space and heating or lighting of the facility in which the records are... of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g., magnetic...
Code of Federal Regulations, 2014 CFR
2014-04-01
... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... overhead expenses such as costs of space and heating or lighting of the facility in which the records are... of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g., magnetic...
Code of Federal Regulations, 2011 CFR
2011-04-01
... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... overhead expenses such as costs of space and heating or lighting of the facility in which the records are... of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g., magnetic...
Code of Federal Regulations, 2012 CFR
2012-07-01
... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... machinery. Not included in direct costs are overhead expenses such as costs of space and heating or lighting... request. Such copies can take the form of paper copy, microfilm, audio-visual materials, or machine...
32 CFR 1662.6 - Fee schedule; waiver of fees.
Code of Federal Regulations, 2012 CFR
2012-07-01
... as costs of space, and heating or lighting the facility in which the records are stored. (2) The term... copies may take the form of paper copy, microform, audio-visual materials, or machine readable... institution of vocational education, which operates a program or programs of scholarly research. (7) The term...
32 CFR 1662.6 - Fee schedule; waiver of fees.
Code of Federal Regulations, 2014 CFR
2014-07-01
... as costs of space, and heating or lighting the facility in which the records are stored. (2) The term... copies may take the form of paper copy, microform, audio-visual materials, or machine readable... institution of vocational education, which operates a program or programs of scholarly research. (7) The term...
Code of Federal Regulations, 2014 CFR
2014-07-01
... requesters, subject to the limitations of paragraph (c) of this section. For a paper photocopy of a record... machinery. Not included in direct costs are overhead expenses such as costs of space and heating or lighting... request. Such copies can take the form of paper copy, microfilm, audio-visual materials, or machine...
22 CFR 61.9 - General information.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...
Game-Based Evacuation Drill Using Augmented Reality and Head-Mounted Display
ERIC Educational Resources Information Center
Kawai, Junya; Mitsuhara, Hiroyuki; Shishibori, Masami
2016-01-01
Purpose: Evacuation drills should be more realistic and interactive. Focusing on situational and audio-visual realities and scenario-based interactivity, the authors have developed a game-based evacuation drill (GBED) system that presents augmented reality (AR) materials on tablet computers. The paper's current research purpose is to improve…
School Building Design and Audio-Visual Resources.
ERIC Educational Resources Information Center
National Committee for Audio-Visual Aids in Education, London (England).
The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…
22 CFR 61.9 - General information.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Purpose. 61.1 Section 61.1 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.1 Purpose. The Department of State administers the “Beirut Agreement of 1948”, a multinational treaty...
22 CFR 61.3 - Certification and authentication criteria.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Certification and authentication criteria. 61.3 Section 61.3 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.3 Certification and authentication criteria. (a) The Department shall...
22 CFR 61.6 - Consultation with subject matter specialists.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Consultation with subject matter specialists. 61.6 Section 61.6 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.6 Consultation with subject matter specialists. (a) The...
32 CFR 705.17 - Participation guidelines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... limited to those occasions which are: In keeping with the dignity of the Department of the Navy, in good.... Officers in command will screen all requests for use of material and personnel in Navy-sponsored social... government. (m) Some participation in or support of commercially sponsored programs on audio or visual media...
32 CFR 705.17 - Participation guidelines.
Code of Federal Regulations, 2014 CFR
2014-07-01
... limited to those occasions which are: In keeping with the dignity of the Department of the Navy, in good.... Officers in command will screen all requests for use of material and personnel in Navy-sponsored social... government. (m) Some participation in or support of commercially sponsored programs on audio or visual media...
32 CFR 705.17 - Participation guidelines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... limited to those occasions which are: In keeping with the dignity of the Department of the Navy, in good.... Officers in command will screen all requests for use of material and personnel in Navy-sponsored social... government. (m) Some participation in or support of commercially sponsored programs on audio or visual media...
32 CFR 705.17 - Participation guidelines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... limited to those occasions which are: In keeping with the dignity of the Department of the Navy, in good.... Officers in command will screen all requests for use of material and personnel in Navy-sponsored social... government. (m) Some participation in or support of commercially sponsored programs on audio or visual media...
32 CFR 705.17 - Participation guidelines.
Code of Federal Regulations, 2012 CFR
2012-07-01
... limited to those occasions which are: In keeping with the dignity of the Department of the Navy, in good.... Officers in command will screen all requests for use of material and personnel in Navy-sponsored social... government. (m) Some participation in or support of commercially sponsored programs on audio or visual media...
22 CFR 61.6 - Consultation with subject matter specialists.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Consultation with subject matter specialists. 61.6 Section 61.6 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.6 Consultation with subject matter specialists. (a) The...
ERIC Educational Resources Information Center
Tee, Lim Huck; Fong, Tang Wan
1973-01-01
Penang, Malaysia, is undergoing rapid industrialization to stimulate its economy. A survey was conducted to determine what technical, scientific, and commercial information sources were available. Areas covered in the survey were library facilities, journals, commercial reference works, and audio-visual materials. (DH)
Free-Loan Media for French: A Teacher's Guide.
ERIC Educational Resources Information Center
Veitz, M. Frances, Ed.
Designed to assist French instructors in introducing France to their students, this guidebook provides an annotated list of over 300 audio-visual materials and realia that are available on a free-loan basis. The guide lists films, videotapes, filmstrips, slide collections, pamphlets, factsheets, posters, records, tapes, and booklets available in…
A Portable Presentation Package for Audio-Visual Instruction. Technical Documentary Report.
ERIC Educational Resources Information Center
Smith, Edgar A.; And Others
The Portable Presentation Package is a prototype of an audiovisual equipment package designed to facilitate technical training in remote areas, situations in which written communications are difficult, or in situations requiring rapid presentation of instructional material. The major criteria employed in developing the package were (1) that the…
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.
Nava, Elena; Grassi, Massimo; Turati, Chiara
2016-01-01
Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.
7 CFR 1.141 - Procedure for hearing.
Code of Federal Regulations, 2010 CFR
2010-01-01
...-visual telecommunication, or personal attendance of any individual expected to participate in the hearing... rather than by audio-visual telecommunication. Any motion that the hearing be conducted by telephone or... be conducted other than by audio-visual telecommunication. (ii) Within 10 days after the Judge issues...
7 CFR 1.141 - Procedure for hearing.
Code of Federal Regulations, 2011 CFR
2011-01-01
...-visual telecommunication, or personal attendance of any individual expected to participate in the hearing... rather than by audio-visual telecommunication. Any motion that the hearing be conducted by telephone or... be conducted other than by audio-visual telecommunication. (ii) Within 10 days after the Judge issues...
Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection
Denison, Rachel N.; Driver, Jon; Ruff, Christian C.
2013-01-01
Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
Audio in Courseware: Design Knowledge Issues.
ERIC Educational Resources Information Center
Aarntzen, Diana
1993-01-01
Considers issues that need to be addressed when incorporating audio in courseware design. Topics discussed include functions of audio in courseware; the relationship between auditive and visual information; learner characteristics in relation to audio; events of instruction; and audio characteristics, including interactivity and speech technology.…
Behavioral Science Design for Audio-Visual Software Development
ERIC Educational Resources Information Center
Foster, Dennis L.
1974-01-01
A discussion of the basic structure of the behavioral audio-visual production which consists of objectives analysis, approach determination, technical production, fulfillment evaluation, program refinement, implementation, and follow-up. (Author)
The impact of modality and working memory capacity on achievement in a multimedia environment
NASA Astrophysics Data System (ADS)
Stromfors, Charlotte M.
This study explored the impact of modality and working memory capacity on student learning in a dual-modality multimedia environment titled Visualizing Topography. This computer-based instructional program focused on the basic skills of reading and interpreting topographic maps. Two versions of the program presented the same instructional content but varied the modality of verbal information: the audio-visual condition coordinated topographic maps with narration; the visual-visual condition provided the same topographic maps with readable text. An analysis of covariance (ANCOVA) was conducted to evaluate the effects of the two conditions in relation to working memory capacity, controlling for individual differences in spatial visualization and prior knowledge. Scores on the Figural Intersection Test were used to separate subjects into three levels of measured working memory capacity: low, medium, and high. Subjects accessed Visualizing Topography by way of the Internet and proceeded independently through the program. The program architecture was linear in format; subjects had a minimal amount of flexibility within each of five segments, but none between segments. One hundred and fifty-one subjects were randomly assigned to either the audio-visual or the visual-visual condition. The average time spent in the program was thirty-one minutes. The results of the ANCOVA revealed a small to moderate modality effect favoring the audio-visual condition. The results also showed that subjects with low and medium working memory capacity benefited more from the audio-visual condition than from the visual-visual condition, while subjects with a high working memory capacity did not benefit from either condition. Although splitting the data reduced group sizes, ANCOVA results by gender suggested that the audio-visual condition particularly favored females with low working memory capacity. The results have implications for designers of educational software, the teachers who select software, and the students themselves. Splitting information into two non-redundant sources, one audio and one visual, may effectively extend working memory capacity. This is especially significant for students encountering difficult science concepts that require the formation and manipulation of mental representations. It is recommended that multimedia environments be designed or selected with attention to modality conditions that facilitate student learning.
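For readers who want to see the shape of such an analysis, here is a minimal sketch (simulated data, not the study's; all variable names are hypothetical) of an ANCOVA-style linear model in Python, with condition and working-memory level as factors and spatial visualization and prior knowledge as covariates:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)
n = 151  # matches the study's sample size; everything else below is simulated
df = pd.DataFrame({
    "condition": rng.choice(["AV", "VV"], size=n),         # audio-visual vs visual-visual
    "wmc": rng.choice(["low", "medium", "high"], size=n),  # working memory capacity level
    "spatial": rng.normal(50, 10, size=n),                 # spatial visualization covariate
    "prior": rng.normal(10, 3, size=n),                    # prior knowledge covariate
})
df["score"] = (60 + 4 * (df["condition"] == "AV") + 0.3 * df["spatial"]
               + 0.5 * df["prior"] + rng.normal(0, 8, size=n))

model = smf.ols("score ~ C(condition) * C(wmc) + spatial + prior", data=df).fit()
print(anova_lm(model, typ=2))  # Type-II ANCOVA table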
Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko
2014-01-01
Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the functional magnetic resonance imaging (fMRI) analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas, more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal with articulatory speech gestures. PMID:24860526
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to an existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.
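As a rough illustration of the separation criterion described above (a sketch under stated assumptions, not the authors' implementation; the associator, the data and the lambda weight are all stand-ins), one can search for a de-mixing vector w that minimizes the MSE between associator-predicted and observed lip parameters while rewarding non-Gaussianity of the separated signal:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n_mics, n_frames = 2, 2000
x = rng.standard_normal((n_mics, n_frames))   # stand-in microphone mixtures
lip = rng.standard_normal(n_frames)           # stand-in lip-parameter track

def associator(s):
    # Hypothetical audio-to-visual associator: maps the separated waveform
    # to a predicted lip-parameter track. In practice this would be a
    # trained neural network; here it is a fixed smoothing filter.
    return np.convolve(s, np.ones(5) / 5.0, mode="same")

def objective(w, lam=0.1):
    w = w / (np.linalg.norm(w) + 1e-12)       # fix the scale of the vector
    s = w @ x                                 # separated source candidate
    mse = np.mean((associator(s) - lip) ** 2) # audio-visual coherency term
    return mse - lam * abs(kurtosis(s))       # hybrid criterion: MSE minus kurtosis

res = minimize(objective, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
print("estimated de-mixing vector:", res.x / np.linalg.norm(res.x))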
ERIC Educational Resources Information Center
Wilson, E. C.
This catalog contains a listing of the audio-visual aids used in the Alabama State Module of the Appalachian Adult Basic Education Program. Aids listed include filmstrips utilized by the following organizations: Columbia, South Carolina State Department of Education; Raleigh, North Carolina State Department of Education; Alden Films of Brooklyn,…
Proper Use of Audio-Visual Aids: Essential for Educators.
ERIC Educational Resources Information Center
Dejardin, Conrad
1989-01-01
Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)
Performance enhancement for audio-visual speaker identification using dynamic facial muscle model.
Asadpour, Vahid; Towhidkhah, Farzad; Homayounpour, Mohammad Mehdi
2006-10-01
The science of human identification using physiological characteristics, or biometry, has been of great concern in security systems. However, robust multimodal identification systems based on audio-visual information have not yet been thoroughly investigated. The aim of this work is therefore to propose a model-based feature extraction method which employs the physiological characteristics of the facial muscles producing lip movements. This approach adopts intrinsic muscle properties such as viscosity, elasticity, and mass, which are extracted from a dynamic lip model. These parameters are exclusively dependent on the neuro-muscular properties of the speaker; consequently, imitation of valid speakers could be reduced to a large extent. The parameters are applied to a hidden Markov model (HMM) audio-visual identification system. In this work, a combination of audio and video features is employed through a multistream pseudo-synchronized HMM training method. Noise-robust audio features such as Mel-frequency cepstral coefficients (MFCC), spectral subtraction (SS), and relative spectra perceptual linear prediction (J-RASTA-PLP) were used to evaluate the performance of the multimodal system once efficient audio feature extraction methods had been utilized. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits, along with a phonetically rich sentence. To evaluate the robustness of the algorithms, some experiments were performed on genetically identical twins. Furthermore, changes in speaker voice were simulated with drug inhalation tests. At 3 dB signal-to-noise ratio (SNR), the dynamic muscle model improved the identification rate of the audio-visual system from 91% to 98%. Results on identical twins revealed an apparent improvement in the performance of the dynamic muscle model-based system, whose audio-visual identification rate was enhanced from 87% to 96%.
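The system above uses multistream HMMs and model-based lip features; as a much-simplified stand-in that conveys the basic verification logic, a GMM log-likelihood-ratio test over concatenated audio-visual feature frames (synthetic data, hypothetical feature dimension) might look like:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
claimant_feats = rng.normal(0.0, 1.0, size=(500, 16))     # enrolment frames
background_feats = rng.normal(0.5, 1.2, size=(2000, 16))  # world/background frames
test_feats = rng.normal(0.0, 1.0, size=(200, 16))         # trial frames

speaker_gmm = GaussianMixture(n_components=8, covariance_type="diag",
                              random_state=0).fit(claimant_feats)
world_gmm = GaussianMixture(n_components=8, covariance_type="diag",
                            random_state=0).fit(background_feats)

# Average per-frame log-likelihood ratio; accept if above a tuned threshold.
llr = speaker_gmm.score(test_feats) - world_gmm.score(test_feats)
print("accept" if llr > 0.0 else "reject", f"(LLR = {llr:.2f})")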
Audio-visual synchrony and feature-selective attention co-amplify early visual processing.
Keitel, Christian; Müller, Matthias M
2016-05-01
Our brain relies on neural mechanisms of selective attention and converging sensory processing to cope efficiently with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to attend selectively to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
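Quantifying steady-state responses "in the spectral domain" typically amounts to reading amplitudes off the Fourier spectrum at the tagging frequencies. A minimal sketch with synthetic data (the 3.14 and 3.63 Hz pulse rates are taken from the abstract; the sampling rate and duration are invented):

import numpy as np

FS, DUR = 256, 100.0                # sampling rate (Hz) and duration (s), chosen
t = np.arange(int(FS * DUR)) / FS   # so the tagged frequencies fall on exact bins
eeg = (0.8 * np.sin(2 * np.pi * 3.14 * t)   # synthetic pulse-driven SSR components
       + 0.5 * np.sin(2 * np.pi * 3.63 * t)
       + np.random.default_rng(5).standard_normal(t.size))  # broadband noise

amps = np.abs(np.fft.rfft(eeg)) * 2.0 / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / FS)

for f_tag in (3.14, 3.63):
    k = int(np.argmin(np.abs(freqs - f_tag)))    # nearest FFT bin
    print(f"SSR amplitude at {f_tag} Hz: {amps[k]:.2f} (a.u.)")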
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content such as face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members of the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor saying a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
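To make the "temporal decomposition" concrete, here is a schematic Python snippet that serializes a shot decomposition as MPEG-7-flavoured XML. The element names follow common MPEG-7 usage, but the document is simplified and not schema-validated; the ids and times are invented:

import xml.etree.ElementTree as ET

video = ET.Element("VideoSegment", id="doc_001")
decomp = ET.SubElement(video, "TemporalDecomposition")
for i, (start, dur) in enumerate([("T00:00:00", "PT12S"), ("T00:00:12", "PT8S")]):
    shot = ET.SubElement(decomp, "VideoSegment", id=f"shot_{i}")
    time = ET.SubElement(shot, "MediaTime")
    ET.SubElement(time, "MediaTimePoint").text = start   # shot start time
    ET.SubElement(time, "MediaDuration").text = dur      # shot duration

print(ET.tostring(video, encoding="unicode"))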
Impact of Audio-Visual Asynchrony on Lip-Reading Effects -Neuromagnetic and Psychophysical Study-
Yahata, Izumi; Kanno, Akitake; Sakamoto, Shuichi; Takanashi, Yoshitaka; Takata, Shiho; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2016-01-01
The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on psychophysical responses in 11 participants. The latency and amplitude of the N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected at audio lags of -500 and +500 ms. However, some small effects were still preserved on average at audio lags of 500 ms, suggesting an asymmetry of the temporal window similar to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere broadly resembled that seen in psychophysical measurements on average, although individual responses varied somewhat. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception can be observed from the early auditory processing stage. PMID:28030631
A Method for Establishing a Depreciated Monetary Value for Print Collections.
ERIC Educational Resources Information Center
Marman, Edward
1995-01-01
Outlines a method for establishing a depreciated value of a library collection and includes an example of applying the formula for calculating depreciation. The method is based on the useful life of books, other print, and audio visual materials; their original cost; and on sampling subsets or sections of the collection. (JKP)
22 CFR 61.8 - Coordination with United States Customs Service.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Coordination with United States Customs Service. 61.8 Section 61.8 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.8 Coordination with United States Customs Service. (a) Nothing...
22 CFR 61.8 - Coordination with United States Customs Service.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Coordination with United States Customs Service. 61.8 Section 61.8 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.8 Coordination with United States Customs Service. (a) Nothing...
22 CFR 61.7 - Review and appeal procedures.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Review and appeal procedures. 61.7 Section 61.7 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.7 Review and appeal procedures. (a) An applicant may request a formal review of any adverse...
22 CFR 61.7 - Review and appeal procedures.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Review and appeal procedures. 61.7 Section 61.7 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.7 Review and appeal procedures. (a) An applicant may request a formal review of any adverse...
22 CFR 61.7 - Review and appeal procedures.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Review and appeal procedures. 61.7 Section 61.7 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.7 Review and appeal procedures. (a) An applicant may request a formal review of any adverse...
22 CFR 61.8 - Coordination with United States Customs Service.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Coordination with United States Customs Service. 61.8 Section 61.8 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.8 Coordination with United States Customs Service. (a) Nothing...
A Bibliography on the Black American.
ERIC Educational Resources Information Center
United States Air Forces in Europe, Wiesbaden (West Germany).
This bibliography provides a comprehensive listing of book and audio-visual materials of interest to, by, and about Black Americans. Annotations are given for a majority of the books and selections are marked if they are recommended for all libraries or for large libraries. Books are listed under subject headings including: Africa, art, Black…
Code of Federal Regulations, 2011 CFR
2011-07-01
... include providing adaptive hardware and software for computers, electronic visual aids, braille devices, talking calculators, magnifiers, audio recordings and braille or large-print materials. For persons with... vision or hearing impaired, e.g., by making an announcement available in braille, in large print, or on...
The Journal of Suggestive-Accelerative Learning and Teaching, Volume 3, Number 4, Winter 1978.
ERIC Educational Resources Information Center
Schuster, Donald H., Ed.
Contents of this issue are as follows: "Audio-Visual Material Development for Suggestopedic Classes" by Charles Loch (16 pages), "Suggestopedia Applied to Elementary Reading Instruction" by Allyn Prichard and Jean Taylor (5 pages), "Suggestology or Hypnosis--It's All in the Label" by Harry E. Stanton (5 pages),…
Life on the Tidal Mudflats: Elkhorn Slough.
ERIC Educational Resources Information Center
Andresen, Ruth
Life in an estuarine environment is studied in this set of audio-visual materials prepared for grades 6-12. A 71-frame colored filmstrip, cassette tape narration, and teacher's guide focus upon Elkhorn Slough, a tidal mudflat in the Monterey Bay area, California. Topics examined range from river drainage and the effects of pollution on living…
ERIC Educational Resources Information Center
Manitoba Dept. of Education, Winnipeg.
This annotated bibliography of audiovisual materials for grades five through twelve contains resources available from the Manitoba (Canada) Education Library. These films are recommended by the Manitoba Department of Education in support of the province's family life education curriculum. The topics covered include the maturation process,…
Video Streaming in Online Learning
ERIC Educational Resources Information Center
Hartsell, Taralynn; Yuen, Steve Chi-Yin
2006-01-01
The use of video in teaching and learning is a common practice in education today. As learning online becomes more of a common practice in education, streaming video and audio will play a bigger role in delivering course materials to online learners. This form of technology brings courses alive by allowing online learners to use their visual and…
22 CFR 61.7 - Review and appeal procedures.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Review and appeal procedures. 61.7 Section 61.7 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.7 Review and appeal procedures. (a) An applicant may request a formal review of any adverse...
22 CFR 61.8 - Coordination with United States Customs Service.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Coordination with United States Customs Service. 61.8 Section 61.8 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.8 Coordination with United States Customs Service. (a) Nothing...
22 CFR 61.8 - Coordination with United States Customs Service.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Coordination with United States Customs Service. 61.8 Section 61.8 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.8 Coordination with United States Customs Service. (a) Nothing...
Unit: Micro-Organisms and Man, Inspection Pack, National Trial Print.
ERIC Educational Resources Information Center
Australian Science Education Project, Toorak, Victoria.
This unit, intended for students in grades eight or nine, is a revised version of ED 053 990. The teacher's guide lists the aims of the unit, behavioral objectives, suitable references and audio-visual aids, required apparatus and materials, and provides teaching notes for each activity, including comments concerning microbiological techniques.…
ERIC Educational Resources Information Center
Goodman, Mark E.
The mobile audiovisual instructional laboratory has been an effective instrument in bringing audiovisual materials into the inner-city schools of Minneapolis, Minn. In 1969-70, a total of 535 classroom teachers in elementary, secondary, and parochial schools received individual instruction in the production and utilization of audiovisual…
Write Makes Might: A Case for the Neglected Skill.
ERIC Educational Resources Information Center
Duncan, Annelise M.
Of all the language skills, writing is the most difficult challenge for language teachers because students have less experience with written expression. Stimulated by audio-visual materials throughout their lives, students are novices in the discipline of writing. Making writing an ongoing part of foreign language acquisition from the first day in…
Unit: Water, Inspection Pack, National Trial Print.
ERIC Educational Resources Information Center
Australian Science Education Project, Toorak, Victoria.
The teachers' guide to this unit, prepared for use in grades seven or eight of Australian secondary schools, contains a list of unit objectives, teaching notes on each activity, lists of required apparatus, suggested teacher and student reference materials, and appropriate audio-visual aids. The core of the unit explores the importance of water…
A Unique Testing System for Audio Visual Foreign Language Laboratory.
ERIC Educational Resources Information Center
Stama, Spelios T.
1980-01-01
Described is the design of a low maintenance, foreign language laboratory at Ithaca College, New York, that provides visual and audio instruction, flexibility for testing, and greater student involvement in the lessons. (Author/CS)
NAVA: Tying In to the Information Machine
ERIC Educational Resources Information Center
McIntyre, Joe
1975-01-01
The article describes the types of memberships and the services (conventions, publications, and workshops) of the National Audio-Visual Association (NAVA), a dealer organization, emphasizing their availability and importance to manufacturers and users of audio-visual equipment. (MS)
European Union RACE program contributions to digital audiovisual communications and services
NASA Astrophysics Data System (ADS)
de Albuquerque, Augusto; van Noorden, Leon; Badique', Eric
1995-02-01
The European Union RACE (R&D in advanced communications technologies in Europe) and the subsequent ACTS (advanced communications technologies and services) programs have contributed, and continue to contribute, to world-wide developments in audio-visual services. The paper focuses on research progress in: (1) Image data compression. Several methods of image analysis leading to the use of encoders based on improved hybrid DCT-DPCM (MPEG or not), object-oriented, hybrid region/waveform or knowledge-based coding methods are discussed. (2) Program production, covering 3D imaging, data acquisition, virtual scene construction, pre-processing and sequence generation. (3) Interoperability and multimedia access systems. The diversity of material available and the introduction of interactive or near-interactive audio-visual services led to the development of prestandards for video-on-demand (VoD) and the interworking of multimedia services storage systems and customer premises equipment.
NASA Astrophysics Data System (ADS)
Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard
2006-05-01
A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing is accomplished on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of the audio and video modalities for audio-visual speaker verification is compared with face-only and speaker-only verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the PDAtabase developed within the scope of the SecurePhone project.
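The fusion step compared here is often implemented as a weighted sum of per-modality verification scores; a minimal sketch (the weight and threshold are illustrative values, not parameters from the paper):

def fused_decision(audio_score: float, video_score: float,
                   w_audio: float = 0.6, threshold: float = 0.0) -> bool:
    """Accept the identity claim if the weighted score sum clears the threshold."""
    return w_audio * audio_score + (1.0 - w_audio) * video_score > threshold

print(fused_decision(audio_score=1.3, video_score=-0.2))  # -> True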
Influence of audio triggered emotional attention on video perception
NASA Astrophysics Data System (ADS)
Torres, Freddy; Kalva, Hari
2014-02-01
Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when the video was presented with its audio track. The results reported are statistically significant, with p=0.024.
Audio-visual sensory deprivation degrades visuo-tactile peri-personal space.
Noel, Jean-Paul; Park, Hyeong-Dong; Pasqualini, Isabella; Lissek, Herve; Wallace, Mark; Blanke, Olaf; Serino, Andrea
2018-05-01
Self-perception is scaffolded upon the integration of multisensory cues on the body, the space surrounding the body (i.e., the peri-personal space; PPS), and from within the body. We asked whether reducing the information available from external space would change: PPS, interoceptive accuracy, and self-experience. Twenty participants were exposed to 15 min of audio-visual deprivation and performed: (i) a visuo-tactile interaction task measuring their PPS; (ii) a heartbeat perception task measuring interoceptive accuracy; and (iii) a series of questionnaires related to self-perception and mental illness. These tasks were carried out in two conditions: while exposed to a standard sensory environment and under a condition of audio-visual deprivation. The results suggest that while PPS becomes ill-defined after audio-visual deprivation, interoceptive accuracy is unaltered at the group level, with some participants improving and some worsening. Interestingly, correlational individual-differences analyses revealed that changes in PPS after audio-visual deprivation were related to interoceptive accuracy and to self-reports of "unusual experiences" on an individual-subject basis. Taken together, the findings argue for a relationship between the malleability of PPS, interoceptive accuracy, and an inclination toward the aberrant ideation often associated with mental illness. Copyright © 2018. Published by Elsevier Inc.
Audio-Visual Perception System for a Humanoid Robotic Head
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro
2014-01-01
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, the benefits of audio-visual attention mechanisms, compared with audio-only or visual-only approaches, have rarely been evaluated in real scenarios. Most tests have been conducted within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
Preliminary Plans. A Senior High School in the Bailey Hill Area, Eugene, Oregon.
ERIC Educational Resources Information Center
Lutes and Amundson, Architects and Community Planners, Springfield, OR.
The design of this high school is explained by outlining the decision making process used by the architects. The following design criteria form the basis of this process--(1) design for expansion, (2) design for team teaching, (3) organized by function, (4) space for teachers, (5) space for instructional materials, (6) audio-visual communication…
Fear of Falling and Older Adult Peer Production of Audio-Visual Discussion Material
ERIC Educational Resources Information Center
Bailey, Cathy; King, Karen; Dromey, Ben; Wynne, Ciaran
2010-01-01
A growing body of work suggests that negative stereotypes of, and associations between, falling, fear of falling, and ageing, may mean that older adults reject falls information and advice. Against a widely accepted backdrop of demographic ageing in Europe and that alleviating the impacts of falls and fear of falling are pressing health care…
School Librarians as Technology Leaders: An Evolution in Practice
ERIC Educational Resources Information Center
Wine, Lois D.
2016-01-01
The role of school librarians has a history of radical change. School librarians adapted to take on responsibility for technology and audio-visual materials that were introduced in schools in earlier eras. With the advent of the Information Age in the middle of the 20th century and the subsequent development of personal computers and the Internet,…
Garbage Pollution Has a Solution: The Sanitary Landfill.
ERIC Educational Resources Information Center
Andresen, Ruth
The principle ways in which communities solve the growing problems of solid waste disposal are studied in this set of audio-visual materials prepared for grades 6-12. A 58-frame colored filmstrip, cassette tape narration, and teacher's guide focus upon the Monterey Bay area of California. Topics examined range from types of disposal sites, the…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-23
... distributed alongside the materials and must provide transcripts for all applicable audio/visual works... fiscal year that the applicant operates under (e.g., July 1 through June 30); a program narrative in response to the statement of work, and a budget narrative explaining projected costs. The following forms...
Cluster: Metals. Course: Machine Shop. Research Project.
ERIC Educational Resources Information Center
Sanford - Lee County Schools, NC.
The set of 13 units is designed for use with an instructor in actual machine shop practice and is also keyed to audio visual and textual materials. Each unit contains a series of task packages which: specify prerequisites within the series (minimum is Unit 1); provide a narrative rationale for learning; list both general and specific objectives in…
English Department Midi Course Curriculum for Juniors and Seniors at Norton High School.
ERIC Educational Resources Information Center
Zwicker, Lucille; And Others
This curriculum guide presents syllabi for seventeen ten-week "midi-courses" for juniors and seniors in high school. For each course, the syllabi contain a course description, goals, subject matter, materials, an annotated list of audio-visual aids, a list of behavioral objectives, some suggested activities, a glossary of terms, and a selection of…
ERIC Educational Resources Information Center
Dain, Bernice, Comp.; Nevin, David, Comp.
The present revised and expanded edition of this document is an inclusive cumulation. A few items have been included which are on order as new to the collection or as replacements. This discography is intended to serve primarily as a local user's guide. The call number preceding each entry is based on the Audio-Visual Department's own, unique…
Audio-Visual Communications, A Tool for the Professional
ERIC Educational Resources Information Center
Journal of Environmental Health, 1976
1976-01-01
The manner in which the Cuyahoga County, Ohio Department of Environmental Health utilizes audio-visual presentations for communication with business and industry, professional public health agencies and the general public is presented. Subjects including food sanitation, radiation protection and safety are described. (BT)
7 CFR 47.14 - Prehearing conferences.
Code of Federal Regulations, 2010 CFR
2010-01-01
... determines that conducting the conference by audio-visual telecommunication: (i) Is necessary to prevent prejudice to a party; (ii) Is necessary because of a disability of any individual expected to participate in.... If the examiner determines that a conference conducted by audio-visual telecommunication would...
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we found, surprisingly, that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect correlated positively with a weighted combination of the spatial resolution obtained in the unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., the eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
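The "optimal combination" referred to here is usually formalized as maximum-likelihood cue integration; in standard notation (a textbook statement, not an equation quoted from the paper), the bimodal location estimate weights each cue by its relative reliability:

\hat{x}_{AV} = w_V \hat{x}_V + w_A \hat{x}_A, \qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2}, \qquad
w_A = 1 - w_V

so that as visual reliability (1/\sigma_V^2) drops in the periphery, w_V shrinks and the visual capture of sound (the ventriloquist effect) weakens, exactly the pattern the abstract reports.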
Effects of audio-visual stimulation on the incidence of restraint ulcers on the Wistar rat
NASA Technical Reports Server (NTRS)
Martin, M. S.; Martin, F.; Lambert, R.
1979-01-01
The role of sensory stimulation in restrained rats was investigated. Both mixed audio-visual and pure sound stimuli, ineffective in themselves, were found to cause a significant increase in the incidence of restraint ulcers in the Wistar rat.
Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André
2012-01-01
This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and a large number of false visual events in the background, a correct match was made 75% of the time during the experiment.
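As an illustration of what an ITD-based localization stage computes (a sketch under invented parameters: the microphone spacing, sample rate and far-field arcsine model are assumptions, not details from the paper):

import numpy as np

FS = 48_000          # sample rate (Hz)
MIC_DIST = 0.15      # microphone separation (m)
C = 343.0            # speed of sound (m/s)

def estimate_azimuth(left: np.ndarray, right: np.ndarray) -> float:
    """Return the source azimuth in degrees (0 = straight ahead)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)    # lag in samples
    itd = lag / FS                              # interaural time difference (s)
    s = np.clip(itd * C / MIC_DIST, -1.0, 1.0)  # keep arcsin in its domain
    return float(np.degrees(np.arcsin(s)))

# Synthetic check: the right channel lags the left by 10 samples.
rng = np.random.default_rng(2)
sig = rng.standard_normal(4096)
print(round(estimate_azimuth(sig, np.roll(sig, 10)), 1))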
Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.
Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei
2016-05-01
Extracting general rules from specific examples is important, as we often face the same challenge presented in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when it was presented through visual presentation of shapes alone (circle-triangle-circle) or auditory presentation of syllables (la-ba-la) alone. However, the mechanisms and constraints of this bimodal learning facilitation are still unknown. In this study, we used the congruency of the audio-visual relation between bimodal stimuli to disentangle the possible sources of facilitation. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in the audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when the audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning depends not only on better statistical probability and redundant sensory information, but also on the relational congruency of the audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.
Audio-Vision: Audio-Visual Interaction in Desktop Multimedia.
ERIC Educational Resources Information Center
Daniels, Lee
Although sophisticated multimedia authoring applications are now available to amateur programmers, the use of audio in these programs has been inadequate. Due to the lack of research on the use of audio in instruction, there are few resources to assist the multimedia producer in using sound effectively and efficiently. This paper addresses the…
SPACE FOR AUDIO-VISUAL LARGE GROUP INSTRUCTION.
ERIC Educational Resources Information Center
GAUSEWITZ, CARL H.
With an increasing interest in and utilization of audio-visual media in education facilities, it is important that standards are established for estimating the space required for viewing these various media. This monograph suggests such standards for viewing areas, viewing angles, seating patterns, screen characteristics and equipment performances…
Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration
Ikumi, Nara; Soto-Faraco, Salvador
2017-01-01
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or, segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529
ERIC Educational Resources Information Center
Sert, Olcay
2009-01-01
This paper uses a combined methodology to analyse the conversations in supplementary audio-visual materials to be implemented in language teaching classrooms in order to enhance the Interactional Competence (IC) of the learners. Based on a corpus of 90,000 words (Coupling Corpus), the author tries to reveal the potential of using TV series in …
ERIC Educational Resources Information Center
Rosenfeld, Lawrence B.
This study sought to answer two questions: Do teachers stereotype students of different ethnic and social class backgrounds when using actual classroom evaluative criteria? What are the relative effects of audio and visual cues in eliciting teachers' stereotypes? Stimulus materials portraying students from different ethnic and social class…
ERIC Educational Resources Information Center
Collelldemont, Eulàlia; Vilanou, Conrad
2017-01-01
Revisions of textual and audio-visual materials reveal the educational vision of Spanish anarchists. Through this research, we have discovered the importance of aesthetic education and of art in general for this political protest movement. By studying the three key historical moments of the movement (1868-1939/1901-1910/1910-1936-1939) we have traced the…
ERIC Educational Resources Information Center
Graf, Klaus-D.
We have established an environment for German-Japanese school education projects using real-time interactive audio-visual distance learning between remote classrooms. For periods of 8-12 weeks, two classes deal with the same subject matter, exchanging materials and results via e-mail and the Internet. On 3 or 4 occasions the classes met on…
ERIC Educational Resources Information Center
Renard, Colette; And Others
Principles of the "St. Cloud" audiovisual language instruction methodology based on "Le Francais fondamental" are presented in this guide for teachers. The material concentrates on course content, methodology, and application--including criteria for selection and gradation of course content, a description of the audiovisual and written language…
Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet
NASA Astrophysics Data System (ADS)
Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay
1999-11-01
The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate within the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, two low-bit-rate bit streams (real-time speech/audio and pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of the bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or to arrange scene changes interactively by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
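For concreteness, the RTP channel mentioned above carries packets with the fixed 12-byte header of RFC 3550; a minimal Python sketch of packetizing one encoded speech/audio access unit (payload type 96 is a typical dynamically assigned value, an assumption here):

import struct

def rtp_packet(payload: bytes, seq: int, timestamp: int,
               ssrc: int = 0x1234ABCD, payload_type: int = 96,
               marker: bool = False) -> bytes:
    version, padding, extension, csrc_count = 2, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (int(marker) << 7) | payload_type
    # Network byte order: 2 flag bytes, 16-bit sequence, 32-bit timestamp, SSRC.
    header = struct.pack("!BBHII", byte0, byte1,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

pkt = rtp_packet(b"\x00" * 20, seq=1, timestamp=160)
print(len(pkt))  # 32 bytes: 12-byte header + 20-byte payload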
The Audio-Visual Marketing Handbook for Independent Schools.
ERIC Educational Resources Information Center
Griffith, Tom
This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…
Audio Visual Technology and the Teaching of Foreign Languages.
ERIC Educational Resources Information Center
Halbig, Michael C.
Skills in comprehending the spoken language source are becoming increasingly important due to the audio-visual orientation of our culture. It would seem natural, therefore, to adjust the learning goals and environment accordingly. The video-cassette machine is an ideal means for creating this learning environment and developing the listening…
ERIC Educational Resources Information Center
ANDERSON, MERLIN
A 1965-66 controlled experiment at the fifth and sixth grade levels was conducted in selected small schools in southern Nevada to determine if successful beginning instruction in a foreign language (Spanish) can be achieved by non-specialist teachers with the use of audio-lingual-visual materials. Instructional materials used were "La Familia…
Computationally Efficient Clustering of Audio-Visual Meeting Data
NASA Astrophysics Data System (ADS)
Hung, Hayley; Friedland, Gerald; Yeo, Chuohao
This chapter presents novel, computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion, using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG-4 video by taking advantage of the processing already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this chapter can be used as a principal component enabling many of the higher-level semantic analysis tasks needed in search, retrieval, and navigation.
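A toy version of the association step (synthetic data; the chapter's actual features and matching procedure are richer): assign each diarized speaker cluster to the participant whose visual activity correlates best with that cluster's speaking pattern.

import numpy as np

rng = np.random.default_rng(3)
T = 600                                    # 1-second analysis windows
speech = rng.integers(0, 2, size=(2, T))   # binary speech activity, 2 clusters
visual = np.vstack([
    speech[0] + 0.3 * rng.standard_normal(T),  # person A moves when cluster 0 talks
    speech[1] + 0.3 * rng.standard_normal(T),  # person B moves when cluster 1 talks
    0.3 * rng.standard_normal(T),              # person C is mostly still
])

for k in range(speech.shape[0]):
    corrs = [np.corrcoef(speech[k], v)[0, 1] for v in visual]
    print(f"speaker cluster {k} -> person {int(np.argmax(corrs))}")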
Audio-visual presentation of information for informed consent for participation in clinical trials.
Ryan, R E; Prictor, M J; McLaughlin, K J; Hill, S J
2008-01-23
Informed consent is a critical component of clinical research. Different methods of presenting information to potential participants of clinical trials may improve the informed consent process. Audio-visual interventions (presented for example on the Internet, DVD, or video cassette) are one such method. To assess the effects of providing audio-visual information alone, or in conjunction with standard forms of information provision, to potential clinical trial participants in the informed consent process, in terms of their satisfaction, understanding and recall of information about the study, level of anxiety and their decision whether or not to participate. We searched: the Cochrane Consumers and Communication Review Group Specialised Register (searched 20 June 2006); the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library, issue 2, 2006; MEDLINE (Ovid) (1966 to June week 1 2006); EMBASE (Ovid) (1988 to 2006 week 24); and other databases. We also searched reference lists of included studies and relevant review articles, and contacted study authors and experts. There were no language restrictions. Randomised and quasi-randomised controlled trials comparing audio-visual information alone, or in conjunction with standard forms of information provision (such as written or oral information as usually employed in the particular service setting), with standard forms of information provision alone, in the informed consent process for clinical trials. Trials involved individuals or their guardians asked to participate in a real (not hypothetical) clinical study. Two authors independently assessed studies for inclusion and extracted data. Due to heterogeneity no meta-analysis was possible; we present the findings in a narrative review. We included 4 trials involving data from 511 people. Studies were set in the USA and Canada. Three were randomised controlled trials (RCTs) and the fourth a quasi-randomised trial. Their quality was mixed and results should be interpreted with caution. Considerable uncertainty remains about the effects of audio-visual interventions, compared with standard forms of information provision (such as written or oral information normally used in the particular setting), for use in the process of obtaining informed consent for clinical trials. Audio-visual interventions did not consistently increase participants' levels of knowledge/understanding (assessed in four studies), although one study showed better retention of knowledge amongst intervention recipients. An audio-visual intervention may transiently increase people's willingness to participate in trials (one study), but this was not sustained at two to four weeks post-intervention. Perceived worth of the trial did not appear to be influenced by an audio-visual intervention (one study), but another study suggested that the quality of information disclosed may be enhanced by an audio-visual intervention. Many relevant outcomes including harms were not measured. The heterogeneity in results may reflect the differences in intervention design, content and delivery, the populations studied and the diverse methods of outcome assessment in included studies. The value of audio-visual interventions for people considering participating in clinical trials remains unclear. 
Evidence is mixed as to whether audio-visual interventions enhance people's knowledge of the trial they are considering entering, and/or the health condition the trial is designed to address; one study showed improved retention of knowledge amongst intervention recipients. The intervention may also have small positive effects on the quality of information disclosed, and may increase willingness to participate in the short-term; however the evidence is weak. There were no data for several primary outcomes, including harms. In the absence of clear results, triallists should continue to explore innovative methods of providing information to potential trial participants. Further research should take the form of high-quality randomised controlled trials, with clear reporting of methods. Studies should conduct content assessment of audio-visual and other innovative interventions for people of differing levels of understanding and education; also for different age and cultural groups. Researchers should assess systematically the effects of different intervention components and delivery characteristics, and should involve consumers in intervention development. Studies should assess additional outcomes relevant to individuals' decisional capacity, using validated tools, including satisfaction; anxiety; and adherence to the subsequent trial protocol.
Visual Image Sensor Organ Replacement: Implementation
NASA Technical Reports Server (NTRS)
Maluf, A. David (Inventor)
2011-01-01
Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
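The claim maps visual image parameters (location, brightness, wavelength, rates of change) onto audio signal parameters. As a rough illustration of that idea, and not the patented method itself, the sketch below renders a grayscale region as a stereo tone in which pitch encodes vertical position, pan encodes horizontal position, and amplitude encodes mean brightness; the mappings, frequency range, and function name are all assumptions.

```python
import numpy as np

def region_to_audio(region, center, sample_rate=44100, duration=0.5,
                    f_min=200.0, f_max=2000.0):
    """Render a grayscale image region as a short stereo tone.

    Illustrative mapping only: vertical position -> pitch, horizontal
    position -> stereo pan, mean brightness -> amplitude.
    """
    h, w = region.shape
    row, col = center                                  # region centre, pixels
    brightness = region.mean() / 255.0

    freq = f_min + (1.0 - row / h) * (f_max - f_min)   # top of image = high pitch
    pan = col / w                                      # 0 = left, 1 = right
    t = np.arange(int(sample_rate * duration)) / sample_rate
    tone = brightness * np.sin(2 * np.pi * freq * t)
    # Equal-power panning into a stereo pair
    left = np.cos(pan * np.pi / 2) * tone
    right = np.sin(pan * np.pi / 2) * tone
    return np.stack([left, right], axis=1)
```

For example, `region_to_audio(np.full((64, 64), 128.0), (8, 56))` yields a half-amplitude tone near the top of the pitch range, panned mostly right.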
ERIC Educational Resources Information Center
Wang, Pei-Yu; Huang, Chung-Kai
2015-01-01
This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…
A scheme for racquet sports video analysis with the combination of audio-visual information
NASA Astrophysics Data System (ADS)
Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua
2005-07-01
As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols including impact (ball hit), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Second, by taking advantage of the temporal relationship between audio and visual signals, we can assign semantic labels, including rally scenes and break scenes, to the scene clusters. Third, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.
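A minimal sketch of the audio-visual fusion step, assuming the audio classifier has already produced time-stamps for detected ball-hit ("impact") symbols and the visual stage has produced shot spans; the function name and the impact-count threshold are illustrative assumptions, not the paper's actual rule.

```python
def label_scenes(shots, impact_times, min_impacts=3):
    """Label each shot 'rally' or 'break' by counting ball-hit audio
    events that fall inside its time span (illustrative threshold)."""
    labels = {}
    for shot_id, (start, end) in shots.items():
        hits = sum(start <= t <= end for t in impact_times)
        labels[shot_id] = "rally" if hits >= min_impacts else "break"
    return labels

# e.g. label_scenes({0: (0.0, 12.5), 1: (12.5, 20.0)},
#                   [1.2, 3.4, 5.1, 7.9, 14.0])  ->  {0: 'rally', 1: 'break'}
```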
Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M
2017-11-01
The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse-rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking of the neural processing of the simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when the respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
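Quantifying SSRs in the spectral domain typically amounts to windowing the EEG epoch, taking an FFT, and reading off the amplitude at each tagging frequency. The sketch below is a generic version of that step using the study's stimulation frequencies, not the authors' exact pipeline.

```python
import numpy as np

def ssr_amplitudes(eeg, fs, tag_freqs=(14.17, 17.0, 3.14, 3.63)):
    """Amplitude spectrum of one EEG channel at the flicker (14.17, 17 Hz)
    and pulse (3.14, 3.63 Hz) tagging frequencies: Hann window, FFT,
    then the bin nearest each target frequency."""
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg * np.hanning(n))) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {f: amp[np.argmin(np.abs(freqs - f))] for f in tag_freqs}
```

Comparing these amplitudes across attended/unattended and synchronous/asynchronous conditions then yields the attention and synchrony effects described above.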
Utterance independent bimodal emotion recognition in spontaneous communication
NASA Astrophysics Data System (ADS)
Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng
2011-12-01
Emotional expressions are sometimes mixed with utterance-related expression in spontaneous face-to-face communication, which creates difficulties for emotion recognition. This article introduces methods for reducing utterance influences in visual parameters for audio-visual-based emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). Utterance reduction is then achieved by finding the residual between the real visual parameters and the predicted utterance-related visual parameters. This article introduces a Fused Hidden Markov Model Inversion method, trained on a neutral-expression audio-visual corpus, to solve this problem. To reduce computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.
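The simplification to a GMM mapping can be pictured as joint-GMM regression: fit a GMM on concatenated audio-visual features from the neutral corpus, then estimate the utterance-related visual parameters from audio alone as the posterior-weighted mix of per-component conditional means. The sketch below is a generic version of that construction; dimensions, component count, and function names are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(audio, visual, n_components=8):
    """Fit a joint GMM on neutral-expression audio-visual feature pairs."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=0)
    gmm.fit(np.hstack([audio, visual]))
    return gmm

def predict_visual(gmm, d_audio, a):
    """MMSE estimate of utterance-related visual parameters from one
    audio feature vector a (standard GMM regression)."""
    post, cond = [], []
    for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_a, mu_v = mu[:d_audio], mu[d_audio:]
        c_aa, c_va = cov[:d_audio, :d_audio], cov[d_audio:, :d_audio]
        post.append(w * multivariate_normal.pdf(a, mu_a, c_aa))
        cond.append(mu_v + c_va @ np.linalg.solve(c_aa, a - mu_a))
    post = np.asarray(post) / np.sum(post)
    return np.sum(post[:, None] * np.asarray(cond), axis=0)

# Emotion cue = residual between observed and predicted visual parameters:
#   residual = v_observed - predict_visual(gmm, d_audio, a_observed)
```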
Integrated approach to multimodal media content analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-12-01
In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed captions. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective, and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
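A compact sketch of the pipeline's structure, with deliberately simple stand-ins for each stage (frame-difference shot detection, an energy/zero-crossing audio-type heuristic); the real system's segmentation and classification are far richer, so treat every threshold here as an assumption.

```python
import numpy as np

def detect_shots(frames, threshold=30.0):
    """Split an image sequence (n_frames, h, w) into shots wherever the
    mean absolute frame difference exceeds a threshold."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    cuts = [0] + [i + 1 for i, d in enumerate(diffs) if d > threshold]
    return list(zip(cuts, cuts[1:] + [len(frames)]))   # (start, end) pairs

def classify_audio(segment):
    """Toy basic-audio-type classifier via energy and zero-crossing rate."""
    if segment.size == 0 or np.mean(segment ** 2) < 1e-4:
        return "silence"
    zcr = np.mean(np.abs(np.diff(np.sign(segment)))) / 2
    return "speech" if zcr > 0.05 else "music"

def build_index(frames, audio, fs, fps=25.0):
    """Index each shot with a middle keyframe and its span's audio type."""
    index = []
    for start, end in detect_shots(frames):
        a0, a1 = int(start / fps * fs), int(end / fps * fs)
        index.append({"shot": (start, end),
                      "keyframe": frames[(start + end) // 2],
                      "audio_type": classify_audio(audio[a0:a1])})
    return index
```

Keyword detection from the caption stream would add a third, textual key to each index entry.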
The use of ambient audio to increase safety and immersion in location-based games
NASA Astrophysics Data System (ADS)
Kurczak, John Jason
The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user's location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players' ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks. We found that ambient audio was able to significantly increase players' safety and sense of immersion compared to a visual interface, while players performed significantly better at the game tasks when using the visual interface. This makes ambient audio a legitimate alternative to visual interfaces for mobile users when safety and immersion are a priority.
Audio-Visual Speech Perception Is Special
ERIC Educational Resources Information Center
Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.
2005-01-01
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…
Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt
ERIC Educational Resources Information Center
Olube, Friday K.
2015-01-01
The purpose of this study is to examine primary school children's response on the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…
ERIC Educational Resources Information Center
Rozga, Agata; King, Tricia Z.; Vuduc, Richard W.; Robins, Diana L.
2013-01-01
We examined facial electromyography (fEMG) activity to dynamic, audio-visual emotional displays in individuals with autism spectrum disorders (ASD) and typically developing (TD) individuals. Participants viewed clips of happy, angry, and fearful displays that contained both facial expression and affective prosody while surface electrodes measured…
Technical Considerations in the Delivery of Audio-Visual Course Content.
ERIC Educational Resources Information Center
Lightfoot, Jay M.
2002-01-01
In an attempt to provide students with the benefit of the latest technology, some instructors include multimedia content on their class Web sites. This article introduces the basic terms and concepts needed to understand the multimedia domain. Provides a brief tutorial designed to help instructors create good, consistent audio-visual content. (AEF)
Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues
ERIC Educational Resources Information Center
Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.
2009-01-01
Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…
Infant Perception of Audio-Visual Speech Synchrony
ERIC Educational Resources Information Center
Lewkowicz, David J.
2010-01-01
Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…
The Changing Role of the Educational Video in Higher Distance Education
ERIC Educational Resources Information Center
Laaser, Wolfram; Toloza, Eduardo A.
2017-01-01
The article argues that the ongoing usage of audio visual media is falling behind in terms of educational quality compared to prior achievements in the history of distance education. After reviewing some important steps and experiences of audio visual digital media development, we analyse predominant presentation formats on the Web. Special focus…
The James Madison Wood Quadrangle, Stephens College, Columbia, Missouri.
ERIC Educational Resources Information Center
McBride, Wilma
The James Madison Wood Quadrangle at Stephens College is a complex of buildings designed to make possible a flexible educational environment. A library houses a great variety of audio-visual resources and books. A communication center incorporates television and radio facilities, a film production studio, and audio-visual facilities. The learning…
Department of Defense In-House RDT&E Activities
1982-10-30
...and large no. of tests at any one time. Several vehicle test courses and extensive cross-country terrain ranges are available. 500,000 acre isolated impact... treatment and prevention, metabolism and nutritional effects of burn injury in soldiers, infection and microbiologic surveillance of troops with thermal... electronics, human factors, chemical, microbiological, materials, soils, audio-visual, and data analysis. Other test resources consist of firing
ERIC Educational Resources Information Center
Bryce, C. F. A.; Stewart, A. M.
A brief review of the characteristics of computer assisted instruction and the attributes of audiovisual media introduces this report on a project designed to improve the effectiveness of computer assisted learning through the incorporation of audiovisual materials. A discussion of the implications of research findings on the design and layout of…
ERIC Educational Resources Information Center
Smith, Anita P.; And Others
In a project designed to train customer service personnel in improved methods of assisting the physically disabled, audio-visual training materials were developed and presented during 2-week courses involving 1,058 employees at transportation, hotel/restaurant, and entertainment centers in 25 cities. The participants judged the training program…
ERIC Educational Resources Information Center
Peters, Richard
Students must clearly understand that every living thing on earth exists within the context of a system of interlocking dependency. Through the use of audio-visual materials, books, magazines, newspapers, and special television reports, as well as direct interaction with people, places, and things, students begin to develop a cognitive frame of…
The Audio-Visual Equipment Directory. Seventeenth Edition.
ERIC Educational Resources Information Center
Herickes, Sally, Ed.
The following types of audiovisual equipment are catalogued: 8 mm. and 16 mm. motion picture projectors, filmstrip and sound filmstrip projectors, slide projectors, random access projection equipment, opaque, overhead, and micro-projectors, record players, special purpose projection equipment, audio tape recorders and players, audio tape…
ERIC Educational Resources Information Center
Baldwin, Thomas F.
Man seems unable to retain different information from different senses or channels simultaneously; one channel gains full attention. However, it is hypothesized that if the message elements arriving simultaneously from audio and visual channels are redundant, man will retain the information. An attempt was made to measure redundancy in the audio…
The Effects of an Audio-Visual Training Program in Dyslexic Children
ERIC Educational Resources Information Center
Magnan, Annie; Ecalle, Jean; Veuillet, Evelyne; Collet, Lionel
2004-01-01
A research project was conducted in order to investigate the usefulness of intensive audio-visual training administered to children with dyslexia involving daily voicing exercises. In this study, the children received such voicing training (experimental group) for 30 min a day, 4 days a week, over 5 weeks. They were assessed on a reading task…
Tracing Trajectories of Audio-Visual Learning in the Infant Brain
ERIC Educational Resources Information Center
Kersey, Alyssa J.; Emberson, Lauren L.
2017-01-01
Although infants begin learning about their environment before they are born, little is known about how the infant brain changes during learning. Here, we take the initial steps in documenting how the neural responses in the brain change as infants learn to associate audio and visual stimuli. Using functional near-infrared spectroscopy (fNRIS) to…
Creative Description: Audio Describing Artistic Films for Individuals with Visual Impairments
ERIC Educational Resources Information Center
Walczak, Agnieszka
2017-01-01
Audio description is a service aimed at widening accessibility to visual media such as film and television for all individuals, especially for people with sensory disabilities. It offers people who are blind or have low vision "a verbal screen onto the world" (Díaz Cintas, Orero, & Remael, 2007, p. 13). The standard rule when…
Audio-Visual Aid in Teaching "Fatty Liver"
ERIC Educational Resources Information Center
Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha
2016-01-01
Use of audio-visual tools to aid in medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…
Constructing a Streaming Video-Based Learning Forum for Collaborative Learning
ERIC Educational Resources Information Center
Chang, Chih-Kai
2004-01-01
As web-based courses using videos have become popular in recent years, the issue of managing audio-visual aids has become pertinent. Generally, the contents of audio-visual aids may include a lecture, an interview, a report, or an experiment, which may be transformed into a streaming format capable of making the quality of Internet-based videos…
ERIC Educational Resources Information Center
Choi-Lundberg, Derek L.; Cuellar, William A.; Williams, Anne-Marie M.
2016-01-01
In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection…
The Audio-Visual Equipment Directory; Twenty-Second Edition, 1976-1977.
ERIC Educational Resources Information Center
Herickes, Sally, Ed.
Over 2,000 currently available items are listed in the 1976-1977 Audio-Visual Equipment Directory with specifications on price, model, weight, capacity, accessories, and technical details. Charts for screen size, lists of film and tape running times, an index to industry trade names, and a directory of equipment manufacturers are also provided.…
Virtual environment display for a 3D audio room simulation
NASA Technical Reports Server (NTRS)
Chapin, William L.; Foster, Scott H.
1992-01-01
The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations.
Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man
Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.
2017-01-01
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295
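A standard companion analysis for facilitation results like (2) and (3) is Miller's race-model inequality, which checks whether redundant-target speed-ups exceed what independent parallel channels could produce. The sketch below implements that generic test; it is a common analysis for this paradigm, not necessarily the one the authors report.

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, probs=np.arange(0.05, 1.0, 0.05)):
    """Miller's inequality: under separate activation,
    P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t).
    Returns CDF_av - min(CDF_a + CDF_v, 1) at quantiles of the AV RTs;
    positive values indicate genuine multisensory integration."""
    t = np.quantile(rt_av, probs)
    def cdf(rts):
        return np.searchsorted(np.sort(rts), t, side="right") / len(rts)
    return cdf(rt_av) - np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
```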
Young children's recall and reconstruction of audio and audiovisual narratives.
Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C
1986-08-01
It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.
7 CFR 1.168 - Procedure for hearing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... file with the Hearing Clerk a notice stating whether the hearing will be conducted by telephone, audio... personal attendance of any individual expected to attend the hearing rather than by audio-visual... basis for the motion and the circumstances that require the hearing to be conducted other than by audio...
Automatic Detection and Classification of Audio Events for Road Surveillance Applications.
Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine
2018-06-06
This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with an aim to improve safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy rate when compared against methods that use individual temporal and spectral features.
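As a concrete, much simplified stand-in for the paper's feature set, the sketch below combines a time-domain, a frequency-domain, and a spectro-temporal descriptor per clip; the QTFD features the authors actually use are richer, so these four are assumptions for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

def frame_features(x, fs):
    """Zero-crossing rate, energy, mean spectral centroid, and spectral
    flux for one audio clip; inputs to a downstream event classifier."""
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2
    energy = np.mean(x ** 2)
    f, _, S = spectrogram(x, fs=fs, nperseg=256)
    centroid = np.sum(f[:, None] * S, axis=0) / (np.sum(S, axis=0) + 1e-12)
    flux = np.mean(np.abs(np.diff(S, axis=1)))
    return np.array([zcr, energy, centroid.mean(), flux])
```

Feature vectors like these would then be fed to any standard classifier (e.g. an SVM) trained on crash, skid, and background clips.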
ERIC Educational Resources Information Center
Sayles, Ellen L.
A study was made to find out the average amount of time that teachers in South Bend, Indiana spent designing audiovisual aids and to determine their awareness of the availability of audiovisual production classes. A questionnaire was sent to 30% of the teachers of grades 1-6 asking the amount of time they normally spent producing audiovisual…
Detection of emetic activity in the cat by monitoring venous pressure and audio signals
NASA Technical Reports Server (NTRS)
Nagahara, A.; Fox, Robert A.; Daunton, Nancy G.; Elfar, S.
1991-01-01
To investigate the use of audio signals as a simple, noninvasive measure of emetic activity, the relationship between the somatic events and sounds associated with retching and vomiting was studied. Thoracic venous pressure obtained from an implanted external jugular catheter was shown to provide a precise measure of the somatic events associated with retching and vomiting. Changes in thoracic venous pressure, monitored through an indwelling external jugular catheter, were compared with audio signals obtained from a microphone located above the animal in a test chamber. In addition, two independent observers visually monitored emetic episodes. Retching and vomiting were induced by injection of xylazine (0.66 mg/kg s.c.), or by motion. A unique audio signal at a frequency of approximately 250 Hz is produced at the time of the negative thoracic venous pressure change associated with retching. Sounds with higher frequencies (around 2500 Hz) occur in conjunction with the positive pressure changes associated with vomiting. These specific signals could be discriminated reliably by individuals reviewing the audio recordings of the sessions. Retching and those emetic episodes associated with positive venous pressure changes were detected accurately by audio monitoring, with 90 percent of retches and 100 percent of emetic episodes correctly identified. Retching was detected more accurately (p < .05) by audio monitoring than by direct visual observation. However, with visual observation a few incidents in which stomach contents were expelled in the absence of positive pressure changes or detectable sounds were identified. These data suggest that in emetic situations, the expulsion of stomach contents may be accomplished by more than one neuromuscular system and that audio signals can be used to detect emetic episodes associated with thoracic venous pressure changes.
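The reported signatures (energy near 250 Hz for retching, near 2500 Hz for vomiting) suggest a simple band-energy detector. The sketch below is a toy illustration of that idea; the band edges and decision ratio are assumptions, not values derived from the study.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_energy(x, fs, lo, hi):
    """Mean energy of x within [lo, hi] Hz (4th-order Butterworth)."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return np.mean(sosfilt(sos, x) ** 2)

def classify_emetic_sound(x, fs, ratio=2.0):
    """Toy detector: compare energy near the ~250 Hz retch signature
    with energy near the ~2500 Hz vomit signature (bands assumed)."""
    retch = band_energy(x, fs, 150, 350)
    vomit = band_energy(x, fs, 2000, 3000)
    return "vomit" if vomit > ratio * retch else "retch"
```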
Talker variability in audio-visual speech perception
Heald, Shannon L. M.; Nusbaum, Howard C.
2014-01-01
A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener a change in talker has occurred. PMID:25076919
NASA Astrophysics Data System (ADS)
George, Rohini
Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS facts & figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available that can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiratory irregularity. The rationale of this thesis was to study the improvement in the regularity of respiratory motion achieved by breathing coaching for lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory-gated radiotherapy. It was also observed that duty cycles below 30% showed insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. In modeling the respiratory cycles, it was found that cosine and fourth-power cosine (cos⁴) models had the best correlation with individual respiratory cycles. The overall respiratory motion probability distribution function could be approximated by a normal distribution. A statistical analysis was also performed to investigate whether a patient's physical, tumor, or general characteristics played a role in identifying whether he or she responded positively to the coaching type, as signified by a reduction in the variability of respiratory motion. The analysis demonstrated that, although some characteristics such as disease type and dose per fraction were significant in the time-independent analysis, there were no significant time trends in the inter-session or intra-session analyses. Based on patient feedback on the existing audio-visual biofeedback system used for the study, and on research into other feedback systems, an improved audio-visual biofeedback system was designed. It is hoped that widespread clinical implementation of audio-visual biofeedback for radiotherapy will improve the accuracy of lung cancer radiotherapy.
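The cycle-fitting step can be pictured with a Lujan-style parameterisation, z(t) = b - a·cos^power(πt/τ - φ), fit per cycle with power = 1 (cosine) or power = 4 (fourth-power cosine). The parameterisation and starting values below are assumptions in the spirit of the thesis, not its exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def cos_model(t, a, b, tau, phi, power=1):
    """Respiratory position model z(t) = b - a*cos^power(pi*t/tau - phi)."""
    return b - a * np.cos(np.pi * t / tau - phi) ** power

def fit_cycle(t, z, power=1):
    """Fit one breathing cycle; returns (amplitude, baseline, timescale,
    phase). power=1 and power=4 reproduce the two compared models."""
    p0 = [np.ptp(z) / 2.0, z.max(), np.ptp(t), 0.0]   # rough initial guess
    popt, _ = curve_fit(
        lambda t, a, b, tau, phi: cos_model(t, a, b, tau, phi, power),
        t, z, p0=p0, maxfev=10000)
    return popt
```

Comparing the correlation between each fitted model and the measured trace, cycle by cycle, is what singles out the cosine and cos⁴ forms.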
ERIC Educational Resources Information Center
Celasin, Cenk
2013-01-01
In this qualitative study, musical elements in mass media and internet mostly intended to children and adolescents, were examined in the context of the dimensions of the social development of them in a general approach, through scientific literature and written, audio, visual and audio-visual documents regarding mass media and internet. Purpose of…
Perception of Emotion: Differences in Mode of Presentation, Sex of Perceiver, and Race of Expressor.
ERIC Educational Resources Information Center
Kozel, Nicholas J.; Gitter, A. George
A 2 x 2 x 4 factorial design was utilized to investigate the effects of sex of perceiver, race of expressor (Negro and White), and mode of presentation of stimuli (audio and visual, visual only, audio only, and still pictures) on perception of emotion (POE). Perception of seven emotions (anger, happiness, surprise, fear, disgust, pain, and…
The Textalk, A Uniquely Simple, Versatile Type of Audio-Visual Module: How to Prepare and Use It.
ERIC Educational Resources Information Center
Thomas, D. Des S.; Habowsky, J. E. J.
Textbooks emphasizing visual elements in exposition can be enriched using crisp, concise, audio-taped commentaries to focus attention on essential points in each illustration. These text aids, packaged in the convenient form of cassettes (usually one per chapter), have a number of obvious advantages: (1) any teacher can prepare them; (2) they are…
ERIC Educational Resources Information Center
Sediyani, Tri; Yufiarti; Hadi, Eko
2017-01-01
This study aims to develop a model of learning by integrating multimedia and audio-visual self-reflective learners. This multimedia was developed as a tool for prospective teachers as learners in the education of children with special needs to reflect on their teaching competencies before entering the world of education. Research methods to…
2006-10-01
Pronged Approach for Improved Data Understanding: 3-D Visualization, Use of Gaming Techniques, and Intelligent Advisory Agents. In Visualising Network...University at the start of each fall semester, when numerous new students arrive on campus and begin downloading extensive amounts of audio and...SIGGRAPH ’92 • C. Cruz-Neira, D.J. Sandin, T.A. DeFanti, R.V. Kenyon and J.C. Hart, "The CAVE: Audio Visual Experience Automatic Virtual Environment
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2011 CFR
2011-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2012 CFR
2012-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2013 CFR
2013-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2010 CFR
2010-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2014 CFR
2014-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
A sLORETA study for gaze-independent BCI speller.
Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming
2017-07-01
EEG-based BCI (brain-computer interface) spellers, especially gaze-independent spellers, have become a hot topic in recent years. They provide a direct, non-muscular spelling device for people with severe motor impairments and limited gaze movement. The brain must conduct both stimulus-driven and stimulus-related attention in the rapidly presented paradigms used by such BCI spellers. Few researchers have studied the mechanism of the brain's response to such rapidly presented BCI applications. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of visual and auditory stimuli in the audio-visual combined paradigm. Both contribute to the activation of brain regions, with visual stimuli being the predominant stimuli. Brain regions related to visual stimuli were mainly located in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as of the mechanism of rapidly presented stimuli.
Designing sound and visual components for enhancement of urban soundscapes.
Hong, Joo Young; Jeon, Jin Yong
2013-09-01
The aim of this study is to investigate the effect of audio-visual components on environmental quality to improve the soundscape. Natural sounds combined with road traffic noise and visual components in urban streets were evaluated through laboratory experiments. Waterfall and stream water sounds, as well as bird sounds, were selected to enhance the soundscape. Sixteen photomontages of a streetscape were constructed in combination with two types of water features and three types of vegetation, which were chosen as positive visual components. The experiments consisted of audio-only, visual-only, and audio-visual conditions. The preferences and environmental qualities of the stimuli were evaluated with a numerical scale and 12 pairs of adjectives, respectively. The results showed that bird sounds were the most preferred among the natural sounds, while the sound of falling water was found to degrade the soundscape quality when the road traffic noise level was high. The visual effects of vegetation on aesthetic preference were significant, but those of water features were relatively small. The perceptual dimensions of the environment were found to vary with the noise level. In particular, the acoustic comfort factor related to soundscape quality considerably influenced preference for the overall environment at higher levels of road traffic noise.
Listeners' expectation of room acoustical parameters based on visual cues
NASA Astrophysics Data System (ADS)
Valente, Daniel L.
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment of the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues altered the perceived events of the acoustic environment. These included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. The visual makeup of the performer comprised: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (an actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected by having participants match direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV.
Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities interact in distinct ways. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. Subjective results of the experiments are presented along with objective measurements for verification.
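Conjoint analysis of this kind is often reduced to ordinary least squares on a dummy-coded design matrix, with the fitted coefficients read as part-worth utilities of each audio-visual factor level. A generic sketch follows, assuming ratings on one response criterion (e.g. Congruency); it is not the thesis's exact estimation procedure.

```python
import numpy as np

def part_worths(design, ratings):
    """OLS part-worth utilities: `design` is a dummy-coded (n_trials x
    n_levels) matrix of audio-visual factor levels, `ratings` the
    responses; returns utilities relative to each factor's baseline."""
    X = np.column_stack([np.ones(len(design)), design])
    beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return beta[1:]
```

The relative magnitude of a factor is then the spread of its levels' part-worths, which is how conjoint designs rank audio against visual influences.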
Liu, Hong; Zhang, Gaoyan; Liu, Baolin
2017-04-01
In the Chinese language, a polyphone is a kind of special character that has more than one pronunciation, with each pronunciation corresponding to a different meaning. Here, we aimed to reveal the cognitive processing of audio-visual information integration of polyphones in a sentence context using the event-related potential (ERP) method. Sentences ending with polyphones were presented to subjects simultaneously in both an auditory and a visual modality. Four experimental conditions were set in which the visual presentations were the same, but the pronunciations of the polyphones were: the correct pronunciation; another pronunciation of the polyphone; a semantically appropriate pronunciation but not the pronunciation of the polyphone; or a semantically inappropriate pronunciation but also not the pronunciation of the polyphone. The behavioral results demonstrated significant differences in response accuracies when judging the semantic meanings of the audio-visual sentences, which reflected the different demands on cognitive resources. The ERP results showed that in the early stage, abnormal pronunciations were represented by the amplitude of the P200 component. Interestingly, because the phonological information mediated access to the lexical semantics, the amplitude and latency of the N400 component changed linearly across conditions, which may reflect the gradually increased semantic mismatch in the four conditions when integrating the auditory pronunciation with the visual information. Moreover, the amplitude of the late positive shift (LPS) showed a significant correlation with the behavioral response accuracies, demonstrating that the LPS component reveals the demand of cognitive resources for monitoring and resolving semantic conflicts when integrating the audio-visual information.
ERIC Educational Resources Information Center
Frankel, Lois; Brownstein, Beth; Soiffer, Neil
2017-01-01
This report describes the pilot conducted in the final phase of a project, Expanding Audio Access to Mathematics Expressions by Students With Visual Impairments via MathML, to provide easy-to-use tools for authoring and rendering secondary-school algebra-level math expressions in synthesized speech that is useful for students with blindness or low…
ERIC Educational Resources Information Center
Lewkowicz, David J.
2003-01-01
Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…
ERIC Educational Resources Information Center
Caldera-Serrano, Jorge
2008-01-01
This article attempts to offer an overview of the current changes that are being experienced in the management of audio-visual documentation and those that can be forecast in the future as a result of the migration from analogue to digital information. For this purpose the documentary chain will be used as a basis to analyse individually the tasks…
ERIC Educational Resources Information Center
Ibáñez Moreno, Ana; Vermeulen, Anna; Jordano, Maria
2016-01-01
During the last decades of the 20th century, audiovisual products began to be audio described in order to make them accessible to blind and visually impaired people (Benecke, 2004). This means that visual information is orally described in the gaps between dialogues. In order to meet the wishes of the so-called On Demand (OD) generation that wants…
ERIC Educational Resources Information Center
Avance, Lyonel D.; Carr, Dorothy B.
Presented is the final report of a project to develop and field test audio and visual media to accompany developmentally sequenced activities appropriate for a physical education program for handicapped children from preschool through high school. Brief sections cover the following: the purposes and accomplishments of the project; the population…
Engel, Annerose; Bangert, Marc; Horbank, David; Hijmans, Brenda S; Wilkens, Katharina; Keller, Peter E; Keysers, Christian
2012-11-01
To investigate the cross-modal transfer of movement patterns necessary to perform melodies on the piano, 22 non-musicians learned to play short sequences on a piano keyboard by (1) merely listening and replaying (vision of own fingers occluded) or (2) merely observing silent finger movements and replaying (on a silent keyboard). After training, participants recognized with above chance accuracy (1) audio-motor learned sequences upon visual presentation (89±17%), and (2) visuo-motor learned sequences upon auditory presentation (77±22%). The recognition rates for visual presentation significantly exceeded those for auditory presentation (p<.05). fMRI revealed that observing finger movements corresponding to audio-motor trained melodies is associated with stronger activation in the left rolandic operculum than observing untrained sequences. This region was also involved in silent execution of sequences, suggesting that a link to motor representations may play a role in cross-modal transfer from audio-motor training condition to visual recognition. No significant differences in brain activity were found during listening to visuo-motor trained compared to untrained melodies. Cross-modal transfer was stronger from the audio-motor training condition to visual recognition and this is discussed in relation to the fact that non-musicians are familiar with how their finger movements look (motor-to-vision transformation), but not with how they sound on a piano (motor-to-sound transformation). Copyright © 2012 Elsevier Inc. All rights reserved.
Alm, Magnus; Behne, Dawn
2015-01-01
Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20–30 years) and middle-aged adults (50–60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood, recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females’ AV perceptual strategy toward more visually dominated responses. PMID:26236274
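One widely used way to score AV benefit normalizes the audio-visual gain by the headroom left by auditory-only performance. The one-liner below shows that conventional formula as an assumption for illustration; the paper defines its own benefit and visual-influence measures.

```python
def av_benefit(acc_a, acc_av):
    """Normalised AV gain (AV - A) / (1 - A); accuracies in [0, 1].
    E.g. av_benefit(0.40, 0.70) == 0.5: half the available headroom."""
    return (acc_av - acc_a) / (1.0 - acc_a)
```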
Deshpande, Sushmita; Rajpurohit, Ladusingh; Kokka, Vivian Varghese
2017-01-01
Background: Visually impaired people encounter numerous challenges in their daily lives, which makes it cumbersome to pay special attention to oral health needs. Furthermore, there is little knowledge about oral health practices among caretakers and visually impaired individuals, due to which oral health is often neglected compared to general health. Hence, there was a need to educate visually challenged individuals about oral hygiene practices in a customized format, so that brushing techniques could be comprehended as well as possible. Materials and Methods: The present study was a randomized control trial of sixty visually impaired adolescents who were divided into three groups of 20 each. In Group 1, Braille was used; in Group 2, the audio-tactile performance (ATP) technique; and in Group 3, a combination of both methods was used to teach tooth brushing as part of oral health education. Pre- and post-intervention plaque index scores (Silness and Loe, 1967) were calculated and tabulated for statistical analysis. Results: The post-intervention mean plaque index score increased in Group 1 from 29.45 to 42.98, whereas the mean plaque scores decreased from 30.83 to 29.9 in Group 2 and from 30.23 to 18.73 in Group 3. Intergroup comparison of post-intervention plaque index scores using Kruskal–Wallis and ANOVA showed significant differences among the three study groups. Conclusion: The combination of Braille and the ATP technique of health education served as the most effective medium for teaching oral hygiene methods to visually impaired adolescents. PMID:29386797
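The group comparison named above maps directly onto standard SciPy calls. The snippet below runs both tests on simulated scores centred on the reported group means (placeholder data, not the study's raw scores).

```python
import numpy as np
from scipy.stats import kruskal, f_oneway

rng = np.random.default_rng(0)
# Placeholder post-intervention plaque scores per group of 20, centred on
# the reported means (Braille ~43, ATP ~30, combined ~19).
braille  = rng.normal(43.0, 5.0, 20)
atp      = rng.normal(30.0, 5.0, 20)
combined = rng.normal(19.0, 5.0, 20)

print(kruskal(braille, atp, combined))    # non-parametric comparison
print(f_oneway(braille, atp, combined))   # one-way ANOVA counterpart
```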
Arctic audiology: trials, tribulations, and occasional successes.
Ilecki, H J; Baxter, J D
1981-08-01
Efforts to provide audiologic services to the Inuit population of the Baffin Zone have frequently resulted in unexpected problems and frustrations. Often these are the product of two very different cultures coming together. This paper reviews the authors' personal experiences in this regard, with special reference to hearing aid attitudes, noise exposure, and the making of instructional audio-visual materials. A brief description of a recently revised hearing conservation program at Frobisher Bay is provided.
ERIC Educational Resources Information Center
Edwards, Ronald K., And Others
This study dealt with two skill courses: business machines and beginning typewriting. The control groups received instruction by the traditional method. The experimental groups attended an open laboratory at any time convenient to them to receive their instruction. The groups were compared on the basis of identical performance tests. Materials to…
ERIC Educational Resources Information Center
Sewell, Edward H., Jr.
This study investigates the effects of cartoon illustrations on female and male college student comprehension and evaluation of information presented in several combinations of print, audio, and visual formats. Subjects were assigned to one of five treatment groups: printed text, printed text with cartoons, audiovisual presentations, audio only…
Podcasting by Synchronising PowerPoint and Voice: What Are the Pedagogical Benefits?
ERIC Educational Resources Information Center
Griffin, Darren K.; Mitchell, David; Thompson, Simon J.
2009-01-01
The purpose of this study was to investigate the efficacy of audio-visual synchrony in podcasting and its possible pedagogical benefits. "Synchrony" in this study refers to the simultaneous playback of audio and video data streams, so that the transitions between presentation slides occur at "lecturer chosen" points in the audio commentary.…
ERIC Educational Resources Information Center
Anderson, Gerald D.; Olson, David B.
The document presents the transcript of the audio narrative portion of approximately 100 interviews with first and second generation Scandinavian immigrants to the United States. The document is intended for use by secondary school classroom teachers as they develop and implement educational programs related to the Scandinavian heritage in…
Code of Federal Regulations, 2012 CFR
2012-10-01
... alerters shall provide an audio alarm upon expiration of the timing cycle interval. An alerter on a... indication to the operator at least five seconds prior to an audio alarm. The visual indication on an alerter...
The effect of context and audio-visual modality on emotions elicited by a musical performance
Coutinho, Eduardo; Scherer, Klaus R.
2016-01-01
In this work, we compared emotions induced by the same performance of Schubert Lieder during a live concert and in a laboratory viewing/listening setting to determine the extent to which laboratory research on affective reactions to music approximates real listening conditions in dedicated performances. We measured emotions experienced by volunteer members of an audience that attended a Lieder recital in a church (Context 1) and emotional reactions to an audio-video recording of the same performance in a university lecture hall (Context 2). Three groups of participants were exposed to three presentation versions in Context 2: (1) an audio-visual recording, (2) an audio-only recording, and (3) a video-only recording. Participants achieved statistically higher levels of emotional convergence in the live performance than in the laboratory context, and the experience of particular emotions was determined by complex interactions between auditory and visual cues in the performance. This study demonstrates the contribution of the performance setting and the performers’ appearance and nonverbal expression to emotion induction by music, encouraging further systematic research into the factors involved. PMID:28781419
The nature of sound and vision in relation to colour
NASA Astrophysics Data System (ADS)
Greated, Marianne
2011-03-01
The increasing role of sound within the visual arts context and the trend in postmodernism towards interdisciplinary artworks has demanded a heightened awareness of the audio-visual. This paper explores some of the fundamental physical properties of both sound and colour, their similarities and differences and how the audio and visual senses are related. Ways in which soundscapes have been combined with paintings in exhibitions by the author will be used to illustrate how the two media can be combined to enhance the overall artistic experience.
Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I
2017-06-01
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.
Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.
2012-01-01
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio-visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities interact in distinct ways. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585
ERIC Educational Resources Information Center
Jongbloed, Harry J. L.
As the fourth part of a comparative study on the administration of audiovisual services in advanced and developing countries, this UNESCO-funded study reports on the African countries of Cameroun, Republic of Central Africa, Dahomey, Gabon, Ghana, Kenya, Libya, Mali, Nigeria, Rwanda, Senegal, Swaziland, Tunisia, Upper Volta and Zambia. Information…
Hamdan, Jihad M; Al-Hawamdeh, Rose Fowler
2018-04-10
This empirical study examines the extent to which 'face', i.e. audio-visual dialogue, affects the listening comprehension of advanced Jordanian EFL learners in a TOEFL-like test, as opposed to its absence (i.e. a purely audio test), which is the current norm in many English language proficiency tests, including but not limited to TOEFL iBT, TOEIC and academic IELTS. In an online experiment, 60 Jordanian postgraduate linguistics and English literature students (advanced EFL learners) at the University of Jordan sat for two listening tests (simulating English proficiency tests): one purely audio [i.e. without any face (including any visuals such as motion, as well as still pictures)], and one audiovisual/video. The results clearly show that the inclusion of visuals enhances subjects' performance in listening tests. It is concluded that since the aim of English proficiency tests such as TOEFL iBT is to qualify or disqualify subjects to work and study in western English-speaking countries, the exclusion of visuals is unfounded. In actuality, most natural interaction includes visibility of the interlocutors involved, and hence test takers who sit for purely audio proficiency tests in English or any other language are placed at a disadvantage.
Let Their Voices Be Heard! Building a Multicultural Audio Collection.
ERIC Educational Resources Information Center
Tucker, Judith Cook
1992-01-01
Discusses building a multicultural audio collection for a library. Gives some guidelines about selecting materials that really represent different cultures. Audio materials that are considered fall roughly into the categories of children's stories, didactic materials, oral histories, poetry and folktales, and music. The goal is an authentic…
Effect of Divided Attention on Children's Rhythmic Response
ERIC Educational Resources Information Center
Thomas, Jerry R.; Stratton, Richard K.
1977-01-01
Audio and visual interference did not significantly impair rhythmic response levels of second- and fourth-grade boys as measured by space error scores, though audio input resulted in significantly less consistent temporal performance. (MB)
Xiao, Y; MacKenzie, C; Orasanu, J; Spencer, R; Rahman, A; Gunawardane, V
1999-01-01
To determine what information sources are used during a remote diagnosis task. Experienced trauma care providers viewed segments of videotaped initial trauma patient resuscitation and airway management. Experiment 1 collected responses from anesthesiologists to probing questions during and after the presentation of recorded video materials. Experiment 2 collected the responses from three types of care providers (anesthesiologists, nurses, and surgeons). Written and verbal responses were scored according to detection of critical events in video materials and categorized according to their content. Experiment 3 collected visual scanning data using an eyetracker during the viewing of recorded video materials from the three types of care providers. Eye-gaze data were analyzed in terms of focus on various parts of the videotaped materials. Care providers were found to be unable to detect several critical events. The three groups of subjects studied (anesthesiologists, nurses, and surgeons) focused on different aspects of videotaped materials. When the remote events and activities are multidisciplinary and rapidly changing, experts linked with audio-video-data connections may encounter difficulties in comprehending remote activities, and their information usage may be biased. Special training is needed for the remote decision-maker to appreciate tasks outside his or her speciality and beyond the boundaries of traditional divisions of labor.
Podcasting for Online Learners with Vision Loss: A Descriptive Study
ERIC Educational Resources Information Center
Whetstone, Kimarie W.
2013-01-01
The current uses of audio podcasts, the accessibility of audio podcasts, and the benefits of using audio podcasts in U.S. online college courses as a form of access to visual course content that would be otherwise unavailable to learners with vision loss had not been examined and described. To provide instructional designers with a firm basis for…
Description of Audio-Visual Recording Equipment and Method of Installation for Pilot Training.
ERIC Educational Resources Information Center
Neese, James A.
The Audio-Video Recorder System was developed to evaluate the effectiveness of in-flight audio/video recording as a pilot training technique for the U.S. Air Force Pilot Training Program. It will be used to gather background and performance data for an experimental program. A detailed description of the system is presented and construction and…
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Acoustic Calibration of the Exterior Effects Room at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Faller, Kenneth J., II; Rizzi, Stephen A.; Klos, Jacob; Chapin, William L.; Surucu, Fahri; Aumann, Aric R.
2010-01-01
The Exterior Effects Room (EER) at the NASA Langley Research Center is a 39-seat auditorium built for psychoacoustic studies of aircraft community noise. The original reproduction system employed monaural playback and hence lacked sound localization capability. In an effort to more closely recreate field test conditions, a significant upgrade was undertaken to allow simulation of a three-dimensional audio and visual environment. The 3D audio system consists of 27 mid and high frequency satellite speakers and 4 subwoofers, driven by a real-time audio server running an implementation of Vector Base Amplitude Panning. The audio server is part of a larger simulation system, which controls the audio and visual presentation of recorded and synthesized aircraft flyovers. The focus of this work is on the calibration of the 3D audio system, including gains used in the amplitude panning algorithm, speaker equalization, and absolute gain control. Because the speakers are installed in an irregularly shaped room, the speaker equalization includes time delay and gain compensation due to different mounting distances from the focal point, filtering for color compensation due to different installations (half space, corner, baffled/unbaffled), and cross-over filtering.
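The per-speaker delay and gain compensation for unequal mounting distances described above can be sketched in a few lines. The following is a minimal illustration, assuming point-source speakers, a nominal speed of sound, and hypothetical distances; the actual EER calibration also involves the VBAP gains and color-compensation filtering mentioned in the abstract:

```python
# Minimal sketch: delay nearer speakers so all wavefronts arrive at the focal
# point together, and attenuate them to offset 1/r spherical spreading.
# Distances are hypothetical, not the actual EER layout.
import math

SPEED_OF_SOUND = 343.0  # m/s, nominal

def compensation(distances_m):
    d_max = max(distances_m)
    delays_s = [(d_max - d) / SPEED_OF_SOUND for d in distances_m]
    gains = [d / d_max for d in distances_m]  # nearer speakers are attenuated
    return delays_s, gains

delays, gains = compensation([3.2, 4.1, 5.0, 4.6])
for i, (t, g) in enumerate(zip(delays, gains)):
    print(f"speaker {i}: delay {t * 1000:.2f} ms, trim {20 * math.log10(g):.2f} dB")
```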
ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke
2013-01-01
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
Visual communication and the content and style of conversation.
Rutter, D R; Stephenson, G M; Dewey, M E
1981-02-01
Previous research suggests that visual communication plays a number of important roles in social interaction. In particular, it appears to influence the content of what people say in discussions, the style of their speech, and the outcomes they reach. However, the findings are based exclusively on comparisons between face-to-face conversations and audio conversations, in which subjects sit in separate rooms and speak over a microphone-headphone intercom which precludes visual communication. Interpretation is difficult, because visual communication is confounded with physical presence, which itself makes available certain cues denied to audio subjects. The purpose of this paper is to report two experiments in which the variables were separated and content and style were re-examined. The first made use of blind subjects, and again compared the face-to-face and audio conditions. The second returned to sighted subjects, and examined four experimental conditions: face-to-face; audio; a curtain condition in which subjects sat in the same room but without visual communication; and a video condition in which they sat in separate rooms and communicated over a television link. Neither visual communication nor physical presence proved to be the critical variable. Instead, the two sources of cues combined, such that content and style were influenced by the aggregate of available cues. The more cueless the settings, the more task-oriented, depersonalized and unspontaneous the conversation. The findings also suggested that the primary effect of cuelessness is to influence verbal content, and that its influence on both style and outcome occurs indirectly, through the mediation of content.
Liu, Baolin; Meng, Xianyao; Wang, Zhongning; Wu, Guangning
2011-11-14
In the present study, we used event-related potentials (ERPs) to examine whether semantic integration occurs for ecologically unrelated audio-visual information. Videos with synchronous audio-visual information were used as stimuli, where the auditory stimuli were sine wave sounds with different sound levels, and the visual stimuli were simple geometric figures with different areas. In the experiment, participants were shown an initial display containing a single shape (drawn from a set of 6 shapes) with a fixed size (14 cm²) simultaneously with a 3500 Hz tone of a fixed intensity (80 dB). Following a short delay, another shape/tone pair was presented, and the relationship between the size of the shape and the intensity of the tone varied across trials: in the V+A- condition, a large shape was paired with a soft tone; in the V+A+ condition, a large shape was paired with a loud tone, and so forth. The ERP results revealed that an N400 effect was elicited under the VA- conditions (V+A- and V-A+) as compared to the VA+ conditions (V+A+ and V-A-). This shows that semantic integration occurs when simultaneous, ecologically unrelated auditory and visual stimuli enter the human brain. We consider that this semantic integration is based on the semantic constraint of audio-visual information, which might come from long-term learned associations stored in the human brain and short-term experience of incoming information. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
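The core contrast reported above (VA- versus VA+) amounts to a difference wave between condition-averaged ERPs, measured in an N400 time window. A minimal sketch follows, assuming placeholder epoch arrays and a conventional 350–450 ms window (neither the data layout nor the window is specified in the abstract):

```python
# Sketch of an N400 difference-wave analysis; data and window are assumptions.
import numpy as np

fs = 500                                       # sampling rate (Hz), assumed
epochs_incongruent = np.random.randn(40, 450)  # trials x samples, placeholder
epochs_congruent = np.random.randn(40, 450)

difference_wave = epochs_incongruent.mean(axis=0) - epochs_congruent.mean(axis=0)
n400_window = slice(int(0.35 * fs), int(0.45 * fs))  # 350-450 ms post-stimulus
print(f"mean difference in N400 window: {difference_wave[n400_window].mean():.3f}")
```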
NASA Technical Reports Server (NTRS)
Riley, D. R.; Miller, G. K., Jr.
1978-01-01
The effect of time delay in the visual and motion cues of a flight simulator on pilot performance was determined for the task of tracking a target aircraft that oscillated sinusoidally in altitude only. An audio side task was used to ensure that the subject was fully occupied at all times. The results indicate that, within the test grid employed, about the same acceptable time delay (250 msec) was obtained for a single aircraft (fighter type) by each of two subjects for both fixed-base and motion-base conditions. Acceptable time delay is defined as the largest amount of delay that can be inserted simultaneously into the visual and motion cues before performance degradation occurs. A statistical analysis of the data was made to establish this value of time delay. The audio side task provided quantitative data that documented the subject's work level.
Audio-visual communication and its use in palliative care.
Coyle, Nessa; Khojainova, Natalia; Francavilla, John M; Gonzales, Gilbert R
2002-02-01
The technology of telemedicine has been used for over 20 years, involving different areas of medicine, providing medical care for geographically isolated patients, and uniting geographically isolated clinicians. Today audio-visual technology may be useful in palliative care for patients lacking access to medical services due to their medical condition rather than geographic isolation. We report the results of a three-month trial of using audio-visual communications as a complementary tool in the care of a complex palliative care patient. Benefits of this system to the patient included 1) a daily limited physical examination, 2) screening for the need for a clinical visit or admission, 3) lip reading by the deaf patient, and 4) satisfaction by the patient and the caregivers with this form of communication as a complement to telephone communication. A brief overview of the historical perspective on telemedicine and a listing of applied telemedicine programs are provided.
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano
2017-01-01
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center, and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity. PMID:29046631
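The behavior the trained network reproduces is the standard reliability-weighted (maximum a posteriori) combination of Gaussian cues. As a worked sketch of that reference computation, with illustrative numbers rather than the model's actual parameters:

```python
# Reliability-weighted fusion of two independent Gaussian cues (the Bayesian
# estimate the network approximates). Positions and variances are illustrative.
def fuse(x_a, var_a, x_v, var_v):
    w_v = var_a / (var_a + var_v)                # visual weight grows with audio noise
    x_hat = w_v * x_v + (1 - w_v) * x_a
    var_hat = (var_a * var_v) / (var_a + var_v)  # always below either unimodal variance
    return x_hat, var_hat

# Disparate cues with more reliable vision: the fused estimate is pulled
# toward the visual location (ventriloquism), with reduced variance.
x_hat, var_hat = fuse(x_a=10.0, var_a=4.0, x_v=0.0, var_v=1.0)
print(f"fused position {x_hat:.1f} deg, variance {var_hat:.2f}")  # 2.0 deg, 0.80
```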
Robust audio-visual speech recognition under noisy audio-video conditions.
Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji
2014-02-01
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to either or both of the video and audio streams using a variety of noise types (e.g., MPEG-4 video compression) and levels. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach, and also compared to any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
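As we read the abstract, the heart of MWSP is that, frame by frame, the audio and video class posteriors are combined under an exponent weight chosen to maximize the weighted stream posterior, so no external noise estimate is needed. A toy sketch of that idea follows; the weight grid, the toy posteriors, and the use of the most confident combined posterior as the selection criterion are our simplifying assumptions, not the paper's exact formulation:

```python
# Toy sketch of per-frame weighted stream combination; not the paper's exact math.
import numpy as np

def combine_streams(p_audio, p_video, weights=np.linspace(0.0, 1.0, 11)):
    """Pick the exponent weight giving the most confident combined posterior."""
    best_posterior, best_w = None, None
    for w in weights:
        joint = (p_audio ** w) * (p_video ** (1.0 - w))
        joint /= joint.sum()                        # renormalize to a posterior
        if best_posterior is None or joint.max() > best_posterior.max():
            best_posterior, best_w = joint, w
    return best_posterior, best_w

p_a = np.array([0.2, 0.5, 0.3])    # degraded audio: flat, unreliable posterior
p_v = np.array([0.05, 0.9, 0.05])  # clean video: confident posterior
posterior, w = combine_streams(p_a, p_v)
print(f"chosen audio weight {w:.1f}; decoded class {posterior.argmax()}")
```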
Reduced efficiency of audiovisual integration for nonnative speech.
Yi, Han-Gyol; Phelps, Jasmine E B; Smiljanic, Rajka; Chandrasekaran, Bharath
2013-11-01
The role of visual cues in native listeners' perception of speech produced by nonnative speakers has not been extensively studied. Native perception of English sentences produced by native English and Korean speakers in audio-only and audiovisual conditions was examined. Korean speakers were rated as more accented in audiovisual than in the audio-only condition. Visual cues enhanced word intelligibility for native English speech but less so for Korean-accented speech. Reduced intelligibility of Korean-accented audiovisual speech was associated with implicit visual biases, suggesting that listener-related factors partially influence the efficiency of audiovisual integration for nonnative speech perception.
A Virtual Audio Guidance and Alert System for Commercial Aircraft Operations
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.; Shrum, Richard; Miller, Joel; Null, Cynthia H. (Technical Monitor)
1996-01-01
Our work in virtual reality systems at NASA Ames Research Center includes the area of aurally-guided visual search, using specially-designed audio cues and spatial audio processing (also known as virtual or "3-D audio") techniques (Begault, 1994). Previous studies at Ames had revealed that use of 3-D audio for Traffic Collision Avoidance System (TCAS) advisories significantly reduced head-down time, compared to a head-down map display (0.5 sec advantage) or no display at all (2.2 sec advantage) (Begault, 1993, 1995; Begault & Pittman, 1994; see Wenzel, 1994, for an audio demo). Since the crew must keep their head up and looking out the window as much as possible when taxiing under low-visibility conditions, and the potential for "blunder" is increased under such conditions, it was sensible to evaluate the audio spatial cueing for a prototype audio ground collision avoidance warning (GCAW) system, and a 3-D audio guidance system. Results were favorable for GCAW, but not for the audio guidance system.
Automatic summarization of soccer highlights using audio-visual descriptors.
Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc
2015-01-01
Automatic summary generation for sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlights summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and then combined appropriately to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
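The selection stage described above lends itself to a compact sketch: score each shot by combining its audio and visual descriptor values, rank, and keep the top shots within a target summary length. The field names, weights, and linear combination are assumptions for illustration, not the paper's actual rules:

```python
# Illustrative shot ranking and summary selection; weights and fields assumed.
from dataclasses import dataclass

@dataclass
class Shot:
    start_s: float
    end_s: float
    audio_excitement: float  # e.g., commentator energy, in [0, 1]
    visual_activity: float   # e.g., motion / close-up score, in [0, 1]

def relevance(shot, w_audio=0.6, w_visual=0.4):
    return w_audio * shot.audio_excitement + w_visual * shot.visual_activity

def summarize(shots, max_duration_s):
    summary, total = [], 0.0
    for s in sorted(shots, key=relevance, reverse=True):
        length = s.end_s - s.start_s
        if total + length <= max_duration_s:
            summary.append(s)
            total += length
    return sorted(summary, key=lambda s: s.start_s)  # restore playback order

shots = [Shot(0, 8, 0.9, 0.7), Shot(30, 42, 0.3, 0.5), Shot(60, 70, 0.8, 0.9)]
print([(s.start_s, s.end_s) for s in summarize(shots, max_duration_s=20)])
```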
2017-11-01
ARL-TR-8205 ● NOV 2017 ● US Army Research Laboratory. Strategies for Characterizing the Sensory Environment: Objective and Subjective Evaluation Methods using the VisiSonic Real Space 64/5 Audio-Visual Panoramic Camera. By Joseph McArdle, Ashley Foots, Chris Stachowiak, and…
Hearing gestures, seeing music: vision influences perceived tone duration.
Schutz, Michael; Lipscomb, Scott
2007-01-01
Percussionists inadvertently use visual information to strategically manipulate audience perception of note duration. Videos of long (L) and short (S) notes performed by a world-renowned percussionist were separated into visual (Lv, Sv) and auditory (La, Sa) components. Visual components contained only the gesture used to perform the note, auditory components the acoustic note itself. Audio and visual components were then crossed to create realistic musical stimuli. Participants were informed of the mismatch, and asked to rate note duration of these audio-visual pairs based on sound alone. Ratings varied based on visual (Lv versus Sv), but not auditory (La versus Sa) components. Therefore while longer gestures do not make longer notes, longer gestures make longer sounding notes through the integration of sensory information. This finding contradicts previous research showing that audition dominates temporal tasks such as duration judgment.
PDF Lecture Materials for Online and ``Flipped'' Format Astronomy Courses
NASA Astrophysics Data System (ADS)
Kary, D. M.; Eisberg, J.
2013-04-01
Online astronomy courses typically rely on students reading the textbook and/or a set of text-based lecture notes to replace the “lecture” material. However, many of our students report that this is much less engaging than in-person lectures, especially given the amount of interactive work such as “think-pair-share” problems done in many astronomy classes. Students have similarly criticized direct lecture-capture. To address this, we have developed a set of PowerPoint-style presentations with embedded lecture audio combined with prompts for student interaction including think-pair-share questions. These are formatted PDF packages that can be used on a range of different computers using free software. The presentations are first developed using Microsoft PowerPoint software. Audio recordings of scripted lectures are then synchronized with the presentations and the entire package is converted to PDF using Adobe Presenter. This approach combines the ease of editing that PowerPoint provides along with the platform-independence of PDF. It's easy to add, remove, or edit individual slides as needed, and PowerPoint supports internal links so that think-pair-share questions can be inserted with links to feedback based on the answers selected. Modern PDF files support animated visuals with synchronized audio and they can be read using widely available free software. Using these files students in an online course can get many of the benefits of seeing and hearing the course material presented in an in-person lecture format. Students needing extra help in traditional lecture classes can use these presentations to help review the materials covered in lecture. Finally, the presentations can be used in a “flipped” format in which students work through the presentations outside of class time while spending the “lecture” time on in-class interaction.
Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.
2011-01-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice
2018-02-01
Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To address this limitation, the current study aimed to combine audio and visual stimuli, incorporating computer-generated characters to create new stimuli capable of accurately classifying sex offenders with child victims while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants into their appropriate groups, as demonstrated by ROC curve analysis (audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00]; audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
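The classification accuracy reported above comes from ROC analysis of a per-participant sexual interest index against group membership. A minimal sketch with scikit-learn, using placeholder labels and index values rather than the study's data:

```python
# Sketch of the ROC/AUC computation; values are placeholders, not study data.
from sklearn.metrics import roc_auc_score

labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # 1 = offender group, 0 = non-offender
audio_index = [0.8, 0.6, 0.9, 0.4, 0.7, 0.2, 0.3, 0.5, 0.1, 0.35]
print(f"audio-modality AUC = {roc_auc_score(labels, audio_index):.2f}")
```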
Supervisory Control of Unmanned Vehicles
2010-04-01
than-ideal video quality (Chen et al., 2007; Chen and Thropp, 2007). Simpson et al. (2004) proposed using a spatial audio display to augment UAV...operator’s SA and discussed its utility for each of the three SA levels. They recommended that both visual and spatial audio information should be...presented concurrently. They also suggested that presenting the audio information spatially may enhance UAV operator’s sense of presence (i.e
ERIC Educational Resources Information Center
Obrecht, Dean H.
This report contrasts the results of a rigidly specified, pattern-oriented approach to learning Spanish with an approach that emphasizes the origination of sentences by the learner in direct response to stimuli. Pretesting and posttesting statistics are presented and conclusions are discussed. The experimental method, which required the student to…
Johnston, Sandra; Parker, Christina N; Fox, Amanda
2017-09-01
Use of high fidelity simulation has become increasingly popular in nursing education, to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introducing the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine whether viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final-year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation, and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two-tailed, independent-group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. Statistically significant results (t=2.38, p<0.02) were evident in the subscale of transferability of learning from simulation to clinical practice. The subgroups of age and gender, although not significant, indicated some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism. There was a significant finding in relation to transferability of knowledge, which is vital to quality educational outcomes. Copyright © 2017. Published by Elsevier Ltd.
Zhang, Zhengyi; Zhang, Gaoyan; Zhang, Yuanyuan; Liu, Hong; Xu, Junhai; Liu, Baolin
2017-12-01
This study aimed to investigate the functional connectivity in the brain during the cross-modal integration of polyphonic characters in Chinese audio-visual sentences. The visual sentences were all semantically reasonable, and the audible pronunciations of the polyphonic characters in the corresponding sentence contexts varied across four conditions. To measure the functional connectivity, correlation, coherence and phase synchronization index (PSI) were used, and multivariate pattern analysis was then performed to detect consensus functional connectivity patterns. These analyses were confined to the time windows of three event-related potential components, P200, N400 and the late positive shift (LPS), to investigate the dynamic changes of the connectivity patterns at different cognitive stages. We found that when differentiating polyphonic characters with abnormal pronunciations from those with appropriate ones in audio-visual sentences, significant classification results were obtained based on the coherence in the time window of the P200 component, the correlation in the time window of the N400 component, and the coherence and PSI in the time window of the LPS component. Moreover, the spatial distributions in these time windows were also different, with the recruitment of frontal sites in the time window of the P200 component, the frontal-central-parietal regions in the time window of the N400 component, and the central-parietal sites in the time window of the LPS component. These findings demonstrate that the functional interaction mechanisms are different at different stages of audio-visual integration of polyphonic characters.
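The three pairwise measures named above are standard and easy to sketch for a single channel pair. The following is a minimal illustration on synthetic signals (placeholders for EEG channels), using Pearson correlation, magnitude-squared coherence, and a Hilbert-phase synchronization index:

```python
# Minimal sketch of correlation, coherence, and PSI for two signals;
# the signals here are synthetic placeholders, not EEG data.
import numpy as np
from scipy.signal import coherence, hilbert

fs = 250
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)

r = np.corrcoef(x, y)[0, 1]                     # correlation
f, cxy = coherence(x, y, fs=fs, nperseg=128)    # magnitude-squared coherence
phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
psi = np.abs(np.mean(np.exp(1j * phase_diff)))  # phase synchronization index
print(f"r={r:.2f}, peak coherence={cxy.max():.2f}, PSI={psi:.2f}")
```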
Ganesh, Attigodu Chandrashekara; Berthommier, Frédéric; Schwartz, Jean-Luc
2016-01-01
We introduce "Audio-Visual Speech Scene Analysis" (AVSSA) as an extension of the two-stage Auditory Scene Analysis model to audiovisual scenes made of mixtures of speakers. AVSSA assumes that a coherence index between the auditory and the visual input is computed prior to audiovisual fusion, making it possible to determine whether the sensory inputs should be bound together. Previous experiments on the modulation of the McGurk effect by coherent vs. incoherent audiovisual contexts presented before the McGurk target have provided experimental evidence supporting AVSSA. Indeed, incoherent contexts appear to decrease the McGurk effect, suggesting that they produce lower audiovisual coherence and hence less audiovisual fusion. The present experiments extend the AVSSA paradigm by creating contexts made of competing audiovisual sources and measuring their effect on McGurk targets. The competing audiovisual sources have respectively a high and a low audiovisual coherence (that is, large vs. small audiovisual comodulations in time). The first experiment involves contexts made of two auditory sources and one video source associated with either the first or the second audio source. It appears that the McGurk effect is smaller after the context made of the visual source associated with the auditory source with less audiovisual coherence. In the second experiment with the same stimuli, the participants are asked to attend to either one or the other source. The data show that the modulation of fusion depends on the attentional focus. Altogether, these two experiments shed light on audiovisual binding, the AVSSA process and the role of attention.
Influence of Immersive Human Scale Architectural Representation on Design Judgment
NASA Astrophysics Data System (ADS)
Elder, Rebecca L.
Unrealistic visual representations of architecture within our existing environments have lost all reference to the human senses. As a design tool, visual and auditory stimuli can be used to determine humans' perception of design. This experiment renders varying building inputs within different sites, simulated with corresponding immersive visual and audio sensory cues. Introducing audio has been shown to influence the way a person perceives a space, yet most inhabitants rely strictly on their sense of vision to make design judgments. Though it is not as apparent, users prefer spaces that have a better quality of sound and comfort. Through a series of questions, we can begin to analyze whether a design is fit for both its acoustic and visual environment.
Aggressive Bimodal Communication in Domestic Dogs, Canis familiaris.
Déaux, Éloïse C; Clarke, Jennifer A; Charrier, Isabelle
2015-01-01
Evidence of animal multimodal signalling is widespread and compelling. Dogs' aggressive vocalisations (growls and barks) have been extensively studied, but without any consideration of the simultaneously produced visual displays. In this study we aimed to categorize dogs' bimodal aggressive signals according to the redundant/non-redundant classification framework. We presented dogs with unimodal (audio or visual) or bimodal (audio-visual) stimuli and measured their gazing and motor behaviours. Responses did not qualitatively differ between the bimodal and two unimodal contexts, indicating that acoustic and visual signals provide redundant information. We could not further classify the signal as 'equivalent' or 'enhancing' as we found evidence for both subcategories. We discuss our findings in relation to the complex signal framework, and propose several hypotheses for this signal's function.
Nirme, Jens; Haake, Magnus; Lyberg Åhlander, Viveka; Brännström, Jonas; Sahlén, Birgitta
2018-04-05
Seeing a speaker's face facilitates speech recognition, particularly under noisy conditions. Evidence for how it might affect comprehension of the content of the speech is more sparse. We investigated how children's listening comprehension is affected by multi-talker babble noise, with or without presentation of a digitally animated virtual speaker, and whether successful comprehension is related to performance on a test of executive functioning. We performed a mixed-design experiment with 55 (34 female) participants (8- to 9-year-olds), recruited from Swedish elementary schools. The children were presented with four different narratives, each in one of four conditions: audio-only presentation in a quiet setting, audio-only presentation in a noisy setting, audio-visual presentation in a quiet setting, and audio-visual presentation in a noisy setting. After each narrative, the children answered questions on the content and rated their perceived listening effort. Finally, they performed a test of executive functioning. We found significantly fewer correct answers to explicit content questions after listening in noise. This negative effect was only mitigated to a marginally significant degree by audio-visual presentation. Strong executive function predicted more correct answers in quiet settings only. Altogether, our results are inconclusive regarding how seeing a virtual speaker affects listening comprehension. We discuss how methodological adjustments, including modifications to our virtual speaker, can be used to discriminate between possible explanations for our results and contribute to understanding the listening conditions children face in a typical classroom.
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... offerings. Television and Class A television broadcast stations may make these announcements visually or... multicast audio programming streams, in a manner that appropriately alerts its audience to the fact that it is listening to a digital audio broadcast. No other insertion between the station's call letters and...
Voice over: Audio-visual congruency and content recall in the gallery setting
Fairhurst, Merle T.; Scott, Minnie; Deroy, Ophelia
2017-01-01
Experimental research has shown that pairs of stimuli which are congruent and assumed to ‘go together’ are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues. PMID:28636667
The Sky in your Hands - From the planetarium to the classroom
NASA Astrophysics Data System (ADS)
Canas, L.; Borges, I.; Ortiz-Gil, A.
2013-09-01
"The sky in your hands" is a project created in 2009, during the International Year of Astronomy in Spain, with the goal to create an image of the Universe for the visually impaired audiences. Includes a planetarium show with an audio component and tactile semi - spheres where the public can touch constellations and other objects of the Universe. Following the spirit of the IYA2009, the authors of this project made all products available to everyone that wishes to use them in outreach activities and science education. From observation and analyses of several groups of students and teachers that visited "The sky in your hands" Portuguese adaptation in Lisbon Planetarium, our team concluded that much could be done in classroom with students to make their process of learning easier and more motivating. Additionally it was noticed that for some schools it was difficult to travel with students to visit the planetarium. With this experience in mind different resources and materials were adapted to be used in classroom. Through this adaptation all students including those visually impaired can build a simple tactile image of a constellation and, working in small groups, can use low cost, recycled materials to build these tactile models. Students can record a new audio file explaining the astronomical concepts of the model they have built and include the m in a story. The groups include visually impaired and non-visually impaired students, as different skills from different students complete each other in order to accomplish the task in a more successful way. Afterwards each group presents the work to their peers. With this poster we plan to share our experience with the community where the collaboration between informal science learning in science centers, museums or planetariums and formal learning in school improves science learning, inspires students and facilitates their understanding of the nature of science in general.
Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot
Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.
2014-01-01
Advancement in brain computer interfaces (BCI) technology allows people to actively interact in the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footsteps sound and actual humanoid's walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve motor decisions of the BCI's user and help in the feeling of control over it. Our results shed light on the possibility to increase robot's control through the combination of multisensory feedback to a BCI user. PMID:24987350
Quick Response (QR) Codes for Audio Support in Foreign Language Learning
ERIC Educational Resources Information Center
Vigil, Kathleen Murray
2017-01-01
This study explored the potential benefits and barriers of using quick response (QR) codes as a means by which to provide audio materials to middle-school students learning Spanish as a foreign language. Eleven teachers of Spanish to middle-school students created transmedia materials containing QR codes linking to audio resources. Students…
Cortical Integration of Audio-Visual Information
Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.
2013-01-01
We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442
Intervention Strategies in Counselor Supervision.
ERIC Educational Resources Information Center
West, John; Sonstegard, Manford
This paper contains a model for practicum supervision developed by Dr. Manford Sonstegard. The procedure allows the supervisor, student-counselor, client, and practicum class to participate in the session. Whereas one-way mirrors, audio tapes and audio-visual tapes allow for only delayed feedback from the supervisor, Dr. Sonstegard's approach…
Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms
Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe
2017-01-01
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointing to visual targets while wearing optical prisms that displace the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory (audio-visual) targets were used in the adaptation phase, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, and auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produced proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointing procedure. Finally, pointing to auditory targets causes AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs than the 92-pointing procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointing and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs than the sensorimotor pointing activity per se. These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233
Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M
2011-10-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Aggressive Bimodal Communication in Domestic Dogs, Canis familiaris
Déaux, Éloïse C.; Clarke, Jennifer A.; Charrier, Isabelle
2015-01-01
Evidence of animal multimodal signalling is widespread and compelling. Dogs’ aggressive vocalisations (growls and barks) have been extensively studied, but without any consideration of the simultaneously produced visual displays. In this study we aimed to categorize dogs’ bimodal aggressive signals according to the redundant/non-redundant classification framework. We presented dogs with unimodal (audio or visual) or bimodal (audio-visual) stimuli and measured their gazing and motor behaviours. Responses did not qualitatively differ between the bimodal and two unimodal contexts, indicating that acoustic and visual signals provide redundant information. We could not further classify the signal as ‘equivalent’ or ‘enhancing’ as we found evidence for both subcategories. We discuss our findings in relation to the complex signal framework, and propose several hypotheses for this signal’s function. PMID:26571266
The Use of Audio in Computer-Based Instruction.
ERIC Educational Resources Information Center
Koroghlanian, Carol M.; Sullivan, Howard J.
This study investigated the effects of audio and text density on the achievement, time-in-program, and attitudes of 134 undergraduates. Data concerning the subjects' preexisting computer skills and experience, as well as demographic information, were also collected. The instruction in visual design principles was delivered by computer and included…
Student Preferences for Online Lecture Formats: Does Prior Experience Matter?
ERIC Educational Resources Information Center
Drouin, Michelle; Hile, Rachel E.; Vartanian, Lesa R.; Webb, Janae
2013-01-01
We examined undergraduate students' quality ratings of and preferences for different types of online lecture formats. Students preferred richer online lecture formats that included both audio and visual components; however, there were no significant differences between students' ratings of PowerPoint lectures with "audio" of the…
Studies on a Spatialized Audio Interface for Sonar
2011-10-03
…addition of spatialized audio to visual displays for sonar is much akin to the development of talking movies in the early days of cinema and can be… …than using the brute-force approach. PCA is one among several techniques that share similarities with the computational architecture of…
The use of audio-visual methods in radiology and physics courses
NASA Astrophysics Data System (ADS)
Holmberg, Peter
1987-03-01
Today's medicine utilizes sophisticated equipment for radiological, biochemical and microbiological investigation procedures and analyses. Hence it is necessary that physicians have adequate scientific and technical knowledge of the apparatus they are using, so that the equipment can be used in the most effective way. Partly this knowledge is obtained from science-oriented courses in the preclinical stage of the study program for medical students. To increase the motivation to study science courses (medical physics), audio-visual methods are used to describe diagnostic and therapeutic procedures in clinical routine.
Audiovisual Interval Size Estimation Is Associated with Early Musical Training
Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134
Virtual environment display for a 3D audio room simulation
NASA Astrophysics Data System (ADS)
Chapin, William L.; Foster, Scott
1992-06-01
Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.
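The Convolvotron hardware performed this spatialization by real-time convolution of each source with head-related impulse responses (HRIRs). A minimal offline sketch of the same idea follows; the impulse responses are toy stand-ins (a pure delay and attenuation), not measured HRIRs:

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source binaurally by convolving it with an
    HRIR pair for one direction. Returns an (N, 2) stereo array
    suitable for headphone playback."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

# Toy HRIRs: the right ear receives the source later and quieter,
# which is already enough to hear lateralization to the left.
hrir_l = np.r_[1.0, np.zeros(31)]
hrir_r = np.r_[np.zeros(8), 0.6, np.zeros(23)]
stereo = binauralize(np.random.randn(48000), hrir_l, hrir_r)
```

A room simulation like the one described adds further convolutions for each wall reflection, with reflection-dependent attenuation.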
Righi, Giulia; Tenenbaum, Elena J; McCormick, Carolyn; Blossom, Megan; Amso, Dima; Sheinkopf, Stephen J
2018-04-01
Autism Spectrum Disorder (ASD) is often accompanied by deficits in speech and language processing. Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to examine whether young children with ASD show reduced sensitivity to temporal asynchronies in a speech processing task when compared to typically developing controls, and to examine how this sensitivity might relate to language proficiency. Using automated eye tracking methods, we found that children with ASD failed to demonstrate sensitivity to asynchronies of 0.3s, 0.6s, or 1.0s between a video of a woman speaking and the corresponding audio track. In contrast, typically developing children who were language-matched to the ASD group were sensitive to both 0.6s and 1.0s asynchronies. We also demonstrated that individual differences in sensitivity to audiovisual asynchronies and individual differences in orientation to relevant facial features were both correlated with scores on a standardized measure of language abilities. Results are discussed in the context of attention to visual language and audio-visual processing as potential precursors to language impairment in ASD. Autism Res 2018, 11: 645-653. © 2018 International Society for Autism Research, Wiley Periodicals, Inc.
Cultural Systems and Land Use Decision Making.
ERIC Educational Resources Information Center
Schaefer, Larry; Pressman, Rob
This material includes student guide sheets, reference material, and tape script for the audio-tutorial unit on Cultural Systems. An audio tape is used with the materials. The material is designed for use with Connecticut schools, but can be adapted to other localities. The materials in this unit consider components of cultural systems, land use…
Local Implementation and Land Use Decision Making.
ERIC Educational Resources Information Center
Garlasco, Chris; And Others
This material includes student guide sheets, reference material, and tape script for the audio-tutorial unit on Local Implementation. An audio tape is used with the materials. The material is designed for use with Connecticut schools, but can be adapted to other localities. The material in this unit emphasizes the role of planning and zoning in…
Audio-visual speech perception: a developmental ERP investigation
Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC
2014-01-01
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002
Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues
NASA Astrophysics Data System (ADS)
Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.
2003-12-01
We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
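As a rough illustration of the modeling pattern the abstract names (per-concept GMMs scored against a background model, with unimodal scores late-fused by a second-stage classifier), the sketch below uses synthetic features, two modalities, and no train/test split; the HMM and text components of the actual system are omitted:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-shot features; the real system extracted audio,
# visual, and text features from video shots.
X_audio = rng.normal(size=(200, 12))
X_visual = rng.normal(size=(200, 24))
y = rng.integers(0, 2, size=200)              # 1 = concept present

def gmm_llr(X, y):
    """Score each shot by the log-likelihood ratio of a 'concept'
    GMM against a 'background' GMM fit on the remaining shots."""
    pos = GaussianMixture(n_components=4, random_state=0).fit(X[y == 1])
    neg = GaussianMixture(n_components=4, random_state=0).fit(X[y == 0])
    return pos.score_samples(X) - neg.score_samples(X)

# Late fusion: unimodal detector scores feed a second-stage SVM.
scores = np.column_stack([gmm_llr(X_audio, y), gmm_llr(X_visual, y)])
fusion = SVC().fit(scores, y)
# Accuracy on random synthetic labels is meaningless; this only
# demonstrates the wiring of the two stages.
print(fusion.score(scores, y))
```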
When Pictures Waste a Thousand Words: Analysis of the 2009 H1N1 Pandemic on Television News
Luth, Westerly; Jardine, Cindy; Bubela, Tania
2013-01-01
Objectives Effective communication by public health agencies during a pandemic promotes the adoption of recommended health behaviours. However, more information is not always the solution. Rather, attention must be paid to how information is communicated. Our study examines the television news, which combines video and audio content. We analyse (1) the content of television news about the H1N1 pandemic and vaccination campaign in Alberta, Canada; (2) the extent to which television news content conveyed key public health agency messages; (3) the extent of discrepancies in audio versus visual content. Methods We searched for “swine flu” and “H1N1” in local English news broadcasts from the CTV online video archive. We coded the audio and visual content of 47 news clips during the peak period of coverage from April to November 2009 and identified discrepancies between audio and visual content. Results The dominant themes on CTV news were the vaccination rollout, vaccine shortages, long line-ups (queues) at vaccination clinics and defensive responses by public health officials. There were discrepancies in the priority groups identified by the provincial health agency (Alberta Health and Wellness) and television news coverage as well as discrepancies between audio and visual content of news clips. Public health officials were presented in official settings rather than as public health practitioners. Conclusion The news footage did not match the main public health messages about risk levels and priority groups. Public health agencies lost control of their message as the media focused on failures in the rollout of the vaccination campaign. Spokespeople can enhance their local credibility by emphasizing their role as public health practitioners. Public health agencies need to learn from the H1N1 pandemic so that future television communications do not add to public confusion, demonstrate bureaucratic ineffectiveness and contribute to low vaccination rates. PMID:23691150
Jodice, Patrick G.R.; Garman, S.L.; Collopy, Michael W.
2001-01-01
Marbled Murrelets (Brachyramphus marmoratus) are threatened seabirds that nest in coastal old-growth coniferous forests throughout much of their breeding range. Currently, observer-based audio-visual surveys are conducted at inland forest sites during the breeding season, primarily to determine nesting distribution and breeding status, and are being used to estimate temporal or spatial trends in murrelet detections. Our goal was to assess the feasibility of using audio-visual survey data for such monitoring. We used an intensive field-based survey effort to record daily murrelet detections at seven survey stations in the Oregon Coast Range. We then used computer-aided resampling techniques to assess the effectiveness of twelve survey strategies, with varying scheduling and a sampling intensity of 4-14 surveys per breeding season, at estimating known means and SDs of murrelet detections. Most survey strategies we tested failed to provide estimates of detection means and SDs that were within ±20% of actual means and SDs. Daily detections were, however, frequently estimated to within ±50% of field data with sampling efforts of 14 days per breeding season. Additional resampling analyses with statistically generated detection data indicated that the temporal variability in detection data had a great effect on the reliability of the mean and SD estimates calculated from the twelve survey strategies, while the value of the mean had little effect. Effectiveness at estimating multi-year trends in detection data was similarly poor, indicating that audio-visual surveys might reliably detect only annual declines in murrelet detections on the order of 50% per year.
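The resampling idea can be sketched briefly: draw many simulated survey schedules from a season of daily detection counts and ask how often a schedule's mean lands within a tolerance of the full-season mean. The counts below are synthetic stand-ins for the field data, and the uniformly random schedule is a simplification of the twelve strategies actually tested:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for one station's daily detection counts over a
# 120-day breeding season (the study used field data, seven stations).
season = rng.poisson(lam=12, size=120).astype(float)
true_mean = season.mean()

def hit_rate(n_surveys, tol=0.20, n_trials=10_000):
    """Fraction of random survey schedules of n_surveys days whose
    sample mean falls within +/- tol of the full-season mean."""
    hits = 0
    for _ in range(n_trials):
        sample = rng.choice(season, size=n_surveys, replace=False)
        hits += abs(sample.mean() - true_mean) <= tol * true_mean
    return hits / n_trials

for n in (4, 8, 14):
    print(f"{n:>2} surveys/season: {hit_rate(n):.2f}")
```

Higher day-to-day variability in the synthetic counts lowers the hit rate, mirroring the study's finding that temporal variability drives estimate reliability.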
Papadopoulos, Konstantinos; Koustriava, Eleni; Koukourikos, Panagiotis; Kartasidou, Lefkothea; Barouti, Marialena; Varveris, Asimis; Misiou, Marina; Zacharogeorga, Timoclia; Anastasiadis, Theocharis
2017-01-01
Disorientation and the inability to find one's way are frequent phenomena for individuals with visual impairments when travelling through novel environments. Orientation and mobility aids can provide important tools for preparing safer, cognitively mapped travel. The aim of the present study was to examine whether the spatial knowledge that an individual with blindness builds after studying the map of an urban area, delivered through a verbal description, an audio-tactile map or an audio-haptic map, could be used for locating specific points of interest in the area. The effectiveness of the three aids relative to each other was also examined. The results of the present study highlight the effectiveness of the audio-tactile and the audio-haptic maps as orientation and mobility aids, especially when these are compared to verbal descriptions.
33 CFR 127.201 - Sensing and alarm systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...
33 CFR 127.201 - Sensing and alarm systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...
33 CFR 127.201 - Sensing and alarm systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...
Finding the Correspondence of Audio-Visual Events by Object Manipulation
NASA Astrophysics Data System (ADS)
Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru
A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. "simultaneity" and "similarity" among the motion command, sound onsets and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object such as a bell and shakes it, or grasps an object such as a stick and beats a drum, in periodic or non-periodic motion. The object then emits periodic or non-periodic events. To create a more realistic scenario, we placed another event source (a metronome) in the environment. As a result, we achieved a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) relating to robot motion (efferent signals).
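A crude version of the "simultaneity" cue can be computed directly from onset times: a sound source is attributed to the manipulated object when its onsets fall within a short window of the motion onsets. The window width and onset lists below are illustrative, not the paper's actual detector:

```python
import numpy as np

def correspondence_rate(motion_onsets, sound_onsets, window=0.1):
    """Fraction of sound onsets occurring within `window` seconds of
    some motion onset -- a crude simultaneity cue in the Gestalt sense."""
    motion_onsets = np.asarray(motion_onsets, dtype=float)
    matched = sum(np.any(np.abs(motion_onsets - s) <= window)
                  for s in sound_onsets)
    return matched / len(sound_onsets)

# Onsets of the robot-shaken bell line up with the hand motion;
# the metronome's mostly do not.
hand_motion = [0.50, 1.00, 1.50, 2.00]
bell =        [0.52, 1.01, 1.53, 2.04]
metronome =   [0.20, 0.80, 1.40, 2.00]
print(correspondence_rate(hand_motion, bell))       # high
print(correspondence_rate(hand_motion, metronome))  # lower
```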
Lefebvre, M
1979-01-01
Present information production techniques are so inefficient that generalizing them is out of the question. At the same time, audio-visual communication raises a major political problem, especially for developing countries. Audio-visual equipment has gone through successive adjustment phases; the tape and cassette recorder is a case in point: two technological improvements completely changed its use. Transistors allowed a considerable reduction in volume and weight, as well as in the energy required, and the invention of the cassette simplified its operation. Technological research is following three major directions: the production of equipment that consumes little energy; the improvement of techniques for producing electronic components (towards cheaper components); and finally, the design of systems that allow large quantities of information to be stored. Communication systems will probably make so much progress in technology and programming that they will soon have very different uses from the present ones. The question is whether our civilizations will let themselves be dominated by these new systems, or whether they will succeed in turning them into tools for progress.
Open-Loop Audio-Visual Stimulation (AVS): A Useful Tool for Management of Insomnia?
Tang, Hsin-Yi Jean; Riegel, Barbara; McCurry, Susan M; Vitiello, Michael V
2016-03-01
Audio-Visual Stimulation (AVS), a form of neurofeedback, is a non-pharmacological intervention that has been used for both performance enhancement and symptom management. We review the history of AVS, its two sub-types (closed- and open-loop), and discuss its clinical implications. We also describe a promising new application of AVS to improve sleep and potentially decrease pain. AVS research can be traced back to the late 1800s. AVS's efficacy has been demonstrated for both performance enhancement and symptom management. Although AVS is commonly used in clinical settings, there is limited literature evaluating clinical outcomes and mechanisms of action. One of the challenges for AVS research is the lack of standardized terms, which makes systematic review and literature consolidation difficult. Future studies using AVS as an intervention should: (1) use operational definitions that are consistent with the existing literature, such as AVS, Audio-Visual Entrainment, or Light and Sound Stimulation; (2) provide a clear rationale for the chosen training frequency and modality; (3) use a randomized controlled design; and (4) follow the Consolidated Standards of Reporting Trials and/or related guidelines when disseminating results.
Information-Driven Active Audio-Visual Source Localization
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
2015-01-01
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
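A minimal sketch of the core idea, bearing-only (direction) measurements fused over robot motion by a particle filter, is given below. The noise levels and the resampling variant are illustrative choices, and the information-gain action selection described in the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
source = np.array([3.0, 4.0])                  # unknown true position
particles = rng.uniform(-10, 10, size=(5000, 2))

def bearing_update(particles, robot_pos, bearing, sigma=0.2):
    """Reweight particles by a direction-only measurement taken from
    the robot's current position, then resample with small jitter."""
    d = particles - robot_pos
    predicted = np.arctan2(d[:, 1], d[:, 0])
    err = np.angle(np.exp(1j * (predicted - bearing)))  # wrap to [-pi, pi]
    w = np.exp(-0.5 * (err / sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0, 0.05, particles.shape)

# Bearings from two robot positions resolve the range ambiguity that
# a single direction measurement leaves open.
for robot in (np.array([0.0, 0.0]), np.array([5.0, 0.0])):
    true_bearing = np.arctan2(source[1] - robot[1], source[0] - robot[0])
    particles = bearing_update(particles, robot, true_bearing)
print(particles.mean(axis=0))                  # roughly (3, 4)
```

This is exactly why the robot's mobility matters in the paper: each new vantage point collapses part of the remaining uncertainty about the source position.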
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory-driven immediate early gene (IEG) expression has been a key tool for exploring auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of Zenk response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video-only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of the Zenk response that was independent of sex, brain region, and treatment condition, such that Zenk immunoreactivity was consistently higher in the left hemisphere than in the right, and the majority of individual birds were left-hemisphere dominant.
Mandarin Visual Speech Information
ERIC Educational Resources Information Center
Chen, Trevor H.
2010-01-01
While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…
7 CFR 1.168 - Procedure for hearing.
Code of Federal Regulations, 2010 CFR
2010-01-01
...-visual telecommunication, or personal attendance of any individual expected to attend the hearing and the... personal attendance of any individual expected to attend the hearing rather than by audio-visual...-visual telecommunication. (ii) Within 10 days after the Judge issues a notice stating the manner in which...
7 CFR 1.168 - Procedure for hearing.
Code of Federal Regulations, 2011 CFR
2011-01-01
...-visual telecommunication, or personal attendance of any individual expected to attend the hearing and the... personal attendance of any individual expected to attend the hearing rather than by audio-visual...-visual telecommunication. (ii) Within 10 days after the Judge issues a notice stating the manner in which...
Rouhani, R; Cronenberger, H; Stein, L; Hannum, W; Reed, A M; Wilhelm, C; Hsiao, H
1995-01-01
This paper describes the design, authoring, and development of interactive, computerized, multimedia clinical simulations in pediatric rheumatology/immunology and related musculoskeletal diseases, the development and implementation of a high speed information management system for their centralized storage and distribution, and analytical methods for evaluating the total system's educational impact on medical students and pediatric residents. An FDDI fiber optic network with client/server/host architecture is the core. The server houses digitized audio, still-image video clips and text files. A host station houses the DB2/2 database containing case-associated labels and information. Cases can be accessed from any workstation via a customized interface in AVA/2 written specifically for this application. OS/2 Presentation Manager controls, written in C, are incorporated into the interface. This interface allows SQL searches and retrievals of cases and case materials. In addition to providing user-directed clinical experiences, this centralized information management system provides designated faculty with the ability to add audio notes and visual pointers to image files. Users may browse through case materials, mark selected ones and download them for utilization in lectures or for editing and converting into 35mm slides.
Audio visual summary: Implementing PURPA in Mid-America
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The audio-visual presentation, Implementing PURPA in Mid-America, is a slide presentation designed to complement deliverable W-101-2, a booklet entitled Implementing PURPA in Mid-America: A Guide to the Public Utility Regulatory Policies Act. The presentation lasts 10 to 12 min and explains the major sections of PURPA, the rules promulgated by the Federal Energy Regulatory Commission to implement PURPA, and the implications of PURPA and its rules. It delineates the rights and responsibilities of citizens who want to sell electricity to utilities, explains the certification process, and discusses the rights and responsibilities of the utilities.
Secure videoconferencing equipment switching system and method
Hansen, Michael E [Livermore, CA
2009-01-13
A switching system and method are provided to facilitate use of videoconference facilities over a plurality of security levels. The system includes a switch coupled to a plurality of codecs and communication networks. Audio/Visual peripheral components are connected to the switch. The switch couples control and data signals between the Audio/Visual peripheral components and one, but not both, of the plurality of codecs. The switch additionally couples communication networks of the appropriate security level to each of the codecs. In this manner, a videoconferencing facility is provided for use on both secure and non-secure networks.
Multisensory Motion Perception in 3–4 Month-Old Infants
Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara
2017-01-01
Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question as to whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction with a concurrent tactile stimulus consisting of strokes given on the infant’s back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory version, the latter giving the impression of a continuous rising or ascending pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently moving (opposite direction) pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829
Unsupervised real-time speaker identification for daily movies
NASA Astrophysics Data System (ADS)
Li, Ying; Kuo, C.-C. Jay
2002-07-01
The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces, based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which show a promising future for the proposed audiovisual-based unsupervised speaker identification system.
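The probabilistic fusion step can be sketched as a weighted product of per-speaker audio likelihoods and face-recognition scores; the weight, the scores, and the uniform prior below are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def fuse_speaker_scores(audio_loglik, face_prob, w=0.6):
    """Combine per-speaker audio log-likelihoods with face-recognition
    probabilities; w weights the audio modality. Returns a posterior
    over speakers, assuming a uniform prior."""
    audio = np.exp(np.asarray(audio_loglik, float) - np.max(audio_loglik))
    audio /= audio.sum()                       # normalized audio evidence
    fused = (audio ** w) * (np.asarray(face_prob, float) ** (1 - w))
    return fused / fused.sum()

# Hypothetical scores for three candidate speakers in one shot:
post = fuse_speaker_scores(audio_loglik=[-110.0, -102.0, -108.0],
                           face_prob=[0.2, 0.5, 0.3])
print(post.argmax(), post)                     # speaker 1 wins
```

Subtracting the maximum log-likelihood before exponentiating keeps the computation numerically stable when the raw likelihoods are very small.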
Linking Audio and Visual Information while Navigating in a Virtual Reality Kiosk Display
ERIC Educational Resources Information Center
Sullivan, Briana; Ware, Colin; Plumlee, Matthew
2006-01-01
3D interactive virtual reality museum exhibits should be easy to use, entertaining, and informative. If the interface is intuitive, it will allow the user more time to learn the educational content of the exhibit. This research deals with interface issues concerning activating audio descriptions of images in such exhibits while the user is…
The Redundancy Effect on Retention and Transfer for Individuals with High Symptoms of ADHD
ERIC Educational Resources Information Center
Brown, Victoria; Lewis, David; Toussaint, Mario
2016-01-01
The multimedia elements of text and audio need to be carefully integrated together to maximize the impact of those elements for learning in a multimedia environment. Redundant information presented through audio and visual channels can inhibit learning for individuals diagnosed with ADHD, who may experience challenges in the processing of…
Creating Accessible Science Museums with User-Activated Environmental Audio Beacons (Ping!)
ERIC Educational Resources Information Center
Landau, Steven; Wiener, William; Naghshineh, Koorosh; Giusti, Ellen
2005-01-01
In 2003, Touch Graphics Company carried out research on a new invention that promises to improve accessibility to science museums for visitors who are visually impaired. The system, nicknamed Ping!, allows users to navigate an exhibit area, listen to audio descriptions, and interact with exhibits using a cell phone-based interface. The system…
Impact of Language on Development of Auditory-Visual Speech Perception
ERIC Educational Resources Information Center
Sekiyama, Kaoru; Burnham, Denis
2008-01-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…
Andreu-Sánchez, Celia; Martín-Pascual, Miguel Ángel; Gruart, Agnès; Delgado-García, José María
2017-01-01
While movie editing creates discontinuities in audio-visual works for narrative and economy-of-storytelling reasons, eyeblinks create discontinuities in visual perception for protective and cognitive reasons. We were interested in analyzing eyeblink rate in relation to cinematographic editing styles. We created three video stimuli with different editing styles and analyzed spontaneous blink rate in participants (N = 40). We were also interested in looking for different perceptual patterns in blink rate related to media professionalization. To that end, half of our participants (n = 20) were media professionals, and the other half were not. According to our results, the MTV editing style inhibits eyeblinks more than the Hollywood style and the one-shot style. More interestingly, we obtained differences in visual perception related to media professionalization: we found that media professionals inhibit their eyeblink rate substantially compared with non-media professionals, in any style of audio-visual editing. PMID:28220882
Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.
Stropahl, Maren; Debener, Stefan
2017-01-01
There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mildly to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex, by means of EEG source localization in response to human faces, and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system even at early stages of hearing loss.
SU-E-J-192: Comparative Effect of Different Respiratory Motion Management Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakajima, Y; Kadoya, N; Ito, K
Purpose: Irregular breathing can influence the outcome of four-dimensional computed tomography imaging by causing artifacts. Audio-visual biofeedback systems with patient-specific guiding waveforms are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), representing a simpler visual coaching technique without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing the two respiratory management systems. Methods: We collected data from eleven healthy volunteers. Bar and wave models were used as audio-visual biofeedback systems. Abches consisted of a respiratory indicator indicating the end of each expiration and inspiration motion. Respiratory variations were quantified as the root mean squared error (RMSE) of the displacement and period of the breathing cycles. Results: All coaching techniques improved respiratory variation compared to free breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86, and 0.98 ± 0.47 mm for free breathing, Abches, the bar model, and the wave model, respectively. Free breathing and the wave model differed significantly (p < 0.05). Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18, and 0.17 ± 0.05 s for free breathing, Abches, the bar model, and the wave model, respectively. Free breathing and all coaching techniques differed significantly (p < 0.05). For variation in both displacement and period, the wave model was superior to free breathing, the bar model, and Abches. The average reductions in displacement and period RMSE with the wave model were 27% and 47%, respectively. Conclusion: We evaluated the efficacy of audio-visual biofeedback in reducing respiratory irregularity compared with Abches. Our results showed that audio-visual biofeedback combined with a wave model can potentially provide clinical benefits in respiratory management, although all techniques reduced respiratory irregularities.
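As a small illustration of the two metrics, the sketch below computes a displacement RMSE and a period RMSE from a synthetic abdominal-motion trace. The reference levels used here (the trace mean and the mean period) and the naive peak picking are stand-ins for however the study defined its references:

```python
import numpy as np

def breathing_rmse(displacement, fs):
    """RMSE of displacement about the mean level, and RMSE of the
    cycle-to-cycle period, from a 1-D abdominal-motion trace."""
    x = np.asarray(displacement, dtype=float)
    disp_rmse = np.sqrt(np.mean((x - x.mean()) ** 2))
    # Naive peak picking: local maxima mark the end of each inhale.
    peaks = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
    periods = np.diff(peaks) / fs
    per_rmse = np.sqrt(np.mean((periods - periods.mean()) ** 2))
    return disp_rmse, per_rmse

fs = 25                                   # samples per second
t = np.arange(0, 60, 1 / fs)              # one minute of breathing
# ~4 s cycles with a slow drift in period, as in irregular breathing.
phase = 2 * np.pi * (t / 4 + 0.05 * np.sin(2 * np.pi * t / 30))
print(breathing_rmse(10 * np.sin(phase), fs))
```

Real traces are noisy, so a practical pipeline would smooth the signal before picking peaks.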
The effect of early visual deprivation on the neural bases of multisensory processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2015-06-01
Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Low Latency Audio Video: Potentials for Collaborative Music Making through Distance Learning
ERIC Educational Resources Information Center
Riley, Holly; MacLeod, Rebecca B.; Libera, Matthew
2016-01-01
The primary purpose of this study was to examine the potential of LOw LAtency (LOLA), a low latency audio visual technology designed to allow simultaneous music performance, as a distance learning tool for musical styles in which synchronous playing is an integral aspect of the learning process (e.g., jazz, folk styles). The secondary purpose was…
ERIC Educational Resources Information Center
Grossman, Ruth B
2015-01-01
We form first impressions of many traits based on very short interactions. This study examines whether typical adults judge children with high-functioning autism to be more socially awkward than their typically developing peers based on very brief exposure to still images, audio-visual, video-only, or audio-only information. We used video and…
NFL Films audio, video, and film production facilities
NASA Astrophysics Data System (ADS)
Berger, Russ; Schrag, Richard C.; Ridings, Jason J.
2003-04-01
The new NFL Films 200,000 sq. ft. headquarters is home for the critically acclaimed film production that preserves the NFL's visual legacy week-to-week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound critical technical space is comprised of an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multi channel surround sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound critical environment will be discussed.
The Function of Consciousness in Multisensory Integration
ERIC Educational Resources Information Center
Palmer, Terry D.; Ramsey, Ashley K.
2012-01-01
The function of consciousness was explored in two contexts of audio-visual speech, cross-modal visual attention guidance and McGurk cross-modal integration. Experiments 1, 2, and 3 utilized a novel cueing paradigm in which two different flash-suppressed lip-streams co-occurred with speech sounds matching one of these streams. A visual target was…
Construction and updating of event models in auditory event processing.
Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank
2018-02-01
Humans segment the continuous stream of sensory information into distinct events at points of change. Between two events, humans perceive an event boundary. Present theories propose that changes in the sensory information trigger updating processes of the current event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading-time studies (increased reading times with an increasing amount of change) suggests that the updating of event models is incremental. We present results from five experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating of recognition memory with normally sighted and blind participants. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Audio distribution and Monitoring Circuit
NASA Technical Reports Server (NTRS)
Kirkland, J. M.
1983-01-01
Versatile circuit accepts and distributes TV audio signals. The three-meter audio distribution and monitoring circuit provides flexibility in monitoring, mixing, and distributing audio inputs and outputs at various signal and impedance levels. Program material is simultaneously monitored on three channels, or a single-channel version can be built to monitor transmitted or received signal levels, drive speakers, interface with building communications, and drive long-line circuits.
Audio-Tutorial Instruction in Medicine.
ERIC Educational Resources Information Center
Boyle, Gloria J.; Herrick, Merlyn C.
This progress report concerns an audio-tutorial approach used at the University of Missouri-Columbia School of Medicine. Instructional techniques such as slide-tape presentations, compressed speech audio tapes, computer-assisted instruction (CAI), motion pictures, television, microfiche, and graphic and printed materials have been implemented,…
Speed on the dance floor: Auditory and visual cues for musical tempo.
London, Justin; Burger, Birgitta; Thompson, Marc; Toiviainen, Petri
2016-02-01
Musical tempo is most strongly associated with the rate of the beat or "tactus," which may be defined as the most prominent rhythmic periodicity present in the music, typically in a range of 1.67-2 Hz. However, other factors such as rhythmic density, mean rhythmic inter-onset interval, metrical (accentual) structure, and rhythmic complexity can affect perceived tempo (Drake, Gros, & Penel, 1999; London, 2011). Visual information can also give rise to a perceived beat/tempo (Iversen et al., 2015), and auditory and visual temporal cues can interact and mutually influence each other (Soto-Faraco & Kingstone, 2004; Spence, 2015). A five-part experiment was performed to assess the integration of auditory and visual information in judgments of musical tempo. Participants rated the speed of six classic R&B songs on a seven-point scale while observing an animated figure dancing to them. Participants were presented with original and time-stretched (±5%) versions of each song in audio-only, audio+video (A+V), and video-only conditions. In some videos the animations were of spontaneous movements to the different time-stretched versions of each song, and in other videos the animations were of "vigorous" versus "relaxed" interpretations of the same auditory stimulus. Two main results were observed. First, in all conditions with audio, even though participants were able to correctly rank the original vs. time-stretched versions of each song, a song-specific tempo-anchoring effect was observed, such that sped-up versions of slower songs were judged to be faster than slowed-down versions of faster songs, even when their objective beat rates were the same. Second, when viewing a vigorous dancing figure in the A+V condition, participants gave faster tempo ratings than from the audio alone or when viewing the same audio with a relaxed dancing figure. The implications of this illusory tempo percept for cross-modal sensory integration and working memory are discussed, and an "energistic" account of tempo perception is proposed. Copyright © 2015 Elsevier B.V. All rights reserved.
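The ±5% manipulation described above is a standard audio time-stretch. As a minimal sketch of how such stimuli might be produced and their beat rates checked, assuming librosa is available ("song.wav" is a hypothetical placeholder, not a file from the study):

```python
# Sketch: generate ±5% time-stretched versions of a song and estimate the
# beat rate of each, analogous to the stimulus manipulation described above.
import librosa

y, sr = librosa.load("song.wav")  # hypothetical input file

for rate in (0.95, 1.00, 1.05):   # -5%, original, +5% playback speed
    y_st = librosa.effects.time_stretch(y, rate=rate) if rate != 1.00 else y
    tempo, _ = librosa.beat.beat_track(y=y_st, sr=sr)
    print(f"rate {rate:.2f}: estimated tempo {float(tempo):.1f} BPM")
```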
Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano
2013-01-01
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.
ERIC Educational Resources Information Center
Diambra, Henry M.; And Others
VIDAC (Video Audio Compressed), a new technology based upon non-real-time transmission of audiovisual information via conventional television systems, has been invented by the Westinghouse Electric Corporation. This system permits time compression during storage and transmission of the audio component of a still visual-narrative audio…
ERIC Educational Resources Information Center
Udo, John Patrick; Fels, Deborah I.
2009-01-01
Without access to audio description, individuals who are visually impaired (that is, are blind or have low vision) may be at a unique social disadvantage because they are unable to participate fully in a culture that is based on and heavily saturated by the enjoyment of audiovisual entertainments. Audio description was introduced as an adaptive…
Language Teaching with the Help of Multiple Methods. Collection d'"Etudes linguistiques," No. 21.
ERIC Educational Resources Information Center
Nivette, Jos, Ed.
This book presents articles on language teaching media. Among the titles are: (1) "Il Foreign Language Teaching e l'impiego degli audio-visivi" (Foreign Language Teaching and the Use of Audio-Visual Methods) by D'Agostino, (2) "Le rôle et la nature de l'image dans l'enseignement programmé de l'anglais, langue seconde" (The Role and Nature of the…
The Effects of Visual-Verbal Redundancy and Recaps on Television News Learning.
ERIC Educational Resources Information Center
Son, Jinok; Davie, William
A study examined the effects of visual-verbal redundancy and recaps on learning from television news. Two factors were used: redundancy between the visual and audio channels, and the presence or absence of a recap. Manipulation of these factors created four conditions: (1) redundant pictures and words plus recap, (2) redundant pictures and words…
Colorado Multicultural Resources for Arts Education: Dance, Music, Theatre, and Visual Art.
ERIC Educational Resources Information Center
Cassio, Charles J., Ed.
This Colorado resource guide is based on the premise that the arts (dance, music, theatre, and visual art) provide a natural arena for teaching multiculturalism to students of all ages. The guide provides information to Colorado schools about print, disc, video, and audio tape materials and visual prints, as well as about individuals and organizations that…
Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval.
Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin
2016-04-01
There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function, as manifest in the ability to remember foreign language vocabulary, two groups of student volunteers (N = 64) aged 17 to 25 years were shown a PowerPoint presentation of 21 target language words with a picture, audio, and written form for every word. The vocabulary was presented in comfortable rooms with padded chairs, and the participants were provided with snacks so that they would be comfortable and relaxed. After the PowerPoint presentation they were exposed to two forms of visual stimuli for 27 min. The different formats contained either visually affective content (sexually suggestive, violent, or frightening material) or neutral content (a nature documentary). The group which was exposed to the emotive visual stimuli remembered significantly fewer words than the group which watched the emotively neutral nature documentary. Implications of this finding are discussed and suggestions made for ongoing research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Wei, E-mail: wlu@umm.edu; Neuner, Geoffrey A.; George, Rohini
2014-01-01
Purpose: To investigate whether coaching patients' breathing would improve the match between ITV_MIP (internal target volume generated by contouring in the maximum intensity projection scan) and ITV_10 (generated by combining the gross tumor volumes contoured in 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent 3 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV_10 and ITV_MIP. The match between ITV_MIP and ITV_10 was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITV_MIP improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITV_MIP and ITV_10 over FB. On average, ITV_MIP underestimated ITV_10 by 19%, 19%, and 21%, with centroid distances of 1.9, 2.3, and 1.7 mm and Dice coefficients of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITV_MIP did not correct for the mismatch between ITV_MIP and ITV_10. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITV_MIP and ITV_10. In general, ITV_MIP should be limited to lung cancers, and modification of ITV_MIP in each phase of the 4DCT data set is recommended.
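The match metrics named in this abstract are standard volume-comparison quantities. A minimal sketch of three of them (volume ratio, centroid distance, Dice coefficient) on binary masks, assuming NumPy; the function name and default voxel spacing are illustrative assumptions:

```python
# Sketch: volume ratio, centroid distance, and Dice coefficient between two
# contoured volumes represented as boolean 3-D masks on the same grid.
import numpy as np

def match_metrics(itv_mip, itv_10, spacing=(1.0, 1.0, 1.0)):
    spacing = np.asarray(spacing)                 # voxel size in mm
    volume_ratio = itv_mip.sum() / itv_10.sum()   # voxel volumes cancel

    # Centroid distance in mm
    c_mip = np.array(np.nonzero(itv_mip)).mean(axis=1) * spacing
    c_10 = np.array(np.nonzero(itv_10)).mean(axis=1) * spacing
    centroid_dist = np.linalg.norm(c_mip - c_10)

    # Dice coefficient: 2|A ∩ B| / (|A| + |B|)
    overlap = np.logical_and(itv_mip, itv_10).sum()
    dice = 2.0 * overlap / (itv_mip.sum() + itv_10.sum())
    return volume_ratio, centroid_dist, dice
```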
Chung, W S; Lim, S M; Yoo, J H; Yoon, H
2013-01-01
Factors related to sexual arousal differ between men and women, and the conditions for women to become aroused are more complex. However, the conventional audio-visual stimulation (AVS) materials used to evaluate sexual arousal are universal. In the present study, we investigated sex differences in the response to different types of AVS by studying activated areas of the brain using functional magnetic resonance imaging (fMRI). fMRI was performed during two types of AVS in 20 healthy heterosexual volunteers (aged 20-28 years, 10 men and 10 women). The two AVS types were: (1) mood type, erotic video clips with a concrete story, and (2) physical type, directly showing sexual intercourse and genitalia. fMRI images were analyzed and compared for each stimulation with a Mann-Whitney U test, with statistical significance set at P<0.05. Men preferred the physical type of AVS to the mood type (mean arousal score 2.14 vs 1.86 in men) and women preferred the mood type (mean arousal score 2.14 vs 1.86 in women) (P<0.05). Degrees of activation in brain areas differed between the sexes and between types of AVS for each sex. This should be considered when applying the AVS method to evaluate and diagnose female sexual dysfunction.
Multiplication: the use of 8 mm. film in community development.
Spurr, N
1966-01-01
Essentially, the motion picture is a means of communication. Over the last few years, there has been a great expansion in the use of audio-visual material in teaching and a growing market for audio-visual material. In the use of motion pictures, the acceptance of 8 millimeter film as a valid communication medium has revolutionized thinking regarding the motion picture film. No longer is sound film the only kind of respectable film; the single concept film -- or silent loop film -- has shown the value of the moving picture. There are many subjects which can appeal to a wide audience and so benefit from the lowered costs resulting from mass multiplication; if this were not the case, the use of motion picture film in education could never have got off the ground. Preparation for use is the real key to the most successful use of film. Personal experience suggests that village audiences in developing countries may need to be shown a film more than once to derive full benefit from it, and time must be given to discuss its content and implications. If motion picture film is to be made the basis of discussion, there must be some limitation on audience size; 8 millimeter film has the advantage that there is a limit to the size of picture which can be usefully projected. Present-day 8 millimeter equipment can record and play back magnetic sound, which is of great value to the community development worker. The community development worker is faced with problems not found within the framework of formal education, and making a film, making a tape recording, or taking photographs of a community to show to itself may prove a powerful catalyst to community action.
Laboratory and in-flight experiments to evaluate 3-D audio display technology
NASA Technical Reports Server (NTRS)
Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel
1994-01-01
Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, the 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets with an average response time of about two seconds and 17-degree accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.
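A production display of this kind encodes direction with head-related transfer functions and head tracking; as a much cruder sketch of the underlying idea, the following applies only interaural time and level differences for a given azimuth. The function name, gain rule, and head model are illustrative assumptions, not the system described above:

```python
# Sketch: crude directional audio cue via interaural time difference (ITD,
# Woodworth spherical-head model) and a simple interaural level difference.
import numpy as np

def binaural_pan(mono, sr, azimuth_deg, head_radius=0.0875, c=343.0):
    """Return (n_samples, 2) stereo; positive azimuth = source to the right."""
    az = np.radians(azimuth_deg)
    itd = head_radius / c * (az + np.sin(az))     # Woodworth ITD, seconds
    delay = int(round(abs(itd) * sr))             # interaural delay, samples
    gain_far = 10 ** (-6 * abs(np.sin(az)) / 20)  # up to ~6 dB level cue

    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * gain_far
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)
```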
Television News Without Pictures?
ERIC Educational Resources Information Center
Graber, Doris A.
1987-01-01
Describes "gestalt" coding procedures that concentrate on the meanings conveyed by audio-visual messages rather than on coding individual pictorial elements shown in a news story. Discusses the totality of meaning that results from the interaction of verbal and visual story elements, external settings, and the decoding proclivities of…
Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.
Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro
2017-08-01
Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues with visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. Our aims were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity, and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent), and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with the skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are thus able to trigger physiological responses regardless of valence, sensory modality, or level of emotional congruence.
Kawase, Saya; Hannah, Beverly; Wang, Yue
2014-09-01
This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers, serving as judges, perceived three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers as well as by native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than that of the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible in the AV than in the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV than in the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease the degree of intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.
75 FR 41093 - FM Table of Allotments, Maupin, Oregon
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-15
.... SUMMARY: The Audio Division grants the Petition for Reconsideration filed on behalf of Maupin Broadcasting... materials in accessible formats for people with disabilities (Braille, large print, electronic files, audio.... John A. Karousos, Assistant Chief, Audio Division, Media Bureau. [FR Doc. 2010-17226 Filed 7-14-10; 8...
The Use of Audio and Animation in Computer Based Instruction.
ERIC Educational Resources Information Center
Koroghlanian, Carol; Klein, James D.
This study investigated the effects of audio, animation, and spatial ability in a computer-based instructional program for biology. The program presented instructional material via text or audio with lean text and included eight instructional sequences presented either via static illustrations or animations. High school students enrolled in a…
Sounds of silence: How to animate virtual worlds with sound
NASA Technical Reports Server (NTRS)
Astheimer, Peter
1993-01-01
Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a great deal of authenticity from fast, high-quality visualization combined with sound effects. Sounds significantly increase the degree of immersion for human dwellers in imaginary worlds. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design', realized with the toolkit, enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.
Latorre, Victor R.; Watwood, Donald B.
1994-01-01
A short-range, radio frequency (RF) transmitting-receiving system provides both visual and audio warnings to the pilot of a helicopter or light aircraft of an upcoming power transmission line complex. Small, milliwatt-level narrowband transmitters, powered by the transmission line itself, are installed on top of selected transmission line support towers or within existing warning balls, and provide a continuous RF signal to approaching aircraft. The on-board receiver can be either a separate unit or a portion of the existing avionics, and can also share an existing antenna with another airborne system. Upon receipt of a warning signal, the receiver will trigger a visual and an audio alarm to alert the pilot to the potential power line hazard.
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
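Coarse-level segmentation of this kind rests on short-term features. The sketch below computes two common ones (short-time energy and zero-crossing rate) and applies toy thresholds; the function names and threshold values are illustrative assumptions, not the paper's actual classifier:

```python
# Sketch: short-term features for coarse audio classification.
import numpy as np

def short_term_features(x, sr, frame_ms=20):
    """Frame the signal and return per-frame energy and zero-crossing rate."""
    n = int(sr * frame_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    return energy, zcr

def coarse_label(energy, zcr, e_sil=1e-4, z_speech=0.1):
    # Toy rule: low energy -> silence; high ZCR -> speech-like; else other
    return np.where(energy < e_sil, "silence",
                    np.where(zcr > z_speech, "speech-like", "music/env"))
```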
Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J
2013-12-01
Emotion can be expressed by both the voice and the face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual-only and combined conditions, the time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio-only and visual-only conditions but did not differ from controls in the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
The visual management system of the Forest Service, USDA
Warren R. Bacon
1979-01-01
The National Forest Landscape Management Program began, as a formal program, at a Servicewide meeting in St. Louis in 1969 in response to growing agency and public concern for the visual resource. It is now an accepted part of National Forest management and is supported by a large and growing foundation of handbooks, research papers, and audio/visual programs. This...
Design guidelines for the use of audio cues in computer interfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sumikawa, D.A.; Blattner, M.M.; Joy, K.I.
1985-07-01
A logical next step in the evolution of the computer-user interface is the incorporation of sound, thereby using our sense of "hearing" in our communication with the computer. This allows our visual and auditory capacities to work in unison, leading to a more effective and efficient interpretation of information received from the computer than by sight alone. In this paper we examine earcons, which are audio cues used in the computer-user interface to provide information and feedback to the user about computer entities (these include messages and functions, as well as states and labels). The material in this paper is part of a larger study that recommends guidelines for the design and use of audio cues in the computer-user interface. The complete work examines the disciplines of music, psychology, communication theory, advertising, and psychoacoustics to discover how sound is utilized and analyzed in those areas. The resulting information is organized according to the theory of semiotics, the theory of signs, into the syntax, semantics, and pragmatics of communication by sound. Here we present design guidelines for the syntax of earcons. Earcons are constructed from motives, short sequences of notes with a specific rhythm and pitch, embellished by timbre, dynamics, and register. Compound earcons and family earcons are introduced; these are related motives that serve to identify a family of related cues. Examples of earcons are given.
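To make the notion of a motive concrete, here is a minimal sketch that synthesizes a two-note earcon with a fixed rhythm and pitch and a simple harmonic timbre; the note choice, envelope, and function names are illustrative assumptions, not examples from the paper:

```python
# Sketch: a two-note earcon "motive" (rhythm + pitch, crude timbre).
import numpy as np

def tone(freq, dur, sr=44100, harmonics=(1.0, 0.4, 0.2)):
    """Sum a few harmonics and apply a Hann envelope to soften attack/decay."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    x = sum(a * np.sin(2 * np.pi * freq * (k + 1) * t)
            for k, a in enumerate(harmonics))
    return x * np.hanning(len(t))

def earcon_motive(sr=44100):
    # Rising major third, short-long rhythm: one possible "message" motive
    notes = [(523.25, 0.12), (659.25, 0.24)]  # C5 then E5, durations in s
    gap = np.zeros(int(0.04 * sr))
    return np.concatenate([seg for f, d in notes
                           for seg in (tone(f, d, sr), gap)])
```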
Visual cues and listening effort: individual variability.
Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y
2011-10-01
The purpose of this study was to investigate the effect of visual cues on listening effort, as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across the AO and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.
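Adjusting signal-to-noise ratio, as described above, amounts to scaling a noise track against a speech track. A minimal sketch under the usual power-ratio definition of SNR; the function name and the example target value are illustrative assumptions:

```python
# Sketch: mix speech and noise at a target SNR (in dB).
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Assumes noise is at least as long as speech and not all zeros."""
    noise = noise[: len(speech)]
    p_s = np.mean(speech ** 2)                      # speech power
    p_n = np.mean(noise ** 2)                       # noise power
    scale = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return speech + scale * noise

# e.g., a -2 dB SNR condition: mixed = mix_at_snr(speech, babble, -2.0)
```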
Audio Spectrogram Representations for Processing with Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Wyse, L.
2017-05-01
One of the decisions that arise when designing a neural network for any application is how the data should be represented in order to be presented to, and possibly generated by, a neural network. For audio, the choice is less obvious than it seems to be for visual images, and a variety of representations have been used for different applications including the raw digitized sample stream, hand-crafted features, machine discovered features, MFCCs and variants that include deltas, and a variety of spectral representations. This paper reviews some of these representations and issues that arise, focusing particularly on spectrograms for generating audio using neural networks for style transfer.
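As one concrete instance of the spectral representations surveyed above, the following sketch computes a log-scaled mel spectrogram shaped for a CNN. It assumes librosa is available; the file name and parameter values are illustrative, not the paper's settings:

```python
# Sketch: log-mel spectrogram as CNN input.
import librosa
import numpy as np

y, sr = librosa.load("clip.wav", sr=22050)   # hypothetical input file
S = librosa.feature.melspectrogram(y=y, sr=sr,
                                   n_fft=1024, hop_length=256, n_mels=128)
log_S = librosa.power_to_db(S, ref=np.max)   # compress dynamic range
cnn_input = log_S[np.newaxis, np.newaxis, :, :]  # (batch, channel, H, W)
```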