Al-Sharhan, Jamal A.
A survey of faculty in girls colleges in Riyadh, Saudi Arabia, investigated teaching experience, academic rank, importance of audiovisual aids, teacher training, availability of audiovisual centers, and reasons for not using audiovisual aids. Proposes changes to increase use of audiovisual aids: more training courses, more teacher release time,…
Rigg, Robinson P.
An attempt is made to show the importance of modern audiovisual (AV) aids and techniques to management training. The first two chapters give the background to the present situation facing the training specialist. Chapter III considers the AV aids themselves in four main groups: graphic materials, display equipment which involves projection, and…
Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…
Presents information on a variety of audiovisual materials from government and nongovernment sources. Topics include aerodynamics and conditions of flight, airports, navigation, careers, history, medical factors, weather, films for classroom use, and others. (Author/SA)
Center for Applied Linguistics, Arlington, VA.
This bibliography lists audiovisual materials used in the teaching of English as a second language. The sections of the bibliography include: (1) Pictures, Charts, and Flash cards, (2) Flannel Aids, (3) Games and Puzzles, (4) Films, (5) Filmstrips and Transparencies, (6) Aural Aids, and (7) Miscellaneous Aids, including a classroom thermometer, a…
Moche, Dinah L.
Discusses the use of easily available audiovisual aids to teach a one semester course in astronomy and space physics to liberal arts students of both sexes at Queensborough Community College. Included is a list of teaching aids for use in astronomy instruction. (CC)
Eduplan Informa, 1971
This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…
Uys, P. G.
This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…
Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi
This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.
An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.
Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.
This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…
Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha
Use of audio visual tools to aid in medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various concepts of the topic, while keeping in view Mayer's and Ellaway's guidelines for multimedia presentation. A pre-post test study on subject knowledge was conducted for 100 students with the video shown as intervention. A retrospective pre-study was conducted as a survey which inquired about students' understanding of the key concepts of the topic, and feedback on our video was collected. Students performed significantly better in the post test (mean score 8.52 vs. 5.45 in pre-test), responded positively in the retrospective pre-test, and gave positive feedback on our video presentation. Well-designed multimedia tools can aid in cognitive processing and enhance working memory capacity, as shown in our study. In times when "smart" device penetration is high, information and communication tools in medical education, which can act as an essential aid and not as a replacement for traditional curriculums, can be beneficial to the students. © 2015 by The International Union of Biochemistry and Molecular Biology, 44:241-245, 2016.
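The pre/post design reported above rests on a paired comparison of each student's scores. As a minimal illustration (the scores below are hypothetical, not the study's data), the paired t-statistic underlying such a comparison can be computed with the Python standard library:

```python
import math
import statistics as stats

def paired_t(pre, post):
    """Paired t-statistic: mean of per-student score differences divided by
    the standard error of those differences (df = n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return stats.mean(diffs) / (stats.stdev(diffs) / math.sqrt(n))

# Hypothetical pre/post scores for five students
pre = [5, 6, 4, 5, 7]
post = [8, 9, 7, 9, 10]
print(round(paired_t(pre, post), 2))  # → 16.0
```

A large positive t, compared against the t distribution with n - 1 degrees of freedom, is what supports a significant post-test gain like the one reported.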
Yu, Luodi; Rao, Aparna; Zhang, Yang; Burton, Philip C.; Rishiq, Dania; Abrams, Harvey
Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuipsTM (RMQ) targeting speechreading during the second half of the study period for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROI), including auditory cortex and visual cortex for uni-sensory processing and superior temporal sulcus (STS) for AV integration, were identified for each participant through an independent functional localizer task. The results showed experience-dependent changes involving ROIs of auditory cortex, STS and functional connectivity between uni-sensory ROIs and STS from pretest to posttest in both cases. These data provide initial evidence for malleable, experience-driven cortical functionality for AV speech perception in elderly hearing-impaired people and call for further studies with a much larger sample and systematic controls to fill the knowledge gap in understanding brain plasticity associated with auditory rehabilitation in the aging population. PMID:28270763
Ju, Harang; Kim, Siyong; Read, Paul; Trifiletti, Daniel; Harrell, Andrew; Libby, Bruce; Kim, Taeho
In radiotherapy, only a few immobilization systems, such as the open-face mask and the head mold with a bite plate, are available for claustrophobic patients, and they involve a certain degree of discomfort. The purpose of this study was to develop a remote-controlled and self-contained audiovisual (AV)-aided interactive system with the iPad mini with Retina display for intrafractional motion management in brain/H&N (head and neck) radiotherapy for claustrophobic patients. The self-contained, AV-aided interactive system utilized two tablet computers: one for AV-aided interactive guidance for the subject and the other for remote control by an operator. The tablet for audiovisual guidance traced the motion of a colored marker using the built-in front-facing camera, and the remote control tablet in the control room used infrastructure Wi-Fi networks for real-time communication with the other tablet. In the evaluation, a programmed QUASAR motion phantom was used to test the temporal and positional accuracy and resolution. Position data were also obtained from ten healthy volunteers with and without guidance to evaluate the reduction of intrafractional head motion in simulations of a claustrophobic brain or H&N case. In the phantom study, the temporal and positional resolution was 24 Hz and 0.2 mm. In the volunteer study, the average superior-inferior and right-left displacement was reduced from 1.9 mm to 0.3 mm and from 2.2 mm to 0.2 mm with AV-aided interactive guidance, respectively. The superior-inferior and right-left positional drift was reduced from 0.5 mm/min to 0.1 mm/min and from 0.4 mm/min to 0.04 mm/min with audiovisual-aided interactive guidance. This study demonstrated a reduction in intrafractional head motion using a remote-controlled and self-contained AV-aided interactive system of iPad minis with Retina display, easily obtainable and cost-effective tablet computers. This approach can potentially streamline clinical flow for claustrophobic patients without a head mask and
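The abstract does not describe the tracking algorithm in detail; as a simplified sketch (not the authors' implementation — the frame, marker color, and tolerance below are hypothetical), tracing a colored marker reduces to isolating pixels near the marker color in each camera frame and taking their centroid:

```python
import numpy as np

def track_marker(frame_rgb, target=(255, 0, 0), tol=60.0):
    """Return the (row, col) centroid of pixels within `tol` (Euclidean
    RGB distance) of `target`, or None if no pixel qualifies."""
    dist = np.linalg.norm(frame_rgb.astype(float) - np.array(target), axis=2)
    rows, cols = np.nonzero(dist < tol)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Synthetic 100x100 frame with a red 10x10 marker at rows 40-49, cols 60-69
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:50, 60:70] = (255, 0, 0)
print(track_marker(frame))  # centroid ≈ (44.5, 64.5)
```

Per-frame centroids like these, sampled at the reported 24 Hz, would yield the displacement traces from which millimeter-level drift figures are derived.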
Sahin, Mehmet; Sule, St.; Seçer, Y. E.
This study aims to find out the challenges encountered in the use of video as audio-visual material as a warm-up activity in aviation English course at high school level. This study is based on a qualitative study in which focus group interview is used as the data collection procedure. The participants of focus group are four instructors teaching…
This paper provides an inventory and summary of current and planned international information clearing house services in the field of population/family planning, worldwide. Special emphasis is placed on services relating to audio-visual aids, educational materials, and information/education/communication support, as these items and activities have…
Cummins, John; And Others
This handbook is part of a British series of publications written for part-time tutors, volunteers, organizers, and trainers in the adult continuing education and training sectors. It offers practical advice on audiovisual aids and educational technology for tutors and organizers. The first chapter discusses how one learns. Chapter 2 addresses how…
Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn
This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants’ auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a groups-by-session interaction was observed. Conclusion: Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research. PMID:28348542
The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids to classroom instruction at the high school level in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)
Reviewed is an eight module course in respiratory physiology that utilizes audiovisual cassettes and tapes. The topics include the lung, ventilation, blood flow, and breathing. It is rated excellent in content and quality. (SL)
Describes the availability of United States government-produced audiovisual materials and discusses two audiovisual clearinghouses--the National Audiovisual Center (NAC) and the National Library of Medicine (NLM). Finding aids made available by NAC, NLM, and other government agencies are mentioned. NAC and the U.S. Government Printing Office…
Department of Housing and Urban Development, Washington, DC. Office of Policy Development and Research.
This directory presents an annotated bibliography of non-print information resources dealing with solar energy. The document is divided by type of audio-visual medium, including: (1) Films, (2) Slides and Filmstrips, and (3) Videotapes. A fourth section provides addresses and telephone numbers of audiovisual aids sources, and lists the page…
Carr, William D.
One of the principal values of audiovisual materials is that they permit the teacher to depart from verbal and printed symbolism, and at the same time to provide a wider real or vicarious experience for pupils. This booklet is designed to aid the teacher in using audiovisual material effectively. It covers visual displays, non-projected materials,…
Kaur, Haramritpal; Singh, Gurpreet; Singh, Amandeep; Sharda, Gagandeep; Aggarwal, Shobha
Background and Aims: Perioperative stress is an often ignored, commonly occurring phenomenon. Little or no prior knowledge of anesthesia techniques can increase it significantly. Patients awaiting surgery may experience a high level of anxiety. The preoperative visit is an ideal time to educate patients about anesthesia and address these fears. The present study evaluates two different approaches, i.e., standard interview versus informative audiovisual presentation combined with standard interview, for information gain (IG) and its impact on patient anxiety during the preoperative visit. Settings and Design: This prospective, double-blind, randomized study was conducted in a Tertiary Care Teaching Hospital in rural India over 2 months. Materials and Methods: The study was carried out among 200 American Society of Anesthesiologist Grade I and II patients in the age group 18–65 years scheduled to undergo elective surgery under general anesthesia. Patients were allocated to one of two equal-sized groups, Group A and Group B. Baseline anxiety and the information desire component were assessed using the Amsterdam Preoperative Anxiety and Information Scale for both groups. Group A patients received a preanesthetic interview with the anesthesiologist and were reassessed. Group B patients were shown a short audiovisual presentation about the operation theater and anesthesia procedure followed by the preanesthetic interview and were also reassessed. In addition, patient satisfaction score (PSS) and IG were assessed at the end of the preanesthetic visit using a standard questionnaire. Statistical Analysis Used: Data were expressed as mean and standard deviation. Nonparametric tests such as Kruskal–Wallis, Mann–Whitney, and Wilcoxon signed rank tests, and Student's t-test and Chi-square test were used for statistical analysis. Results: Patients' IG was significantly greater in Group B (5.43 ± 0.55) as compared to Group A (4.41 ± 0.922) (P < 0.001). There was
American Council on Education, Washington, DC. HEATH/Closer Look Resource Center.
The fact sheet presents a suggested evaluation framework for use in previewing audiovisual materials, a list of selected resources, and an annotated list of films which were shown at the AHSSPPE '83 Media Fair as part of the national conference of the Association on Handicapped Student Service Programs in Postsecondary Education. Evaluation…
Physiology Teacher, 1976
Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…
Bryce, C. F. A.; Stewart, A. M.
A brief review of the characteristics of computer assisted instruction and the attributes of audiovisual media introduces this report on a project designed to improve the effectiveness of computer assisted learning through the incorporation of audiovisual materials. A discussion of the implications of research findings on the design and layout of…
Wilds, Thomas, Comp.; And Others
Provided is a compilation of recently annotated audiovisual materials which present techniques, models, or other specific information that can aid in providing comprehensive services to the handicapped. Entries which include a brief description, name of distributor, technical information, and cost are presented alphabetically by title in eight…
Möttönen, Riikka; Sams, Mikko
Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.
Pula, Fred John
Interest in audiovisual aids in education has been increased by the shortage of classrooms and good teachers and by the modern predisposition toward learning by visual concepts. Effective utilization of audiovisual materials and equipment depends, most importantly, on adequate preparation of the teacher in operating equipment and in coordinating…
Kernan, Margaret; And Others
Includes nine articles that discuss audiovisuals in junior and senior high school libraries. Highlights include skills that various media require and foster; teaching students how to make effective audiovisuals; film production; state media contests; library orientation videos; slide-tape shows; photographic skills; and the use of audiovisuals to…
The Associateship Diploma in Education is seen as an important means of promoting and encouraging the use of audiovisual aids in Nigerian primary schools. Objectives of audiovisual instruction, a course outline, and procedures for teaching the course are suggested, and use of aids in primary schools is surveyed. (Author/MLW)
Describes an exhibition for the benefit of teachers of English in Arab Primary Schools, which was prepared by third-year students at the Teachers College for Arab Teachers. The exhibition included games, songs, audiovisual aids, crossword puzzles, vocabulary, spelling booklets, preposition aids, and worksheet and lesson planning aids. (SED)
Stockton Unified School District, CA.
A catalog has been prepared to help teachers select audiovisual materials which might be helpful in elementary classrooms. Included are filmstrips, slides, records, study prints, films, tape recordings, and science equipment. Teachers are reminded that they are not limited to use of the suggested materials. Appropriate grade levels have been…
This document consists of four separate handouts all related to the appraisal of audiovisual (AV) materials: "How to Work with an Appraiser of AV Media: A Convenient Check List for Clients and Their Advisors," helps a client prepare for an appraisal, explaining what is necessary before the appraisal, the appraisal process and its costs,…
National Inst. of Mental Health (DHEW), Rockville, MD.
Presented are approximately 2,300 abstracts on audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…
Patterson, Pierce E.; And Others
Recommended standards for audiovisual equipment were presented separately for grades kindergarten through six, and for junior and senior high schools. The elementary school equipment considered was the following--classroom light control, motion picture projector with mobile stand and spare reels, combination 2 inch x 2 inch slide and filmstrip…
Kenney, Brigitte L.
Describes major uses of film, television, and video in mental health field and discusses problems in selection, acquisition, cataloging, indexing, storage, transfer, care of tapes, patients' rights, and copyright. A sample patient consent form for media recording, borrower's evaluation sheet, sources of audiovisuals and reviews, and 35 references…
Informationszentrum fuer Fremdsprachenforschung, Marburg (West Germany).
This listing, updating the 1969 publication, cites commercially available language instruction programs having audiovisual components. Supplementary audiovisual aids are available only as components of total programs noted in this work. Organization is by language and commercial source. Indications for classroom applications and prices (in German…
Mohr, Peter, Comp.
This listing cites commercially available programs for foreign language instruction which have audiovisual components. Supplementary audiovisual aids without accompanying basic text materials are not included. Organization is by language and commercial source. Indications for classroom application and prices (in German currency) are provided. The…
Kronish, Sidney J.
The Audiovisual Materials Evaluation Committee prepared this report to guide elementary and secondary teachers in their selection of supplementary economic education audiovisual materials. It updates a 1969 publication by adding 107 items to the original guide. Materials included in this report: (1) contain elements of economic analysis--facts,…
Babin, Pierre, Ed.
A series of twelve essays discuss the use of audiovisuals in religious education. The essays are divided into three sections: one which draws on the ideas of Marshall McLuhan and other educators to explore the newest ideas about audiovisual language and faith, one that describes how to learn and use the new language of audio and visual images, and…
McKeone, Charles J.
This compilation of instructional aids for use in air-conditioning and refrigeration training programs contains lists of visual and audiovisual training aids and guest lecturers available from member companies of the Air-Conditioning and Refrigeration Institute as an industry service to school officials interested in conducting such programs. The…
Guy, Robin Frederick
This paper describes a number of different types of training aids currently employed in online training: non-interactive audiovisual presentations; interactive computer-based aids; partially interactive aids based on recorded searches; print-based materials; and kits. The advantages and disadvantages of each type of aid are noted, and a table…
Sayles, Ellen L.
A study was made to find out the average amount of time that teachers in South Bend, Indiana, spent designing audiovisual aids and to determine their awareness of the availability of audiovisual production classes. A questionnaire was sent to 30% of the teachers of grades 1-6 asking the amount of time they normally spent producing audiovisual…
Over the last 10 years, French television's Institute of Audiovisual Communication (INA) has shifted from modernist to post-modernist practice in broadcasting in a series of innovative audiovisual magazine programs about communication, and in a series of longer "compilation" documentaries. The first of INA's audiovisual magazines,…
Educational Products Information Exchange Inst., Stony Brook, NY.
The preventive maintenance system for audiovisual equipment presented in this handbook is designed by specialists so that it can be used by nonspecialists at school sites. The report offers specific advice on safety factors and also lists major problems that should not be handled by nonspecialists. Other aspects of a preventive maintenance system…
van Linden, Sabine; Vroomen, Jean
In order to examine whether children adjust their phonetic speech categories, children of two age groups, five-year-olds and eight-year-olds, were exposed to a video of a face saying /aba/ or /ada/ accompanied by an auditory ambiguous speech sound halfway between /b/ and /d/. The effect of exposure to these audiovisual stimuli was measured on…
Minnesota State Dept. of Education, St. Paul. Div. of Instruction.
This list of audiovisual materials for environmental education was prepared by the State of Minnesota, Department of Education, Division of Instruction, to accompany the pilot curriculum in environmental education. The majority of the materials listed are available from the University of Minnesota, or from state or federal agencies. The…
American Library Association, Chicago, IL.
Chapter 12 of the Anglo-American Cataloging Rules has been revised to provide rules for works in the principal audiovisual media (motion pictures, filmstrips, videorecordings, slides, and transparencies) as well as instructional aids (charts, dioramas, flash cards, games, kits, microscope slides, models, and realia). The rules for main and added…
Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia
We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience—i.e., the exposure to a double phonological code during childhood—affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically “deaf” and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech. PMID:25374551
Missouri State Dept. of Education, Jefferson City.
This handbook was designed for use by school administrators in developing a total audiovisual (AV) program. Attention is given to the importance of audiovisual media to effective instruction, administrative personnel requirements for an AV program, budgeting for AV instruction, proper utilization of AV materials, selection of AV equipment and…
Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata
The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. When conditions (a) and (b) were compared, there was a significant improvement in comfort assessment when visual information was added, but in only three out of seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people could differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping.
Mann, Joe, Ed.; Henderson, Jim, Ed.
An annotated listing of a variety of audiovisual formats on content related to the social-rehabilitation process is provided. The materials in the listing were selected from a collection of over 200 audiovisual catalogs. The major portion of the materials has not been screened. The materials are classified alphabetically by the following subject…
Liu, Ming; Xu, Xun; Huang, Thomas S.
Combining different modalities for pattern recognition tasks is a very promising field. Humans routinely fuse information from different modalities to recognize objects, perform inference, and so on. Audio-visual gender recognition is one of the most common tasks in human social communication: humans can identify gender by facial appearance, by speech, and also by body gait. Indeed, human gender recognition is a multi-modal data acquisition and processing procedure. However, computational multimodal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition, exploring the improvement gained by combining different modalities.
Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun
Human emotions can be expressed through many bio-symbols; speech and facial expression are two of them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time use and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and optical-flow-based tracking for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected, owing to synchronization, when speech and video are fused. The experimental results demonstrate that this system performs well in real-time use and has a high recognition rate. Our results also suggest that multimodule fused recognition will become the trend in emotion recognition.
Feng, Ting; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao
Audiovisual integration has been known to enhance perception; nevertheless, another fundamental audiovisual interaction, i.e., attention rivalry, has not been well investigated. This paper studied attention rivalry under irrelevant audiovisual stimulation using event-related potential (ERP) and behavioral analysis, and tested the existence of a vision-dominated rivalry model. Participants were required to respond to the target in a bi- or unimodal audiovisual stimulation paradigm. The enhanced amplitude of central P300 under the visual-target bimodal stimulus indicated that vision demanded more cognitive resources, and the significant amplitude of frontal P200 under the bimodal stimulus with a non-target auditory component implied that the brain largely suppressed processing of the non-target auditory information. The ERP results, together with analysis of the behavioral data and the subtraction waves, indicated a vision-dominated attention rivalry model involved in audiovisual interaction. Furthermore, the latencies of the P200 and P300 components implied that audiovisual attention rivalry occurred within the first 300 ms after stimulus onset: significant differences were found in P200 latencies among the three target bimodal stimuli, while no difference existed in P300 latencies. Attention shifting and redirecting might be the cause of such early audiovisual rivalry.
Zampini, Massimiliano; Guest, Steve; Shore, David I; Spence, Charles
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.
Describes the development of an audiovisual training course in duck husbandry which consists of synchronized tapes and slides. The production of the materials, equipment needs, operations, cost, and advantages of the program are discussed. (BM)
Educational Foundation for Visual Aids, London (England).
Audiovisual aids for teaching about the geography of Europe, that may be bought or rented from suppliers in Britain, are listed in this 120-page catalog. Audiovisual materials available include films, filmstrips, slides, overhead projector transparencies, wallsheets, prints, records, tapes, and teaching kits. Each catalog entry describes the…
Holt, Rachael; Kirk, Karen; Pisoni, David; Burckhartzmeyer, Lisa; Lin, Anna
The Audiovisual Lexical Neighborhood Sentence Test (AVLNST), a new, recorded speech recognition test for children with sensory aids, was administered in multiple presentation modalities to children with normal hearing and vision. Each sentence consists of three key words whose lexical difficulty is controlled according to the Neighborhood Activation Model (NAM) of spoken word recognition. According to NAM, the recognition of spoken words is influenced by two lexical factors: the frequency of occurrence of individual words in a language, and how phonemically similar the target word is to other words in the listener's lexicon. These predictions are based on auditory similarity only, and thus do not take into account how visual information can influence the perception of speech. Data from the AVLNST, together with those from recorded audiovisual versions of isolated word recognition measures, the Lexical Neighborhood Test and the Multisyllabic Lexical Neighborhood Test, were used to examine the influence of visual information on speech perception in children. Further, the influence of top-down processing on speech recognition was examined by evaluating performance on the recognition of words in isolation versus words in sentences. [Work supported by the American Speech-Language-Hearing Foundation, the American Hearing Research Foundation, and the NIDCD, T32 DC00012 to Indiana University.]
Association of Coll. and Research Libraries, Chicago, IL.
The purpose of these guidelines, prepared by the Audio-Visual Committee of the Association of College and Research Libraries, is to supply basic assistance to those academic libraries that will assume all or a major portion of an audio-visual program. They attempt to assist librarians to recognize and develop their audio-visual responsibilities…
Knotts, M A; Mueller, D
A new more comprehensive system for cataloging audiovisual materials is described. Existing audiovisual cataloging systems contain mostly descriptive information, publishers' or producers' summaries, and order information. This paper discusses the addition of measurable learning objectives to this standard information, thereby enabling the potential user to determine what can be learned from a particular audiovisual unit. The project included media in nursing only. A committee of faculty and students from the University of Alabama in Birmingham School of Nursing reviewed the materials. The system was field-tested at nursing schools throughout Alabama; the schools offered four different types of programs. The system and its sample product, the AVLOC catalog, were also evaluated by medical librarians, media specialists, and other nursing instructors throughout the United States. PMID:50106
McDaniel, Margaret, Comp.
Over one hundred citations, the majority of which are current works dating from the 1970s, are provided in this annotated bibliography focusing on energy. Entries include books, pamphlets, reports, magazine articles, bibliographies, newsletters, and curriculum materials, such as audiovisual aids, guides and units, and simulations which will be…
Hocking, Julia; Price, Cathy J.
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…
Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta
Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…
Mutchie, Kelly D.; And Others
A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)
Gulliford, Nancy L.
Developed by the Westinghouse Electric Corporation, Video Audio Compressed (VIDAC) is a compressed time, variable rate, still picture television system. This technology made it possible for a centralized library of audiovisual materials to be transmitted over a television channel in very short periods of time. In order to establish specifications…
Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa
The paper describes the adaptation methods of the active methodologies of the new European higher education area in the new Audiovisual Communication degree under the perspective of subjects related to the area of the interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…
Oates, Stanton C.
An audiovisual equipment manual provides both the means of learning how to operate equipment and information needed to adjust equipment that is not performing properly. The manual covers the basic principles of operation for filmstrip-slide projectors, motion picture projectors, opaque projectors, overhead projectors, portable screens, record…
Mayo, Kathleen; Rider, Sheila
Disabled persons, family members, organizations, and libraries are often looking for materials to help inform, educate, or challenge them regarding the issues surrounding disabilities. This directory of audiovisual materials available from the State Library of Florida includes materials that present ideas and personal experiences covering a range…
Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)
Ministry of Education, Tokyo (Japan).
This paper summarizes the findings of a national survey conducted for the Ministry of Education, Science, and Culture in 1986 to determine the kinds of audiovisual equipment available in Japanese schools, together with the rate of diffusion for the various types of equipment, the amount of teacher participation in training for their use, and the…
Huang, Thomas S.; Zeng, Zhihong
Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
Suter, Emanuel; Waddell, Wendy H.
Defines attributes of quality in content, instructional design, technical production, and packaging of audiovisual materials used in the education of health professionals. Seven references are listed. (FM)
Martin, R R; Haroldson, S K
Unsophisticated raters, using 9-point interval scales, judged speech naturalness and stuttering severity of recorded stutterer and nonstutterer speech samples. Raters judged separately the audio-only and audiovisual presentations of each sample. For speech naturalness judgments of stutterer samples, raters invariably judged the audiovisual presentation more unnatural than the audio presentation of the same sample; but for the nonstutterer samples, there was no difference between audio and audiovisual naturalness ratings. Stuttering severity ratings did not differ significantly between audio and audiovisual presentations of the same samples. Rater reliability, interrater agreement, and intrarater agreement for speech naturalness judgments were assessed.
Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.
We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442
Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W
Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention.
Smayda, Kirsten E.; Van Engen, Kristin J.; Maddox, W. Todd; Chandrasekaran, Bharath
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18–35) and thirty-three older adults (ages 60–90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when…
Lewkowicz, David J.
Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…
Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…
Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).
Designed to provide information on public library services to the handicapped, this pamphlet contains case studies from three different countries on various aspects of the provision of audiovisual services to the disabled. The contents include: (1) "The Value of Audiovisual Materials in a Children's Hospital in Sweden" (Lis Byberg); (2)…
This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…
California Univ., Los Angeles. Biomedical Library.
This manual offers information on a wide variety of health-related audiovisual materials (AVs) in many formats: video, motion picture, slide, filmstrip, audiocassette, transparencies, microfilm, and computer assisted instruction. Intended for individuals who are just learning about audiovisual materials and equipment management, the manual covers…
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei
An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.
Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.
Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…
Aleksandrov, Evgeniy P.
Audio-visual learning technologies offer great opportunities for the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of using audiovisual media texts in a series of social sciences and humanities courses in the university curriculum.
van Stapele, Peter, Ed.; Sutton, Clifford C., Ed.
The 15 articles in this special issue focus on learning about the audiovisual mass media and education, especially television and film, in relation to various pedagogical and didactical questions. Individual articles are: (1) "Audiovisual Mass Media for Education in Pakistan: Problems and Prospects" (Ahmed Noor Kahn); (2) "The Role of the…
Bautista Garcia-Vera, Antonio
We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…
Cortina, L. M.
Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoacán 04510, MEXICO. As stated in the special session description, 21st century undergraduate education has access to resources and experiences that go beyond university classrooms. However, in some cases these resources go largely unused for a number of reasons, such as logistic problems, restricted internet and telecommunication access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. A course taught by teleconference demands effort from students and teachers without physical contact, but participants have access to multimedia that supports the presentation. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of the natural phenomena integral to the Earth sciences. Cooperation with international partnerships, providing access to new materials, experiences, and field practices, will greatly add to our efforts. We will present specific examples of our experiences in the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.
Chandrasekaran, Chandramouli; Trubanova, Andrea; Stillittano, Sébastien; Caplier, Alice; Ghazanfar, Asif A.
Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver. PMID:19609344
California Foreign Language Teachers Association.
This document is a collection of articles on foreign language education from both state and national sources. The articles deal with trends in the field, resources for the foreign language teacher, creative student work and audiovisual teaching aids. The volume is divided into the following sections: (1) general language (including the articles…
De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T.
Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918
Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134
This article contains the script for a slide-tape presentation entitled Working Against AIDS, developed by the Brazil Family Planning Association (BEMFAM) to debunk common misconceptions about the disease. This audio-visual, which targets Brazilian workers, can be used during talks, seminars, and meetings. A discussion of the issues involved usually follows the presentation of Working Against AIDS. The presentation contains 30 illustrated slides (these are included in the article). The presentation begins by explaining that much of the information concerning AIDS is prejudicial and misleading. The next few slides point out some of the common misconceptions about AIDS, such as claims denying the existence of the disease, or suggestions that only homosexuals and prostitutes are at risk. The presentation then goes on to explain the ways in which the virus can and cannot be transmitted. Then it discusses how the virus destroys the body's natural defenses and explains the ensuing symptoms. Slides 14 and 15 point out that no cure yet exists for AIDS, making prevention essential. Slides 16-23 explain what actions are considered to be high risk and which ones do not entail risk. Noting that AIDS can be prevented, slide 24 says that the disease should not present an obstacle to spontaneous manifestations of human relations. The next slide explains that condoms should always be used when having sex with someone who could be infected with AIDS. Finally, slides 26-30 demonstrate the proper way to use and dispose of a condom.
Frisch, Stefan A.; Nikjeh, Dee Adams
A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word initial, word medial, and word final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45 deg angle off center, and ultrasound video of the tongue in the mid-sagittal plane. The files contained in the database are suitable for examination by the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.
Peelle, Jonathan E; Sommers, Mitchell S
During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration.
Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory
Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35% identification of /apa/ compared to ~5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309
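The classification analysis described above can be caricatured in a few lines: random per-frame visibility masks are scored by the observer's response, and a difference of mask averages reveals which frames drove perception. Everything below (mask format, frame count, the simulated observer, the informative frame) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_frames = 2000, 20
# Random transparency masks: 1 = visual speech visible in that frame.
masks = rng.integers(0, 2, size=(n_trials, n_frames))

# Hypothetical observer: visibility of one critical early frame (frame 5,
# standing in for temporally-leading visual information) makes the McGurk
# percept /ata/ more likely, i.e. fewer /apa/ responses.
p_apa = 0.35 - 0.25 * masks[:, 5]
responses = rng.random(n_trials) < p_apa   # True = responded /apa/

# Classification image: mean mask on /apa/ trials minus mean mask on
# non-/apa/ trials; frames that drive the percept stand out.
cimg = masks[responses].mean(axis=0) - masks[~responses].mean(axis=0)
print(int(np.argmin(cimg)))   # the influential frame in this toy simulation
```

The real procedure operates on spatiotemporal masks and yields a map over both space and time, but the scoring logic is the same.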
Henelius, Andreas; Jagadeesan, Sharman; Huotilainen, Minna
The successful recording of neurophysiologic signals, such as event-related potentials (ERPs) or event-related magnetic fields (ERFs), relies on precise information about stimulus presentation times. We have developed an accurate and flexible audiovisual sensor solution operating in real time for on-line use in both auditory and visual ERP and ERF paradigms. The sensor functions independently of the audio or video stimulus presentation tools or signal acquisition system used. The sensor solution consists of two independent sensors: one for sound and one for light. The microcontroller-based audio sensor incorporates a novel approach to the detection of natural sounds such as multipart audio stimuli, using an adjustable dead time. This aids in producing exact markers for complex auditory stimuli and reduces the number of false detections. The analog photosensor circuit detects changes in light intensity on the screen and produces a marker for changes exceeding a threshold. The microcontroller software for the audio sensor is free and open source, allowing other researchers to customise the sensor for use in specific auditory ERP/ERF paradigms. The hardware schematics and software for the audiovisual sensor are freely available from the webpage of the authors' lab.
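The adjustable dead time described for the audio sensor amounts to threshold detection with a refractory period. A minimal sketch of the idea (not the authors' firmware; the threshold, dead time, and test signal are made up):

```python
def detect_onsets(samples, threshold, dead_time, fs):
    """Return onset times (s) for samples whose absolute value crosses
    `threshold`, ignoring further crossings for `dead_time` seconds.

    The dead time merges the multiple threshold crossings of a complex,
    multipart sound into a single marker, as described for the sensor.
    """
    onsets = []
    dead_samples = int(dead_time * fs)
    next_allowed = 0
    for i, s in enumerate(samples):
        if i >= next_allowed and abs(s) >= threshold:
            onsets.append(i / fs)
            next_allowed = i + dead_samples   # suppress re-triggering
    return onsets

# A multipart burst at t = 0 s with an internal gap, and a second
# stimulus at t = 0.5 s, sampled at 1 kHz.
fs = 1000
sig = [0.0] * 1000
for i in list(range(0, 40)) + list(range(60, 100)):  # one multipart sound
    sig[i] = 1.0
for i in range(500, 540):                            # next stimulus
    sig[i] = 1.0

print(detect_onsets(sig, 0.5, 0.2, fs))  # -> [0.0, 0.5]
```

Without the 200 ms dead time, the gap inside the first burst would produce a spurious second marker at t = 0.06 s.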
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas, some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.
Garrett County, Maryland volunteered to act as a pre-overseas learning laboratory for AID (Agency for International Development) interns who practiced data collection and planning techniques with the help of local citizenry. (JC)
Keller, Arielle S.; Sekuler, Robert
We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193
Spence, C; Driver, J
Two experiments examined any inhibition-of-return (IOR) effects from auditory cues and from preceding auditory targets upon reaction times (RTs) for detecting subsequent auditory targets. Auditory RT was delayed if the preceding auditory cue was on the same side as the target, but was unaffected by the location of the auditory target from the preceding trial, suggesting that response inhibition for the cue may have produced its effects. By contrast, visual detection RT was inhibited by the ipsilateral presentation of a visual target on the preceding trial. In a third experiment, targets could be unpredictably auditory or visual, and no peripheral cues intervened. Both auditory and visual detection RTs were now delayed following an ipsilateral versus contralateral target in either modality on the preceding trial, even when eye position was monitored to ensure central fixation throughout. These data suggest that auditory target-target IOR arises only when target modality is unpredictable. They also provide the first unequivocal evidence for cross-modal IOR, since, unlike other recent studies (e.g., Reuter-Lorenz, Jha, & Rosenquist, 1996; Tassinari & Berlucchi, 1995; Tassinari & Campara, 1996), the present cross-modal effects cannot be explained in terms of response inhibition for the cue. The results are discussed in relation to neurophysiological studies and audiovisual links in saccade programming.
Bezrukova, E Iu; Zatsepa, S A
The paper is devoted to topical problems in the use of innovative information and communication technologies (ICT) in higher medical education, including postgraduate professional education. It sets out the key principles for organizing an audiovisual technology-based educational process and gives numerous practical examples of the real-world use of ICT in the education of medical and other specialists, along with the results of studies on the application of current technical aids in innovative professional education. Since each area of manpower training has its own specificity and goals, the authors propose highly effective solutions for organizing an educational process that fully take into account the specific features of professional education. These technologies substantially expand access to educational resources, which is of great importance for a strategy of continuing professional development.
Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Kincses, Zsigmond Tamás
Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to drive preferably the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with the white matter integrity as measured by diffusion tensor imaging. The psychophysiological data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction.
Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.
Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…
Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador
Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.
Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof
Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration.
Eg, Ragnhild; Behne, Dawn M.
In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli. PMID:26082738
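The two summary quantities such studies report, a point of subjective simultaneity (PSS) and a window of temporal integration, can be sketched with simple moment estimates. The SOA grid and response proportions below are hypothetical, and the moment-based estimator is a deliberate simplification of the curve fitting these experiments typically use:

```python
import numpy as np

# Hypothetical synchrony-judgment data: proportion of "synchronous"
# responses at each audiovisual offset (SOA in ms; sign convention is
# arbitrary here, e.g. negative = audio leads).
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.90, 0.55, 0.15])

# Moment-based estimates: the PSS is the response-weighted mean SOA,
# and the window width is the corresponding weighted SD.
w = p_sync / p_sync.sum()
pss = (w * soas).sum()
width = np.sqrt((w * (soas - pss) ** 2).sum())

print(pss, width)   # PSS slightly offset from 0 ms; width on the order of 100+ ms
```

A fuller treatment would fit a Gaussian or psychometric function per participant, which is also why such experiments need the larger samples the authors call for.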
Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho
Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
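The error metric used above, RMSE between real and predicted respiratory traces, reduces to a one-liner; the sample values below are hypothetical:

```python
import math

def rmse(real, predicted):
    """Root mean square error between measured and predicted samples."""
    assert len(real) == len(predicted)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(real, predicted)) / len(real))

# Hypothetical respiratory displacement samples (e.g., in mm).
real      = [0.0, 1.0, 2.0, 1.0, 0.0]
predicted = [0.0, 1.5, 2.0, 0.5, 0.0]

print(rmse(real, predicted))  # ~0.316
```

In the study's design, this value would be computed once per prediction-time condition, with and without AV biofeedback, and the pairs compared.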
Space layout and work flow patterns in the Audiovisual Center at Purdue University were studied with respect to effective space utilization and the need for planning space requirements in relationship to the activities being performed. Space and work areas were reorganized to facilitate the flow of work and materials between areas, and equipment…
Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia
The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…
Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.
Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…
Lewkowicz, David J.; Flom, Ross
Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…
Purpose: It has recently been reported (e.g., V. van Wassenhove, K. W. Grant, & D. Poeppel, 2005) that audiovisual (AV) presented speech is associated with an N1/P2 auditory event-related potential (ERP) response that is lower in peak amplitude compared with the responses associated with auditory only (AO) speech. This effect was replicated.…
Pilling, Michael; Thomas, Sharon
Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…
Trzebiatowski, Gregory, Ed.
The terms appearing in this glossary have been specifically selected for use by educators from a larger text, which was prepared by the Commission on Definition and Terminology of the Department of Audiovisual Instruction of the National Education Association. Specialized areas covered in the glossary include audio reproduction, audiovisual…
Alker, Henry A.; And Others
Two studies are reported, each of which achieves personality change with both audiovisual self-confrontation (AVSC) and supportive, nondirective interviews. The first study used Ericksonian identity achievement as a dependent variable. Sixty-one male subjects were measured using Anne Constantinople's inventory. The results of this study…
Bidd, Donald; And Others
This overview of PRECIS indexing system use by the National Film Board of Canada covers reasons for its choice, challenge involved in subject analysis and indexing of audiovisual documents, the methodology and software used to process PRECIS records, the resulting catalog subject indexes, and user reaction. Twenty-one references are cited. (EJS)
National Committee for Audio-Visual Aids in Education, London (England).
The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…
Navarra, Jordi; Hartcher-O'Brien, Jessica; Piazza, Elise; Spence, Charles
The brain adapts to asynchronous audiovisual signals by reducing the subjective temporal lag between them. However, it is currently unclear which sensory signal (visual or auditory) shifts toward the other. According to the idea that the auditory system codes temporal information more precisely than the visual system, one should expect to find some temporal shift of vision toward audition (as in the temporal ventriloquism effect) as a result of adaptation to asynchronous audiovisual signals. Given that visual information gives a more exact estimate of the time of occurrence of distal events than auditory information (due to the fact that the time of arrival of visual information regarding an external event is always closer to the time at which this event occurred), the opposite result could also be expected. Here, we demonstrate that participants' speeded reaction times (RTs) to auditory (but, critically, not visual) stimuli are altered following adaptation to asynchronous audiovisual stimuli. After receiving “baseline” exposure to synchrony, participants were exposed either to auditory-lagging asynchrony (VA group) or to auditory-leading asynchrony (AV group). The results revealed that RTs to sounds became progressively faster (in the VA group) or slower (in the AV group) as participants' exposure to asynchrony increased, thus providing empirical evidence that speeded responses to sounds are influenced by exposure to audiovisual asynchrony. PMID:19458252
Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G.; Innes-Brown, Hamish; Shivdasani, Mohit N.; Paolini, Antonio G.
The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-,…
Magnan, Annie; Ecalle, Jean
This study tested the effectiveness of audio-visual training in the discrimination of the phonetic feature of voicing on the recognition of written words by young children deemed to be at risk of dyslexia (experiment 1) as well as on dyslexic children's phonological skills (experiment 2). In addition, the third experiment studied the effectiveness of…
Lissit, Robert; And Others
At the direction of President Carter, a year-long study of government audiovisual programs was conducted out of the Office of Telecommunications Policy in the Executive Office of the President. The programs in 16 departments and independent agencies, and the departments of the Army, Navy, and Air Force have been reviewed to identify the scope of…
Sherry, Annette C.; Strojny, Allan
Discussion of the design of carts for moving audiovisual equipment in schools emphasizes safety factors. Topics addressed include poor design of top-heavy carts that has led to deaths and injuries; cart navigation; new manufacturing standards; and an alternative, safer cart design. (Contains 13 references.) (LRW)
Hitchens, Howard, Ed.
Designed to serve as a reference and source of ideas on the use of slides in combination with audiocassettes for presentation design, this book of readings from Audiovisual Instruction magazine includes three papers providing basic tips on putting together a presentation, five articles describing techniques for improving the visual images, five…
Bahrani, Taher; Sim, Tam Shu
The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…
Leighton, Lauren G.
An audio-visual course in Russian culture is given at Northern Illinois University. A collection of 4-5,000 color slides is the basis for the course, with lectures focussed on literature, philosophy, religion, politics, art and crafts. Acquisition, classification, storage and presentation of slides, and organization of lectures are discussed. (CHK)
Drake, Miriam A.; Baker, Martha
A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…
Cassileth, Barrie R.; And Others
Four audiovisual programs about cancer and cancer treatment were evaluated. Cancer patients, their families, and friends were asked to complete questionnaires before and after watching a program to determine the effects of the program on their knowledge of cancer, anxiety levels, and perceived ability to communicate with the staff. (Author/MLW)
Minnesota State Dept. of Education, St. Paul. Div. of Instruction.
This guide to resource materials on environmental education is in two sections: 1) Selected Bibliography of Printed Materials, compiled in April, 1970; and, 2) Audio-Visual materials, Films and Filmstrips, compiled in February, 1971. 99 book annotations are given with an indicator of elementary, junior or senior high school levels. Other book…
Peace Corps, Washington, DC. Information Collection and Exchange Div.
This packet contains three handouts on training theory and the use of audiovisual aids, as well as a section on materials and presentation techniques for use by community development workers concerned with exchanging information and working with the people in a community. The first handout, "Communication in Development," briefly…
Describes the functions and possible uses of visual and audiovisual aids, including wall pictures, magnetic and flannelboard pictures, filmstrips, slides, and films. Presents exercises and exercise types for each individual medium, using English examples. (Text is in German.) (IFS/WGA)
Nichols, Emily S; Grahn, Jessica A
Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity, MMN) as well as later stages (the P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration.
Bishop, Laura; Goebl, Werner
Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well.
Yokosawa, Kazuhiko; Kanaya, Shoko
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.
The administration of drugs is often subject to error when performed by geriatric patients, resulting in reduced therapeutic effects. In particular, the correct administration technique for different dosage forms is a problem in elderly patients with limited audiovisual and ergonomic abilities. In addition to physician and carer intervention, community pharmacists can also contribute to solving outpatient problems of this kind at the time of issuing the drugs. This should preferably be done in collaboration with the respective medical practice. Some of the most common problems in drug administration, as well as the corresponding solutions offered by the pharmacist, are featured in the present paper. Inhalers, injection devices and ophthalmic solutions often cause difficulties for geriatric patients, as do even simple dosage forms such as drops. Pharmacy-based solutions to these problems include for instance: taking over the assembly of complex devices comprising multiple components, instructing patients in the proper use of additional dosing aids or adapting the administration technique to the patient's abilities.
Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.
Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately
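The just noticeable difference (JND) reported above can be read off a psychometric function. Below is a minimal sketch of one common approach (linear interpolation between tested SOAs); the response proportions are invented for illustration and are not the study's data, and the authors' actual fitting procedure may differ.

```python
def soa_at(soas, probs, target):
    """SOA at which the response proportion crosses `target` (linear interp)."""
    pts = list(zip(soas, probs))
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if p0 <= target <= p1:
            return s0 + (target - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("target not bracketed by the measured proportions")

# Hypothetical temporal-order-judgment data: proportion of "visual first"
# responses at each SOA in ms (negative = auditory led, positive = visual led).
soas  = [-200, -100, -50, 0, 50, 100, 200]
probs = [0.05, 0.20, 0.35, 0.52, 0.70, 0.85, 0.95]

pss = soa_at(soas, probs, 0.50)                                    # ~ -5.9 ms
jnd = (soa_at(soas, probs, 0.75) - soa_at(soas, probs, 0.25)) / 2  # 75.0 ms
```

The JND here is half the interquartile spread of the psychometric function, so a 75 ms value for these made-up proportions would fall at the low end of the 77-122 ms range the study reports for its rats.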
... more in both quiet and noisy situations. Hearing aids help people who have hearing loss from damage ... your doctor. There are different kinds of hearing aids. They differ by size, their placement on or ...
AIDS (acquired immune deficiency syndrome) is caused by HIV (human immunodeficiency virus), and is a syndrome that ... life-threatening illnesses. There is no cure for AIDS, but treatment with antiviral medicine can suppress symptoms. ...
Alm, Magnus; Behne, Dawn
Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.
Crowley, C M
An audiovisual loan program developed by the library of the College of Medicine and Dentistry of New Jersey is described. This program, supported by an NLM grant, has circulated audiovisual software from CMDNJ to libraries since 1974. Project experiences and statistics reflect the great demand for audiovisuals by health science libraries and demonstrate that a borrowing system following the pattern of traditional interlibrary loan can operate effectively and efficiently to serve these needs.
Dick, Anthony Steven; Solodkin, Ana; Small, Steven L
Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the neurobiological substrate in the child compares to the adult is unknown. In particular, developmental differences in the network for audiovisual speech comprehension could manifest through the incorporation of additional brain regions, or through different patterns of effective connectivity. In the present study we used functional magnetic resonance imaging and structural equation modeling (SEM) to characterize the developmental changes in network interactions for audiovisual speech comprehension. The brain response was recorded while children 8- to 11-years-old and adults passively listened to stories under audiovisual (AV) and auditory-only (A) conditions. Results showed that in children and adults, AV comprehension activated the same fronto-temporo-parietal network of regions known for their contribution to speech production and perception. However, the SEM network analysis revealed age-related differences in the functional interactions among these regions. In particular, the influence of the posterior inferior frontal gyrus/ventral premotor cortex on supramarginal gyrus differed across age groups during AV, but not A speech. This functional pathway might be important for relating motor and sensory information used by the listener to identify speech sounds. Further, its development might reflect changes in the mechanisms that relate visual speech information to articulatory speech representations through experience producing and perceiving speech.
Ho, Cristy; Gray, Rob; Spence, Charles
Many studies now suggest that optimal multisensory integration sometimes occurs under conditions where auditory and visual stimuli are presented asynchronously (i.e. at asynchronies of 100 ms or more). Such observations lead to the suggestion that participants' speeded orienting responses might be enhanced following the presentation of asynchronous (as compared to synchronous) peripheral audiovisual spatial cues. Here, we report a series of three experiments designed to investigate this issue. Upon establishing the effectiveness of bimodal cuing over the best of its unimodal components (Experiment 1), participants had to make speeded head-turning or steering (wheel-turning) responses toward the cued direction (Experiment 2), or an incompatible response away from the cue (Experiment 3), in response to random peripheral audiovisual stimuli presented at stimulus onset asynchronies ranging from -100 to 100 ms. Race model inequality analysis of the results (Experiment 1) revealed different mechanisms underlying the observed multisensory facilitation of participants' head-turning versus steering responses. In Experiments 2 and 3, the synchronous presentation of the component auditory and visual cues gave rise to the largest facilitation of participants' response latencies. Intriguingly, when the participants had to subjectively judge the simultaneity of the audiovisual stimuli, the point of subjective simultaneity occurred when the auditory stimulus lagged behind the visual stimulus by 22 ms. Taken together, these results appear to suggest that the maximally beneficial behavioural (head and manual) orienting responses resulting from peripherally presented audiovisual stimuli occur when the component signals are presented in synchrony. These findings suggest that while the brain uses precise temporal synchrony in order to control its orienting responses, the system that the human brain uses to consciously judge synchrony appears to be less fine tuned.
... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or...-wide, clear captioning standards, procedures, and responsibilities. (e) Maintain current and...
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion.
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
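The distributional learning at the heart of the GMM simulations can be sketched with a hand-rolled two-component EM fit on a single synthetic cue dimension. This is only an illustrative reduction, not the authors' model: the category means (0.0 and 5.0) are invented, and the paper's simulations jointly weight auditory and visual cue dimensions rather than one.

```python
import math
import random

def em_gmm_1d(xs, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with plain EM."""
    mu = [min(xs), max(xs)]      # crude but effective initialisation
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate mixing weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-3)
            w[k] = nk / len(xs)
    return mu, var, w

random.seed(1)
# Synthetic cue values (e.g. a VOT-like auditory cue) drawn from two
# phonological categories; the means 0.0 and 5.0 are purely illustrative.
xs = ([random.gauss(0.0, 1.0) for _ in range(300)]
      + [random.gauss(5.0, 1.0) for _ in range(300)])
mu, var, w = em_gmm_1d(xs)
```

With no category labels, EM recovers two components near the generating means, which is the sense in which "distributional statistics" alone can yield phonological categories; extending this to audiovisual integration amounts to fitting per-cue distributions and weighting each cue by its reliability.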
Denison, Rachel N.; Driver, Jon; Ruff, Christian C.
Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
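Detecting audio-visual relatedness from shared temporal structure (rather than strict synchrony) can be sketched as counting event co-occurrences across candidate lags. The streams below are synthetic stand-ins, not the study's stimuli, and the hit-count score is a simplification of whatever matching statistic human observers actually use.

```python
import random

def best_lag(aud, vis, max_lag):
    """Return the lag (in frames) at which the auditory and visual event
    streams share the most co-occurring events, plus the hit count."""
    best = (None, -1)
    for lag in range(-max_lag, max_lag + 1):
        hits = sum(1 for t, e in enumerate(aud)
                   if e and 0 <= t + lag < len(vis) and vis[t + lag])
        if hits > best[1]:
            best = (lag, hits)
    return best

random.seed(0)
# A stochastic (irregular) event stream, and a visual copy delayed 3 frames:
# rich temporal patterning makes the correct alignment stand out sharply.
aud = [int(random.random() < 0.3) for _ in range(200)]
vis = [0, 0, 0] + aud[:-3]

lag, hits = best_lag(aud, vis, max_lag=10)   # recovers lag = 3
```

A regular, rhythmic stream would produce near-ties at every multiple of its period, which is one intuition for why the study found higher matching sensitivity with stochastic streams than with predictable ones.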
Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R.; Tadin, Duje
For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance. PMID:26509795
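The delay this study exploits follows directly from the physics: light arrival is effectively instantaneous at everyday distances, so the audiovisual lag is essentially the acoustic travel time. A back-of-envelope sketch (343 m/s assumes air at roughly 20 °C):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def av_lag_ms(distance_m):
    """Audio lag relative to light for an event at the given distance.
    Light travel time (~3.3 ns per metre) is negligible and ignored."""
    return distance_m / SPEED_OF_SOUND * 1000.0

print(round(av_lag_ms(34.3), 1))   # an event 34.3 m away lags by 100.0 ms
```

At roughly 0.34 m of distance per millisecond of lag, the sound delays in question are ubiquitous in natural scenes, which is what makes them a plausible ordinal distance cue.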
Simon, David M.; Noel, Jean-Paul; Wallace, Mark T.
Asynchronous arrival of multisensory information at the periphery is a ubiquitous property of signals in the natural environment due to differences in the propagation time of light and sound. Rapid adaptation to these asynchronies is crucial for the appropriate integration of these multisensory signals, which in turn is a fundamental neurobiological process in creating a coherent perceptual representation of our dynamic world. Indeed, multisensory temporal recalibration has been shown to occur at the single trial level, yet the mechanistic basis of this rapid adaptation is unknown. Here, we investigated the neural basis of rapid recalibration to audiovisual temporal asynchrony in human participants using a combination of psychophysics and electroencephalography (EEG). Consistent with previous reports, participant’s perception of audiovisual temporal synchrony on a given trial (t) was influenced by the temporal structure of stimuli on the previous trial (t−1). When examined physiologically, event related potentials (ERPs) were found to be modulated by the temporal structure of the previous trial, manifesting as late differences (>125 ms post second-stimulus onset) in central and parietal positivity on trials with large stimulus onset asynchronies (SOAs). These findings indicate that single trial adaptation to audiovisual temporal asynchrony is reflected in modulations of late evoked components that have previously been linked to stimulus evaluation and decision-making. PMID:28381993
Kaganovich, Natalya; Schumaker, Jennifer
Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception.
Turi, Marco; Karaminis, Themelis; Pellicano, Elizabeth; Burr, David
Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication. PMID:26899367
Piwek, Lukasz; Pollick, Frank; Petrini, Karin
Audiovisual perception of emotions has been typically examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask if the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task, as in Experiment 1, while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased participants weighted more the visual cue in their emotional judgments. This in turn translated in increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity. PMID:26005430
The effects of audiovisual communications on the emotional and psychological well-being of participants in the legal system have not been previously examined. Using as a framework for analysis what Slobogin (1996) calls internal balancing (of therapeutic versus antitherapeutic effects) and external balancing (of therapeutic jurisprudence [TJ] effects versus effects on other legal values), this brief paper discusses three examples that suggest the complexity of evaluating courtroom audiovisuals in TJ terms. In each instance, audiovisual displays that are admissible based on their arguable probative or explanatory value - day-in-the-life movies, victim impact videos, and computer simulations of litigated events - might well reduce stress and thus improve the psychological well-being of personal injury plaintiffs, survivors, and jurors, respectively. In each situation, however, other emotional and cognitive effects may prove antitherapeutic for the target or other participants, and/or may undermine other important values including outcome accuracy, fairness, and even the conception of the legal decision maker as a moral actor.
Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
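The statistical-learning mechanism this abstract describes can be illustrated with a toy expectation-maximization fit. The sketch below is not the authors' model: the cue values, cluster locations, and two-category structure are invented for illustration, and a spherical two-component mixture stands in for the GMMs used in the paper.

```python
import math
import random

def em_gmm_2comp(data, iters=50):
    """Fit a 2-component spherical Gaussian mixture to 2-D points via EM."""
    mu = [list(min(data)), list(max(data))]  # init means from extreme points
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: per-point responsibilities under each component
        resp = []
        for x in data:
            p = []
            for k in range(2):
                d2 = sum((xi - mi) ** 2 for xi, mi in zip(x, mu[k]))
                p.append(w[k] * math.exp(-d2 / (2 * var[k])) / (2 * math.pi * var[k]))
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate mixing weights, means, and spherical variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = [sum(r[k] * x[d] for r, x in zip(resp, data)) / nk
                     for d in range(2)]
            var[k] = sum(r[k] * sum((xi - mi) ** 2 for xi, mi in zip(x, mu[k]))
                         for r, x in zip(resp, data)) / (2 * nk)
    return mu, var, w

# Simulated (auditory cue, visual cue) pairs for two hypothetical
# phonological categories, clustered near (0, 0) and (1, 1).
random.seed(1)
data = ([(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(100)]
        + [(random.gauss(1, 0.1), random.gauss(1, 0.1)) for _ in range(100)])
mu, var, w = em_gmm_2comp(data)
mu.sort()
print(mu)  # one mean near [0, 0], the other near [1, 1]
```

The learned component means recover the two categories from the joint distribution of cues alone, which is the sense in which distributional statistics can drive acquisition of multimodal correspondences.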
Kim, Kwang Jae; Han, Sung H; Yun, Myung Hwan; Kwahk, Jiyoung
A systematic modeling approach to describing, prescribing, and predicting usability of a product has been presented. Given the evaluation results of the usability dimension (UD) and the measurement of the product's design variables, referred to as the human interface elements (HIEs), the approach enables one to systematically assess the relationship between the UD and HIEs. The assessed relationship is called a usability model. Once built, such a usability model can relate, in a quantitative manner, the HIEs directly to the UDs, and thus can serve as an effective aid to designers by evaluating and predicting the usability of an existing or hypothetical product. A usability model for elegance of audiovisual consumer electronic products has been demonstrated.
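A quantitative model relating design variables to a usability dimension, of the kind described above, can be sketched as an ordinary least-squares fit. The HIE variables, the "elegance" scores, and the linear form below are hypothetical stand-ins, not the authors' actual model:

```python
def fit_linear(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)]
         for r in range(p)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    for col in range(p):  # forward elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):  # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# Hypothetical data: two human interface elements (say, knob size and
# panel contrast) and an elegance score generated as 2 + 0.5*h1 - 0.3*h2.
rows = [(h1, h2) for h1 in range(3) for h2 in range(3)]
X = [[1.0, float(h1), float(h2)] for h1, h2 in rows]
y = [2 + 0.5 * h1 - 0.3 * h2 for h1, h2 in rows]
b0, b1, b2 = fit_linear(X, y)
print(round(b0, 3), round(b1, 3), round(b2, 3))  # 2.0 0.5 -0.3
```

Once fitted, such a model predicts the usability score of a hypothetical product directly from its HIE values, which is how it can serve as a design aid.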
Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko
Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age-, sex- and IQ-matched controls. When a voice saying /p/ was presented with a face…
Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.
Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…
Bahrani, Taher; Sim, Tam Shu
In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…
To enable a potential user to select the proper medium for his message, buy the appropriate equipment, and take proper care of it, this guide provides an accumulation of questions, answers, and tips on getting the most out of audiovisual buying decisions. Common questions about audiovisual equipment are discussed, along with more detailed…
Hsia, H. J.
In an attempt to ascertain the facilitating functions of audiovisual between-channel redundancy in information processing, a series of audiovisual experiments alternating auditory and visual as the dominant and redundant channels was conducted. As predicted, results generally supported the between-channel redundancy when input (stimulus) was…
Kim, Yong-Jin; Chang, Nam-Kee
Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…
Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle
Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…
Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…
Stolovitch, Harold D.; Bordeleau, Pierre
This paper contains a description of the cross-over type of experimental design as well as a case study of its use in field testing audiovisual materials related to teaching handicapped children. Increased efficiency is an advantage of the cross-over design, while difficulty in selecting similar format audiovisual materials for field testing is a…
Krahmer, Emiel; Swerts, Marc
We describe two experiments on signaling and detecting uncertainty in audiovisual speech by adults and children. In the first study, utterances from adult speakers and child speakers (aged 7-8) were elicited and annotated with a set of six audiovisual features. It was found that when adult speakers were uncertain they were more likely to produce…
Kellerhouse, Kenneth; And Others
Approximately 25 sources of audiovisual materials pertaining to the Iroquois and other northeastern American Indian tribes are listed according to type of audiovisual medium. Among the less-common media are recordings of Iroquois music and do-it-yourself reproductions of Iroquois artifacts. Prices are given where applicable. (BR)
National Education Association, Washington, DC.
Intended to inform school board administrators and teachers of the current (1958) thinking on audio-visual instruction for use in planning new buildings, purchasing equipment, and planning instruction. Attention is given the problem of overcoming obstacles to the incorporation of audio-visual materials into the curriculum. Discussion includes--(1)…
... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or video media, from accidental or deliberate alteration or erasure. (c) If different versions of audiovisual productions (e.g., short and long versions or foreign-language versions) are prepared, keep...
Halas, John; Martin-Harris, Roy
Intended for use by persons in developing countries responsible for initiating or expanding the use of audiovisual facilities and techniques in industry, this manual is designed for those who have limited background in audiovisuals but need detailed information about how certain techniques may be employed in an economical, efficient way. Part one,…
Ely, Donald P.
Because no published glossary of audiovisual terms has yet gained international currency, there is a need to: (1) explore international acceptance of a list of audiovisual terms and definitions; (2) review current efforts to do so; (3) propose criteria for acceptable terms and definitions; and (4) recommend procedures for acceptance of…
Brooke, Martha L.; And Others
In 1973, the National Medical Audiovisual Center undertook the production of several audiovisual teaching units, each addressing a single-concept, using a team approach. The production team on the unit "Left Ventricle Catheterization" were a physiologist acting as content specialist, an artist and film producer as production specialist,…
Finn, James D.; Weintraub, Royd
The Medical Information Project's (MIP) purpose, to select the right type of audiovisual equipment for communicating new medical information to general practitioners of medicine, was hampered by numerous difficulties. There is a lack of uniformity and standardization in audiovisual equipment that amounts to chaos. There is no evaluative literature on…
Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel
The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No
Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo
The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual clues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305
Petrini, Karin; Dahl, Sofia; Rocchesso, Davide; Waadeland, Carl Haakon; Avanzini, Federico; Puce, Aina; Pollick, Frank E
We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos x three accents x nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions x nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in experts' ability to detect asynchrony, especially for slower drumming tempos. In Experiment 2 an increase in sensitivity to asynchrony was found for incongruent stimuli; this increase, however, is attributable only to the novice group. Altogether the results indicated that through musical practice we learn to ignore variations in stimulus characteristics that otherwise would affect our multisensory integration processes.
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P.; Stufflebeam, Steven
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as/ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice/ ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. PMID:19780040
The present study investigated the influence of an auditory tone on the localization of visual objects in the stream/bounce display (SBD). In this display, two identical visual objects move toward each other, overlap, and then return to their original positions. These objects can be perceived as either streaming through or bouncing off each other. In this study, the closest distance between object centers on opposing trajectories and tone presentation timing (none, 0 ms, ± 90 ms, and ± 390 ms relative to the instant for the closest distance) were manipulated. Observers were asked to judge whether the two objects overlapped with each other and whether the objects appeared to stream through, bounce off each other, or reverse their direction of motion. A tone presented at or around the instant of the objects’ closest distance biased judgments toward “non-overlapping,” and observers overestimated the physical distance between objects. A similar bias toward direction change judgments (bounce and reverse, not stream judgments) was also observed, which was always stronger than the non-overlapping bias. Thus, these two types of judgments were not always identical. Moreover, another experiment showed that it was unlikely that this observed mislocalization could be explained by other previously known mislocalization phenomena (i.e., representational momentum, the Fröhlich effect, and a turn-point shift). These findings indicate a new example of crossmodal mislocalization, which can be obtained without temporal offsets between audiovisual stimuli. The mislocalization effect is also specific to a more complex stimulus configuration of objects on opposing trajectories, with a tone that is presented simultaneously. The present study promotes an understanding of relatively complex audiovisual interactions beyond simple one-to-one audiovisual stimuli used in previous studies. PMID:27111759
Yamamoto, Shinya; Miyazaki, Makoto; Iwano, Takayuki; Kitazawa, Shigeru
After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when the lag adaptation was fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.
Pitt, William D.
The project consisted of making a multi-level teaching film titled "Rocks and Minerals of the Ouachita Mountains," which runs for 25 minutes and is in color. The film was designed to be interesting to earth science students from junior high to college, and consists of dialogue combined with motion pictures of charts, sequential diagrams, outcrops,…
underground movement--filmed during World War II and never before seen. D-16. Part 7. Suicide Run to Murmansk. The story of a massacred convoy on the North... ruthless, bullying, and cajoling. Bismarck's role as the engineer of the events that achieved the unification of Germany is charted. D-3. 3. BRADLEY, OMAR... took Berlin on the day that Hitler committed suicide in a deserted bunker. D-7. (SA VPIN 82)89). CHAPTER FOUR Unit Histories 1. U.S. ARMIES. Items
Gammisch, Sue, Comp.
This catalog contains an inventory of 16mm films, filmstrips, film loops, slide programs, records, and publications about the marine sciences and sea life that are available from VIMS/Sea Grant Marine Education Center; information on the borrowing of the AV materials is included, as well as prices for books and leaflets. The entries are listed…
... hair cells (outer and inner rows). When the vibrations move through this fluid, the tiny outer hair ... ear to the brain. Hearing aids intensify sound vibrations that the damaged outer hair cells have trouble ...
Barnard, W. Robert, Ed.
Provides evaluations of several aids for teaching chemistry. Included are The Use of Chemical Abstracts, Practical Technical Writing, Infrared Spectroscopy Programs, and a film titled "You Can't Go Back." (RH)
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application.
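The core idea of the abstract above, reducing uncertainty about a source's position by combining bearing-only measurements taken from different robot positions, can be sketched with a minimal particle filter. Everything here (the geometry, the Gaussian bearing-noise model, and all parameter values) is an illustrative assumption, not the authors' implementation:

```python
import math
import random

def localize_source(robot_xs, bearings, n=2000, bearing_sd=0.1):
    """Estimate a static 2-D source position from noisy bearing measurements
    taken at several robot positions along the x-axis (particle filter)."""
    random.seed(7)
    # particles spread uniformly over a plausible search area
    parts = [(random.uniform(-5.0, 5.0), random.uniform(0.5, 10.0))
             for _ in range(n)]
    for rx, z in zip(robot_xs, bearings):
        # weight each particle by the likelihood of the observed bearing
        ws = [math.exp(-(math.atan2(py, px - rx) - z) ** 2
                       / (2 * bearing_sd ** 2))
              for px, py in parts]
        # resample in proportion to weight, then jitter to keep diversity
        parts = random.choices(parts, weights=ws, k=n)
        parts = [(px + random.gauss(0, 0.05), py + random.gauss(0, 0.05))
                 for px, py in parts]
    ex = sum(px for px, _ in parts) / n
    ey = sum(py for _, py in parts) / n
    return ex, ey

# Source at (2, 4); the robot measures its bearing from four positions.
true_src = (2.0, 4.0)
robot_xs = [-3.0, -1.0, 1.0, 3.0]
bearings = [math.atan2(true_src[1], true_src[0] - rx) for rx in robot_xs]
est = localize_source(robot_xs, bearings)
print(est)  # close to (2, 4)
```

A single bearing only constrains the source to a ray; because the robot moves, successive rays intersect and the particle cloud collapses onto the source, which is why mobility yields distance as well as azimuth estimates.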
Silva, Carlos César; Mendonça, Catarina; Mouta, Sandra; Silva, Rosa; Campos, José Creissac; Santos, Jorge
Background: Due to their different propagation times, visual and auditory signals from external events arrive at the human sensory receptors with a disparate delay. This delay consistently varies with distance, but, despite such variability, most events are perceived as synchronic. There is, however, contradictory data and claims regarding the existence of compensatory mechanisms for distance in simultaneity judgments. Principal Findings: In this paper we have used familiar audiovisual events – a visual walker and footstep sounds – and manipulated the number of depth cues. In a simultaneity judgment task we presented a large range of stimulus onset asynchronies corresponding to distances of up to 35 meters. We found an effect of distance over the simultaneity estimates, with greater distances requiring larger stimulus onset asynchronies, and vision always leading. This effect was stronger when both visual and auditory cues were present but was interestingly not found when depth cues were impoverished. Significance: These findings reveal that there should be an internal mechanism to compensate for audiovisual delays, which critically depends on the depth information available. PMID:24244617
Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon
The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets. PMID:26778952
Ionescu, Bogdan; Seyerlehner, Klaus; Rasche, Christoph; Vertan, Constantin; Lambert, Patrick
We propose an audio-visual approach to video genre classification using content descriptors that exploit audio, color, temporal, and contour information. Audio information is extracted at block-level, which has the advantage of capturing local temporal information. At the temporal structure level, we consider action content in relation to human perception. Color perception is quantified using statistics of color distribution, elementary hues, color properties, and relationships between colors. Further, we compute statistics of contour geometry and relationships. The main contribution of our work lies in harnessing the descriptive power of the combination of these descriptors in genre classification. Validation was carried out on over 91 h of video footage encompassing 7 common video genres, yielding average precision and recall ratios of 87% to 100% and 77% to 100%, respectively, and an overall average correct classification of up to 97%. Also, experimental comparison as part of the MediaEval 2011 benchmarking campaign demonstrated the efficiency of the proposed audio-visual descriptors over other existing approaches. Finally, we discuss a 3-D video browsing platform that displays movies using feature-based coordinates and thus regroups them according to genre.
Cook, Laura A; Van Valkenburg, David L; Badcock, David R
The ability to make accurate audiovisual synchrony judgments is affected by the "complexity" of the stimuli: We are much better at making judgments when matching single beeps or flashes as opposed to video recordings of speech or music. In the present study, we investigated whether the predictability of sequences affects whether participants report that auditory and visual sequences appear to be temporally coincident. When we reduced their ability to predict both the next pitch in the sequence and the temporal pattern, we found that participants were increasingly likely to report that the audiovisual sequences were synchronous. However, when we manipulated pitch and temporal predictability independently, the same effect did not occur. By altering the temporal density (items per second) of the sequences, we further determined that the predictability effect occurred only in temporally dense sequences: If the sequences were slow, participants' responses did not change as a function of predictability. We propose that reduced predictability affects synchrony judgments by reducing the effective pitch and temporal acuity in perception of the sequences.
Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong
This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder.
Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Kuchenbuch, Anja; Pantev, Christo
Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical audiovisual events that were synchronous or asynchronous to various degrees. We hypothesized that long-term multisensory experience alters temporal audiovisual processing even for non-musical stimuli. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis of the audiovisual asynchronous response revealed three clusters of activation, including the ACC and the SFG, and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region covering the STG, the insula, and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results strongly indicate that long-term musical training alters basic audiovisual temporal processing at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events. PMID:24595014
Altieri, Nicholas; Wenger, Michael J.
Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of −12 dB, and S/N ratio of −18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity. PMID:24058358
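The capacity measure cited above (Townsend and Nozawa, 1995) compares the cumulative hazard of the audiovisual RT distribution against the sum of the two unisensory hazards. A minimal sketch of that computation, assuming raw RT samples per condition (the function names and the survivor-function clipping rule are illustrative, not taken from the paper):

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log S(t), where S is the
    survivor function of the response-time distribution."""
    rts = np.asarray(rts, dtype=float)
    surv = np.mean(rts > t)
    # Clip so H(t) stays finite past the slowest observed response
    # (an illustrative choice, not part of the original formulation).
    surv = max(surv, 1.0 / (len(rts) + 1))
    return -np.log(surv)

def capacity_coefficient(rt_av, rt_a, rt_v, t):
    """Townsend & Nozawa's C(t): >1 indicates super capacity (efficient
    integration), 1 unlimited capacity, <1 limited capacity."""
    return cumulative_hazard(rt_av, t) / (
        cumulative_hazard(rt_a, t) + cumulative_hazard(rt_v, t))
```

C(t) > 1 means responses to bimodal trials are faster than a parallel race of the unisensory channels would predict, which is the sense in which the study reports efficient integration at low auditory S/N ratios.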
Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong
Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, little is known about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alteration pattern was similar to that of younger adults with the expansion of SOA; however, older adults showed significantly delayed onset of the time window of integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in peak latency for V-preceded-A conditions. Our study suggests that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
Yin, Qinqing; Qiu, Jiang; Zhang, Qinglin; Wen, Xiaohui
This study used event-related potentials (ERPs) to investigate the electrophysiological correlates of cognitive conflict in audiovisual integration during an audiovisual task. ERP analyses revealed: (i) the anterior N1 and P1 were elicited in both matched and mismatched conditions and (ii) audiovisual mismatched answers elicited a more negative ERP deflection at 490 ms (N490) than matched answers. Dipole analysis of the difference wave (mismatched minus matched) localized the generator of the N490 to the posterior cingulate cortex, which may be involved in the control and modulation of conflict processing of Chinese characters when visual and auditory information is mismatched.
Guiraud, Jeanne A; Tomalski, Przemyslaw; Kushnerenko, Elena; Ribeiro, Helena; Davies, Kim; Charman, Tony; Elsabbagh, Mayada; Johnson, Mark H
The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating/ba/and the other/ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual/ga/- audio/ba/and the congruent visual/ba/- audio/ba/displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual/ba/- audio/ga/display compared with the congruent visual/ga/- audio/ga/display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
Rapela, Joaquin; Gramann, Klaus; Westerfield, Marissa; Townsend, Jeanne; Makeig, Scott
Selective attention contributes to perceptual efficiency by modulating cortical activity according to task demands. The majority of attentional research has focused on the effects of attention to a single modality, and little is known about the role of attention in multimodal sensory processing. Here we employ a novel experimental design to examine the electrophysiological basis of audio-visual attention shifting. We use electroencephalography (EEG) to study differences in brain dynamics between quickly shifting attention between modalities and focusing attention on a single modality for extended periods of time. We also address interactions between attentional effects generated by the attention-shifting cue and those generated by subsequent stimuli. The conclusions from these examinations address key issues in attentional research, including the supramodal theory of attention and the role of attention in foveal vision. The experimental design and analysis methods used here may suggest new directions in the study of the physiological basis of attention.
van Laarhoven, Thijs; Keetels, Mirjam; Schakel, Lemmy; Vroomen, Jean
Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip-read information that disambiguates noise-masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.
Lewkowicz, David J; Flom, Ross
Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked whether the voice and face went together (Experiment 1) or whether the desynchronized videos differed from the synchronized one (Experiment 2). Four-year-olds detected the 666-ms asynchrony, 5-year-olds detected the 666- and 500-ms asynchrony, and 6-year-olds detected all asynchronies. These results show that the A-V temporal binding window narrows slowly during early childhood and that it is still wider at 6 years of age than in older children and adults.
Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G; Innes-Brown, Hamish; Shivdasani, Mohit N; Paolini, Antonio G
The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-, 22-, 12-, and 9-dB across the different age groups were compared. Multisensory facilitation was greater in adults than in children, although performance for all age groups was affected by the presence of background noise. It is posited that changes in multisensory facilitation with increased auditory noise may be due to changes in attention bias.
Martin, M. S.; Martin, F.; Lambert, R.
The role of sensory stimulation in restrained rats was investigated. Both mixed audio-visual and pure sound stimuli, ineffective in themselves, were found to cause a significant increase in the incidence of restraint ulcers in Wistar rats.
de Groot, Jasper H B; Semin, Gün R; Smeets, Monique A M
Recent evidence suggests that humans can become fearful after exposure to olfactory fear signals, yet these studies have reported the effects of fear chemosignals without examining emotion-relevant input from traditional communication modalities (i.e., vision, audition). The question that we pursued here was therefore: How significant is an olfactory fear signal in the broader context of audiovisual input that either confirms or contradicts olfactory information? To test this, we manipulated olfactory (fear, no fear) and audiovisual (fear, no fear) information and demonstrated that olfactory fear signals were as potent as audiovisual fear signals in eliciting a fearful facial expression. Irrespective of confirmatory or contradictory audiovisual information, olfactory fear signals produced by senders induced fear in receivers outside of conscious access. These findings run counter to traditional views that emotions are communicated exclusively via visual and linguistic channels.
Mathur, J. C.
An Indian adult educator discusses the value of "pleasure-oriented" audiovisual adult education, the use of both commercial and subsidized films, television, and radio for their educational potential. He notes several production needs and techniques. (MF)
...) Photographic film and prints. The requirements in this paragraph apply to permanent, long-term temporary, and unscheduled audiovisual records. (1) General guidance. Keep all film in cold storage following guidance by...
Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002
Science Activities: Classroom Projects and Curriculum Ideas, 2007
This article describes 6 aids for science instruction, including (1) the use of fudge to represent lava; (2) the "Living by Chemistry" program, designed to make high school chemistry more accessible to a diverse pool of students without sacrificing content; (3) NOAA and NSTA's online coral reef teaching tool, a new web-based "science toolbox" for…
Martin, Joyce; Looney, Era
Designed for use in a self-paced, open-entry/open-exit vocational training program for a floriculture aide, this program guide is one of six for teachers of adult women offenders from a correctional institution. Module topic outlines and sample lesson plans are presented on eleven topics: occupational opportunities in the retail florist industry;…
Schwartz, Jean-Luc; Berthommier, Frédéric; Savariaux, Christophe
Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition due to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture by a non-speech visual input with exactly the same time course, providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relationships with recent neurophysiological data on audio-visual perception.
Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard
In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
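The contrast between the amplitude-based and phase-based metric families discussed above can be sketched with generic signal-processing tools; this is an illustrative comparison, not the authors' pipeline (the beta-band limits and window length are assumptions):

```python
import numpy as np
from scipy.signal import hilbert, coherence

def amplitude_envelope_correlation(x, y):
    """Amplitude-based metric: correlate the Hilbert envelopes of two
    band-limited signals (the family that detected integration here)."""
    env_x = np.abs(hilbert(x))
    env_y = np.abs(hilbert(y))
    return np.corrcoef(env_x, env_y)[0, 1]

def spectral_coherence(x, y, fs, band=(13.0, 30.0)):
    """Phase-sensitive metric: mean magnitude-squared coherence within
    a frequency band (beta by default, an assumed choice)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=256)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()
```

Two regions can share a slow power envelope without any consistent phase relation, in which case the first metric responds and the second does not, which is the dissociation the study reports between beta power correlations and phase-based measures.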
Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning
The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200ms over a wide central area, the second at 280-320ms over the fronto-central area, and a third at 380-440ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing.
Lee, HweeLing; Noppeney, Uta
Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the role of bottom-up physical (spectrotemporal structure) input and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals are currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior-anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): Bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this "ventral" processing stream, a "dorsal" circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and spectrotemporal structure in a dorsal fronto-temporal circuitry.
Yambe, Tomoyuki; Yoshizawa, Makoto; Fukudo, Shin; Fukuda, Hiroshi; Kawashima, Ryuta; Shizuka, Kazuhiko; Nanka, Shunsuke; Tanaka, Akira; Abe, Ken-ichi; Shouji, Tomonori; Hongo, Michio; Tabayashi, Kouichi; Nitta, Shin-ichi
pathophysiological reaction to the audiovisual stimulations. As for photosensitive epilepsy, it was reported to account for only 5-10% of all patients; therefore, in 90% or more of the patients who showed a morbid response, the cause could not be determined. The results of this study suggest that autonomic function was connected to the mental tendencies of the subjects. By examining such directivity, it is expected that subjects who show a morbid reaction to audiovisual stimulation can be screened beforehand.
Jongbloed, Harry J. L.
As the fourth part of a comparative study on the administration of audiovisual services in advanced and developing countries, this UNESCO-funded study reports on the African countries of Cameroun, Republic of Central Africa, Dahomey, Gabon, Ghana, Kenya, Libya, Mali, Nigeria, Rwanda, Senegal, Swaziland, Tunisia, Upper Volta and Zambia. Information…
National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.
The first edition of the National Audiovisual Center sales catalog (LI 003875) is updated by this supplement. Changes in price and order number, as well as deletions from the 1969 edition, are noted in this 1971 version. Purchase and rental information for the sound films and silent filmstrips is provided. The broad subject categories are:…
George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J. . E-mail: email@example.com
Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
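The study's residual-motion metric (the standard deviation of the respiratory displacement inside the gating window) can be sketched for displacement-based gating; the quantile-threshold gating rule below is an illustrative assumption, not the clinical system's implementation:

```python
import numpy as np

def displacement_gate(signal, duty_cycle):
    """Boolean gate admitting the fraction `duty_cycle` of samples with
    the lowest displacement, i.e., gating around end-exhale."""
    threshold = np.quantile(signal, duty_cycle)
    return signal <= threshold

def residual_motion(signal, duty_cycle):
    """Standard deviation of the respiratory displacement inside the
    gating window: the residual-motion metric used in the study."""
    gate = displacement_gate(signal, duty_cycle)
    return np.std(signal[gate])
```

Widening the duty cycle admits samples farther from end-exhale, so residual motion grows with duty cycle, consistent with the sharp increase the authors report above 50%.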
Short, R V
This article reviews a peer group Acquired Immunodeficiency Syndrome (AIDS) educational program at a university in Australia. Studies in the US have shown that most adolescents, although sexually active, do not believe they are likely to become infected with the Human Immunodeficiency Virus and therefore do not attempt to modify their sexual behavior. A first step in educating students is to introduce them to condoms and impress upon them the fact that condoms should be used at the beginning of all sexual relationships, whether homosexual or heterosexual. In this program third-year medical students were targeted, as they are effective communicators and disseminators of information to the rest of the student body. After class members blow up condoms, giving them a chance to handle various brands and observe the varying degrees of strength, statistical evidence about the contraceptive failure rate of condoms (0.6-14.7 per 100 woman-years) is discussed. Spermicides, such as nonoxynol-9 used in conjunction with condoms, are also discussed, as are condoms for women and the packaging and marketing of condoms, including those made from latex and from the caecum of sheep, the latter being of questionable effectiveness in preventing transmission of the virus. The care of terminal AIDS cases and current global and national statistics on AIDS are presented. The program also includes cash prizes for the best student essays on condom use, the distribution of condoms, condom key rings and T-shirts, and a student-run safe sex stand during orientation week. All of these activities are intended to involve students and attract the interest of the undergraduate community. Questionnaires administered to students at the end of the course revealed that the lectures were received favorably. Questionnaires administered to new medical and English students attending orientation week revealed that 72% of students thought the stand was a good idea and 81% and 83%, respectively, found it
Lalonde, Kaylah; Holt, Rachael Frush
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318
Girin, L; Schwartz, J L; Feng, G
A key problem for telecommunication or human-machine communication systems concerns speech enhancement in noise. In this domain, a certain number of techniques exist, all of them based on an acoustic-only approach--that is, the processing of the audio corrupted signal using audio information (from the corrupted signal only or additive audio information). In this paper, an audio-visual approach to the problem is considered, since it has been demonstrated in several studies that viewing the speaker's face improves message intelligibility, especially in noisy environments. A speech enhancement prototype system that takes advantage of visual inputs is developed. A filtering process approach is proposed that uses enhancement filters estimated with the help of lip shape information. The estimation process is based on linear regression or simple neural networks using a training corpus. A set of experiments assessed by Gaussian classification and perceptual tests demonstrates that it is indeed possible to enhance simple stimuli (vowel-plosive-vowel sequences) embedded in white Gaussian noise.
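The filter-estimation step described above (lip-shape features mapped to enhancement-filter parameters by linear regression over a training corpus) can be sketched on synthetic data; the feature set, band count, and the linear generative model are illustrative assumptions, not the paper's corpus:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_lip, n_bands = 200, 3, 8

# Hypothetical training corpus: lip-shape features (e.g., width, height,
# area) per video frame, and "oracle" per-band filter gains derived from
# clean speech. The linear generative model is purely illustrative.
lips = rng.random((n_frames, n_lip))
true_w = rng.random((n_lip, n_bands))
gains = lips @ true_w

# Least-squares estimate of the lip-to-gain mapping (the linear-regression
# variant from the abstract; the small-neural-network variant is analogous).
w_hat, *_ = np.linalg.lstsq(lips, gains, rcond=None)

def estimate_filter(lip_features):
    """Predict per-band enhancement gains from lip-shape features."""
    return lip_features @ w_hat

# Apply the predicted gains to one frame of noisy band energies.
noisy_bands = rng.random(n_bands)
enhanced = noisy_bands * estimate_filter(lips[0])
```

The point of the approach is that the visual channel is noise-free, so the estimated filter stays informative even when the acoustic signal is heavily corrupted.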
Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.
Earth system and climate impact research results point to the tremendous ecological, economic and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes to our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition rather relies on pioneers who define new role models, on change agents who mainstream the concept of sufficiency, and on narratives that make different futures appealing. For the research community to be able to provide viable sustainable transition pathways, an integration of physical constraints and societal dynamics is needed. Hence the necessary transition knowledge must be co-created by social science, natural science and society. To this end, the Climate Media Factory, itself a massively transdisciplinary venture, strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodologies, particular languages and knowledge levels of those involved differ, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way, with different levels of detail that provide entry points for users with different requirements. Two examples illustrate the advantages and restrictions of the approach.
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may run into difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, evaluations of the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches in real scenarios are lacking. Most tests have been conducted within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
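The Bayesian fusion idea summarized above can be sketched by multiplying Gaussian likelihoods from the audio and visual sensors over a grid of candidate directions; the angles, spreads, and grid below are invented for illustration, not taken from the paper:

```python
import math

# Bayesian fusion of audio and visual speaker-direction estimates:
# multiply the two likelihoods over a grid of candidate azimuths
# and normalize to obtain the posterior.

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

azimuths = range(-90, 91, 5)   # candidate directions (degrees)
audio  = [gaussian(a, mu=20, sigma=15) for a in azimuths]  # broad audio cue
visual = [gaussian(a, mu=10, sigma=5)  for a in azimuths]  # sharp visual cue

posterior = [p * q for p, q in zip(audio, visual)]
z = sum(posterior)
posterior = [p / z for p in posterior]   # normalize

best = max(zip(azimuths, posterior), key=lambda t: t[1])[0]
print(best)   # peak lies near the more reliable visual estimate
```

The sharper (more reliable) visual likelihood dominates the posterior, which is exactly why fusion outperforms either unimodal cue when one sensor degrades.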
Scannella, Sébastien; Causse, Mickaël; Chauveau, Nicolas; Pastor, Josette; Dehais, Frédéric
Auditory alarm misperception is one of the critical events that lead aircraft pilots to an erroneous flying decision. The rarity of these alarms, combined with their possible unreliability, may play a role in this misperception. In order to investigate this hypothesis, we manipulated both audiovisual conflict and sound rarity in a simplified landing task. Behavioral data and event-related potentials (ERPs) of thirteen healthy participants were analyzed. We found that the presentation of a rare auditory signal (i.e., an alarm), incongruent with visual information, led to a smaller amplitude of the auditory N100 (i.e., less negative) compared to the condition in which both signals were congruent. Moreover, the incongruity between the visual information and the rare sound did not significantly affect reaction times, suggesting that the rare sound was neglected. We propose that the lower N100 amplitude reflects an early visual-to-auditory gating that depends on the rarity of the sound. In complex aircraft environments, this early effect might be partly responsible for auditory alarm insensitivity. Our results provide a new basis for future aeronautic studies and the development of countermeasures.
Many events from daily life are audiovisual (AV). Handclaps produce both visual and acoustic signals that are transmitted in air and processed by our sensory systems at different speeds, reaching the brain's multisensory integration areas at different moments. Signals must somehow be associated in time to correctly perceive synchrony. This project aims at quantifying the mutual temporal attraction between senses and characterizing the different interaction modes depending on the offset. In every trial participants saw four beep-flash pairs regularly spaced in time, followed after a variable delay by a fifth event in the test modality (auditory or visual). A large range of AV offsets was tested. The task was to judge whether the last event came before/after what was expected given the perceived rhythm, while attending only to the test modality. Flashes were perceptually shifted in time toward beeps, the attraction being stronger for lagging than leading beeps. Conversely, beeps were not shifted toward flashes, indicating a nearly total auditory capture. The subjective timing of the visual component resulting from the AV interaction could easily be shifted forward but not backward in time, an intuitive constraint stemming from minimum visual processing delays. Finally, matching auditory and visual time-sensitivity with beeps embedded in pink noise produced very similar mutual attractions of beeps and flashes. Breaking the natural auditory preference for timing allowed vision to take over as well, showing that this preference is not hardwired. PMID:28207786
Armstrong, Alan; Issartel, Johann
Understanding how we synchronize our actions with stimuli from different sensory modalities plays a central role in helping to establish how we interact with our multisensory environment. Recent research has shown better performance with multisensory over unisensory stimuli; however, the type of stimuli used has mainly been auditory and tactile. The aim of this article was to expand our understanding of sensorimotor synchronization with multisensory audio-visual stimuli and compare these findings to their individual unisensory counterparts. This research also aims to assess the role of spatio-temporal structure for each sensory modality. The visual and/or auditory stimuli had either temporal or spatio-temporal information available and were presented to the participants in unimodal and bimodal conditions. Globally, the performance was significantly better for the bimodal compared to the unimodal conditions; however, this benefit was limited to only one of the bimodal conditions. In terms of the unimodal conditions, the level of synchronization with visual stimuli was better than auditory, and while there was an observed benefit with the spatio-temporal compared to temporal visual stimulus, this was not replicated with the auditory stimulus.
Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-aki; Nagamine, Takashi; Sekiyama, Kaoru
Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and Heschl’s gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407
Aerobic, anaerobic and facultative treatments, as well as bioremediation (land farming, air sparging, bio-cell, bio-reactor and phytoremediation), are explored with the aid of animation and electron microscope imagery.
Doyle-Thomas, Krissy A R; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B C
Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system.
Macaluso, E; George, N; Dolan, R; Spence, C; Driver, J
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on just temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), plus in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.
Curtis, J A; Davison, F M
The staff of the Quillen-Dishner College of Medicine Library cataloged 702 audiovisual titles between July 1, 1982, and June 30, 1983, using the OCLC database. This paper discusses the library's audiovisual collection and describes the method and scope of a study conducted during this project, the cataloging standards and conventions adopted, the assignment and use of NLM classification, the provision of summaries for programs, and the amount of staff time expended in cataloging typical items. An analysis of the use of OCLC for this project resulted in the following findings: the rate of successful searches for audiovisual copy was 82.4%; the error rate for records used was 41.9%; modifications were required in every record used; the Library of Congress and seven member institutions provided 62.8% of the records used. It was concluded that the effort to establish bibliographic control of audiovisuals is not widespread and that expanded and improved audiovisual cataloging by the Library of Congress and the National Library of Medicine would substantially contribute to that goal.
Lewkowicz, David J.
Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…
Taylor, Natalie; Isaac, Claire; Milne, Elizabeth
This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…
Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás
Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…
... copied as follows: (a) USIA audiovisual records prepared for dissemination abroad that NARA determines... audiovisual records prepared for dissemination abroad that NARA determines may have copyright protection or may contain copyrighted material are provided to you if you seek the release of such materials in...
Alm, Magnus; Behne, Dawn
Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy toward more visually dominated responses.
Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre
Digitalization of audio-visual resources combined with the performance of networks offers many possibilities which are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast, efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding and gives access to audio-visual resources in streaming mode.
Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role…
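The race-model test mentioned in this abstract (Miller's inequality) bounds the audiovisual response-time distribution by the sum of the unisensory distributions; a positive difference at some latency indicates integration beyond statistical facilitation. A minimal sketch with invented reaction times, not the study's data:

```python
# Test Miller's race model inequality: P(RT_av <= t) must not exceed
# P(RT_a <= t) + P(RT_v <= t) if the two channels merely race.

def cdf(rts, t):
    """Empirical cumulative probability of a response by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_violation(rt_av, rt_a, rt_v, t):
    """Positive value = violation of the race model at latency t."""
    return cdf(rt_av, t) - min(1.0, cdf(rt_a, t) + cdf(rt_v, t))

# Invented reaction times (ms) for illustration only.
rt_a  = [320, 350, 380, 410, 440]   # auditory-only trials
rt_v  = [360, 390, 420, 450, 480]   # visual-only trials
rt_av = [250, 270, 290, 330, 360]   # audiovisual trials

# Violations, when present, typically appear at early latencies.
print(race_violation(rt_av, rt_a, rt_v, t=300))
```

The geometric measure referred to in the abstract aggregates such differences across latencies; this sketch evaluates the inequality at a single time point.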
Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane
This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, features such as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies simple stimuli, e.g., simple tones, have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale) and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes, as well as revisiting some previous findings with more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment: 36 self-described professional musicians, 47 amateur musicians, and 36 non-musicians. Thirty-one subjects also claimed to have synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green or light gray rounded shapes; harsh timbres with red, yellow or dark gray sharp angular shapes; and timbres having elements of both softness and harshness with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows the design of substitution systems which might help the blind perceive shapes through timbre.
Adams, Wendy J
Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age. Younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4-11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps and vice versa. Furthermore, for all ages, the relative strength of these interactions was predicted by the relative reliabilities of the two modalities, in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality: they fully combined vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions. In contrast, switching between modalities reduces reliability. Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate, but followed a sub-optimal switching strategy, responding according to either visual or auditory…
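The reliability-weighted (optimal) integration this abstract describes has a simple closed form: each unimodal estimate is weighted by its inverse variance, and the combined variance is smaller than either unimodal variance. A minimal numerical sketch with made-up variances:

```python
# Maximum-likelihood cue combination: each unimodal estimate is
# weighted by its relative reliability (inverse variance).

def combine(x_a, var_a, x_v, var_v):
    """Optimally fuse an auditory and a visual estimate."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # auditory weight
    w_v = 1 - w_a                                  # visual weight
    x_av = w_a * x_a + w_v * x_v                   # combined estimate
    # Combined variance is lower than either unimodal variance.
    var_av = 1 / (1 / var_a + 1 / var_v)
    return x_av, var_av

# Example: audition reports 3 beeps (low variance), vision 2 flashes.
x_av, var_av = combine(x_a=3.0, var_a=0.25, x_v=2.0, var_v=1.0)
print(round(x_av, 2), round(var_av, 2))   # → 2.8 0.2 (audition dominates)
```

Under this rule the fused estimate is pulled toward the more reliable cue, which is the pattern the adult observers in the study exhibit.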
PUPULIM, Guilherme Luiz Lenzi; IORIS, Rafael Augusto; GAMA, Ricardo Ribeiro; RIBAS, Carmen Australia Paredes Marcondes; MALAFAIA, Osvaldo; GAMA, Mirnaluci
Background: The development of didactic means that allow complete and repetitive viewing of surgical procedures is of great importance nowadays, given the increasing difficulty of in vivo training. Audiovisual resources thus maximize the living resources used in education and minimize the problems of purely verbal instruction. Aim: To evaluate the use of digital video as a pedagogical strategy in surgical technique teaching in medical education. Methods: Cross-sectional study with 48 third-year medical students enrolled in the surgical technique discipline. They were divided into two groups of 12 pairs, both subject to the conventional method of teaching, with one group also exposed to an alternative method (video) showing the technical details. All students performed phlebotomy in the experimental laboratory, with evaluation and assistance from the teacher/monitor during execution. Finally, they answered a self-administered questionnaire on the teaching method after performing the operation. Results: Most of those who did not watch the video took longer to execute the procedure, asked more questions and needed more faculty assistance. All of those exposed to the video followed the chronology of execution and approved of the new method; 95.83% felt able to repeat the procedure by themselves, whereas 62.5% of the students who had only the conventional method reported a merely regular capacity to assimilate the technique. Both groups reported regular difficulty, but those who had not seen the video had more difficulty in performing the technique. Conclusion: The traditional method of teaching associated with the video favored understanding and conveyed safety, particularly because the activity requires technical skill. The technique with video visualization motivated and aroused interest, and facilitated the understanding and memorization of the steps of the procedure, benefiting the…
The Institute for the Achievement of Human Potential developed an effective crawling aid known as the Vehicle for Initial Crawling (VIC); the acronym is a tribute to the crawler's inventor, Hubert "Vic" Vykukal. The VIC is used by brain-injured children who are unable to crawl because of the weight-bearing and friction problems caused by gravity. It is a rounded plywood frame large enough to support the child's torso, leaving the arms and legs free to move. On its underside are three aluminum discs through which air is pumped to create an air-bearing surface with less friction than a film of oil. The upper side carries the connection to the air supply and a pair of straps that restrain the child and cause the device to move with him. The VIC is intended to recreate the normal neurological connection between brain and muscles. Through repeated use of the device, the child develops arm and leg muscles as well as coordination. Children are given alternating therapy, with and without the VIC, until eventually the device is no longer needed.
First aid - heart attack; First aid - cardiopulmonary arrest; First aid - cardiac arrest ... of patients with unstable angina/non-ST-elevation myocardial infarction (updating the 2007 guideline and replacing the 2011 ...
... aids typically cannot be custom-fit. What are costs and styles of hearing aids? Hearing aids vary ... and for improvement in hearing tones. Real ear measurements may also be done, which determine how much ...
... Patient & Caregiver Education » Fact Sheets Neurological Complications of AIDS Fact Sheet Table of Contents (click to jump ... Where can I get more information? What is AIDS? AIDS (acquired immune deficiency syndrome) is a condition ...
... dientes Video: Getting an X-ray HIV and AIDS KidsHealth > For Kids > HIV and AIDS Print A ... serious infection. continue How Many People Have HIV/AIDS? Since the discovery of the virus in 1983, ...
Coyle, E F
The catabolism of bodily fuels provides the energy for muscular work. Work output can be limited by the size of fuel reserves, the rate of their catabolism, the build-up of by-products, or the neurologic activation of muscle. A substance that favorably affects a step that is normally limiting, and thus increases work output, can be considered an ergogenic aid. The maximal amount of muscular force generated during brief contractions can be acutely increased during hypnosis and with the ingestion of a placebo or psychomotor stimulant. This effect is most obvious in subjects under laboratory conditions and is less evident in athletes who are highly motivated prior to competition. Fatigue is associated with acidosis in the working musculature when attempts are made to maximize work output during a 4- to 15-minute period. Sodium bicarbonate ingestion may act to buffer the acid produced, provided that blood flow to the muscle is adequate. Prolonged intense exercise can be maintained for approximately two hours before carbohydrate stores become depleted. Carbohydrate feedings delay fatigue during prolonged exercise, especially in subjects who display a decline in blood glucose during exercise in the fasting state. Caffeine ingestion prior to an endurance bout has been reported to allow an individual to exercise somewhat more intensely than he or she would otherwise. Its effect may be mediated by augmenting fat metabolism or by altering the perception of effort. Amphetamines may act in a similar manner. During prolonged exercise that results in dehydration and hyperthermia, water ingestion can offset fluid losses and allow an individual to better maintain work output while substantially reducing the risk of heat-related injuries.
Malkames, James P.; And Others
This bibliography represents a collection of 1,300 book volumes and audiovisual materials collected by the Luzerne County Community College Library in support of the college's Hotel and Restaurant Management curriculum. It covers such diverse topics as advertising, business practices, decoration, nutrition, hotel law, insurance, landscaping, health…
Lindsay, E.; Good, M.
Remote and virtual laboratory classes are an increasingly prevalent alternative to traditional hands-on laboratory experiences. One of the key issues with these modes of access is the provision of adequate audiovisual (AV) feedback to the user, which can be a complicated and resource-intensive challenge. This paper reports on a comparison of two…
Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.
Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…
... practices. (b) Protect audiovisual records, including those recorded on digital media or magnetic sound or video media, from accidental or deliberate alteration or erasure. (c) If different versions of... records (e.g., for digital files, use file naming conventions), that clarify connections between...
Lee, Hweeling; Noppeney, Uta
This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech. PMID:25147539
Rowe, Sue Ellen, Comp.
Audiovisual materials suitable for the teaching of nutrition are listed. Materials include coloring books, flannelboard stories, games, kits, audiotapes, records, charts, posters, study prints, films, videotapes, filmstrips, slides, and transparencies. Each entry contains bibliographic data, educational level, price, and evaluation. Material is…
National Nutrition Education Clearing House, Berkeley, CA.
This bibliography contains reviews of more than 250 audiovisual materials in eight subject areas related to nutrition: (1) general nutrition; (2) life cycle; (3) diet/health and disease; (4) health and athletics; (5) food - general; (6) food preparation and service; (7) food habits and preferences; and (8) food economics and concerns. Materials…
Ikumi, Nara; Soto-Faraco, Salvador
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor for cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529
Waffen, Leslie; And Others
A representative selection of the National Archives and Records Services' audiovisual collection relating to black history is presented. The intention is not to provide an exhaustive survey, but rather to indicate the breadth and scope of materials available for study and to suggest areas for concentrated research. The materials include sound…
Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.
Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…
Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo
Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…
Faris, Gene; Sherman, Mendel
Quantitative guidelines for use in determining the audiovisual (AV) needs of educational institutions were developed by the October 14-16, 1965 Seminar of the NDEA (National Defense Education Act) Faris-Sherman study. The guidelines that emerged were based in part on a review of past efforts and existing standards but primarily reflected the…
Based mainly on the experiences of seven Dutch public libraries during a 5-year study, this report discusses issues related to the integration of audiovisual materials and equipment into existing library collections. Following a brief introduction, its contents are divided into chapters: (1) Collection Building; (2) Processing; (3) Cataloguing;…
Thompson, Anthony Hugh
Designed to provide the librarian with suggestions and guidelines for storing and preserving audiovisual materials, this pamphlet is divided into four major chapters: (1) Normal Use Storage Conditions; (2) Natural Lifetime, Working Lifetime and Long-Term Storage; (3) Handling; and (4) Shelving of Normal Use Materials. Topics addressed include:…
Cobun, Ted; And Others
This document is a stage in a study to formulate quantitative guidelines for the audio-visual communications field, being conducted by Doctors Gene Faris and Mendel Sherman under a National Defense Education Act contract. The standards listed here have been officially approved and adopted by several agencies, including the Department of…
Albert, Richard N.
Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…
Lowry, Cheryl Meredith; And Others
Transcripts for each of four audiovisual presentations, components of the Career Planning Support System (CPSS), are contained in this package. (CPSS is a comprehensive guidance program management system designed to provide information for local high schools to design, implement, and evaluate an upgraded career guidance program. CPSS describes how…
Mercier, Cletus R.
This paper describes the adaptation of instructional modules and audio-visual tapes for an engineering graphics course to a course entitled "Technical Drawing for Applied Art." This is a service course taught by the engineering department for the applied art department in the College of Home Economics. A separate problem package was utilized with…
Evans, Shirley King
This annotated bibliography contains 327 citations from AGRICOLA, the U.S. Department of Agriculture database, dating from January 1979 through May 1990. The bibliography cites books, print materials, and audiovisual materials on the subject of nutrition education for grades preschool through six. Each citation contains complete bibliographic…
Casado, Maria Isabel; Castano, Gloria; Arraez-Aybar, Luis Alfonso
This study presents the design, effect and utility of using audiovisual material containing real images of dissected human cadavers as an innovative educational strategy (IES) in the teaching of Human Anatomy. The goal is to familiarize students with the practice of dissection and to transmit the importance and necessity of this discipline, while…
Benoît, C; Mohamadi, T; Kandel, S
Bimodal perception leads to better speech understanding than auditory perception alone. We evaluated the overall benefit of lip-reading on natural utterances of French produced by a single speaker. Eighteen French subjects with good audition and vision were administered a closed-set identification test of VCVCV nonsense words consisting of three vowels [i, a, y] and six consonants [b, v, z, ʒ, R, l]. Stimuli were presented under both auditory and audio-visual conditions with white noise added at various signal-to-noise ratios. Identification scores were higher in the bimodal condition than in the auditory-alone condition, especially in situations where acoustic information was reduced. The auditory and audio-visual intelligibility of the three vowels [i, a, y] averaged over the six consonantal contexts was evaluated as well. Two different hierarchies of intelligibility were found. Auditorily, [a] was most intelligible, followed by [i] and then by [y]; whereas visually [y] was most intelligible, followed by [a] and [i]. We also quantified the contextual effects of the three vowels on the auditory and audio-visual intelligibility of the consonants. Both the auditory and the audio-visual intelligibility of surrounding consonants was highest in the [a] context, followed by the [i] context and lastly the [y] context.
National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.
This publication is a catalog that contains the National Audiovisual Center's materials on Science. There are twelve areas in this catalog: Aerospace Technology, Astronomy, Biology, Chemistry, Electronics and Electricity, Energy, Environmental Studies, Geology, Mathematics and Computer Science, Oceanography, Physics, and Weather/Meteorology. Each…
Aguaded-Gomez, Ignacio; Perez-Rodriguez, M. Amor
Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (society-network), where information and communication…
Education, Audiovisual and Culture Executive Agency, European Commission, 2011
The Education, Audiovisual and Culture Executive Agency (EACEA) is a public body created by a Decision of the European Commission and operates under its supervision. It is located in Brussels and has been operational since January 2006. Its role is to manage European funding opportunities and networks in the fields of education and training,…
Keitel, Christian; Müller, Matthias M
Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
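Quantifying a steady-state response "in the spectral domain," as this abstract describes, amounts to reading the amplitude spectrum at the stimulation frequency. A hedged sketch with a synthetic 3.14 Hz signal standing in for EEG; the sampling rate, duration, and amplitudes are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def ssr_amplitude(signal, fs, freq):
    """Single-sided amplitude spectrum value at (the bin nearest) `freq`."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return amps[np.argmin(np.abs(freqs - freq))]

fs = 500                          # sampling rate in Hz (assumed)
n = 25000                         # 50 s -> 0.02 Hz resolution, so 3.14 Hz
t = np.arange(n) / fs             # falls exactly on a frequency bin
rng = np.random.default_rng(2)
eeg = 2.0 * np.sin(2 * np.pi * 3.14 * t) + 0.5 * rng.standard_normal(n)
print(ssr_amplitude(eeg, fs, 3.14))   # close to the true amplitude of 2.0
```

Choosing a recording length that places the stimulation rate exactly on an FFT bin avoids spectral leakage, which is why the duration here is a multiple of the 3.14 Hz period.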
MacGregor, Alex R.
The process of purchasing audiovisual hardware should eventually arrive at a point where the user's requirements and the industry's standards and guidelines correlate, keeping budgets in mind. Technical personnel should exchange more information on their decision making, preferably through small discussion groups, or users should unite and…
Wilson, Amanda H.; Paré, Martin; Munhall, Kevin G.
Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task, and in Experiment 3 (N = 20) an audiovisual task, while having their gaze behavior monitored using eye-tracking equipment. Results: In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect. Conclusions: The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze. PMID:27537379
Schwartz, Jean-Luc; Berthommier, Frederic; Savariaux, Christophe
Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances "sensitivity" to acoustic information,…
Olube, Friday K.
The purpose of this study is to examine primary school children's responses to the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' responses to educational television programmes and the hindrances to these…
van Eijk, Rob L. J.; Kohlrausch, Armin; Juola, James F.; van de Par, Steven
Audio-visual stimulus pairs, presented at various relative delays, are commonly judged as being "synchronous" over a range of delays from about -50 ms (audio leading) to +150 ms (video leading). The center of this range is an estimate of the point of subjective simultaneity (PSS). The judgment boundaries, where "synchronous" judgments yield to a…
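The PSS estimate this abstract describes, the center of the range over which "synchronous" judgments dominate, can be computed by interpolating the two 50% judgment boundaries. A small sketch with hypothetical response proportions (not the study's data):

```python
def crossings(soa, p, level=0.5):
    """Linearly interpolated SOA values where the response proportion
    crosses `level`."""
    xs = []
    for i in range(len(p) - 1):
        lo, hi = sorted((p[i], p[i + 1]))
        if lo <= level <= hi and p[i] != p[i + 1]:
            frac = (level - p[i]) / (p[i + 1] - p[i])
            xs.append(soa[i] + frac * (soa[i + 1] - soa[i]))
    return xs

# Hypothetical proportions of "synchronous" judgments at each SOA (ms);
# negative SOA = audio leading, positive SOA = video leading.
soa    = [-200, -100,  -50,    0,   50,  100,  150,  250]
p_sync = [0.10, 0.35, 0.70, 0.90, 0.95, 0.85, 0.60, 0.20]

bounds = crossings(soa, p_sync)
pss = (bounds[0] + bounds[-1]) / 2   # midpoint of the synchrony range
```

With these illustrative numbers, the left boundary falls near -79 ms and the right near +175 ms, giving a PSS of roughly +48 ms, i.e., a video-leading bias of the kind the abstract reports.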
Levine, Linda New
This paper describes some teaching techniques developed in the author's middle school and high school ESL classes. The techniques described here use audio-visual devices and student production of media as a motivational tool as well as a method of providing for spontaneous language practice and communicative competence. Some of the techniques and…
Pieper, William J.; And Others
Two Automated Apprenticeship Training (AAT) courses were developed for Air Force Security Police Law Enforcement and Security specialists. The AAT was a systematized audio-visual approach to self-paced job training employing an easily operated teaching device. AAT courses were job specific and based on a behavioral task analysis of the two…
Edlund, Jens; Beskow, Jonas
Evaluation of methods and techniques for conversational and multimodal spoken dialogue systems is complex, as is gathering data for the modeling and tuning of such techniques. This article describes MushyPeek, an experiment framework that allows us to manipulate the audiovisual behavior of interlocutors in a setting similar to face-to-face…
A description of the syllabus of a course in ancient and modern tragedy given in English at the University of Illinois. An annotated list of the plays studied, suggested films, recordings and commentaries, and sources for audio-visual materials are included. (AMH)
Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.
The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…
Ketzer, Jan W.
Audiovisual, or mass media, education can play a significant role in children's social, emotional, cognitive, sensory, motor, and creative development. The field includes all school activities which teach children to interact with and visualize ideas. Students can be involved in…
This paper reports a controlled field experiment conducted to determine the effects and interaction of five independent variables with an audiovisual slide-tape program: presence of learning objectives, location of learning objectives, type of knowledge, sex of learner, and retention of learning. Participants were university students in a general…
Harris, Nancy G., Comp.
Titles and prices for filmstrips with records, filmstrips, films, cassettes, film loops, disc recordings for utilization as audiovisual resources for library services for the mentally retarded are listed. A list of publishers and distributors of suitable educational and library materials is also provided. (AB)
Rousseau, Isabelle; Onslow, Mark; Packman, Ann; Jones, Mark
Purpose: To determine whether measures of stuttering frequency and measures of overall stuttering severity in preschoolers differ when made from audio-only recordings compared with audiovisual recordings. Method: Four blinded speech-language pathologists who had extensive experience with preschoolers who stutter measured stuttering frequency and…
Recent experimentation with audio-visual (A-V) materials has provided insight into the language learning process. Researchers and teachers alike have recognized the importance of using A-V materials to achieve goals related to meaningful and relevant communication, retention and recall of language items, non-verbal aspects of communication, and…
... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... advertising. In the case of advertisements for smokeless tobacco on videotapes, cassettes, or...
Encyclopaedia Britannica, Inc., Chicago, IL.
This catalogue of educational films and other audiovisual materials consists predominantly of films in Spanish and English which are intended for use in elementary and secondary schools. A wide variety of topics including films for social studies, language arts, humanities, physical and natural sciences, safety and health, agriculture, physical…
Magnan, Annie; Ecalle, Jean; Veuillet, Evelyne; Collet, Lionel
A research project was conducted in order to investigate the usefulness of intensive audio-visual training administered to children with dyslexia involving daily voicing exercises. In this study, the children received such voicing training (experimental group) for 30 min a day, 4 days a week, over 5 weeks. They were assessed on a reading task…
Federal Aviation Administration (DOT), Washington, DC. Office of Public Affairs.
Currently available aviation education resource materials are listed alphabetically by title under four headings: (1) career information; (2) audiovisual materials; (3) publications; and (4) periodicals. Each entry includes: title; format (16mm film, slides, slide/tape presentation, VHS/Beta videotape, book, booklet, newsletter, pamphlet, poster,…
Designed to help instructors choose appropriate audio-visual materials for the basic course in photojournalism, this bibliography contains 11 annotated entries. Annotations include the name of the materials, running time, whether black-and-white or color, and names of institutions from which the materials can be secured, as well as brief…
Audiovisual Translation (AVT) and Assistive Technology (AST) are two fields that share common grounds within accessibility-related research, yet they are rarely studied in combination. The reason most often lies in the fact that they have emerged from different disciplines, i.e. Translation Studies and Computer Science, making a possible combined…
Imani, Sahar Sadat Afshar
Modular EFL Educational Program has managed to offer specialized language education in two specific fields: Audio-visual Materials Translation and Translation of Deeds and Documents. However, no explicit empirical studies can be traced on both internal and external validity measures as well as the extent of compatibility of both courses with the…
Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo
Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of…
This article discusses the exceptional nature of the description of moving images for television archives, deriving from their audiovisual nature, and of the specifications in the queries of journalists as users of the Document Information System. It is suggested that there is a need to control completely "Anonymous Groups"--groups without any…
Haberbosch, John F.; And Others
Readings and audiovisual materials, selected especially for educators, related to the study of Afro-American, Hispano-American, and American Indian cultures are included in this 366-item annotated bibliography covering the period from 1861 to 1968. Historical, cultural, and biographical materials are included for each of the three cultures as well…
Federal Audiovisual Policy Act. Hearing before a Subcommittee of the Committee on Government Operations, House of Representatives, Ninety-Eighth Congress, Second Session on H.R. 3325 to Establish in the Office of Management and Budget an Office to Be Known as the Office of Federal Audiovisual Policy, and for Other Purposes.
Congress of the U. S., Washington, DC. House Committee on Government Operations.
The views of private industry and government are offered in this report of a hearing on the Federal Audiovisual Policy Act, which would establish an office to coordinate federal audiovisual activity and require most audiovisual material produced for federal agencies to be acquired under contract from private producers. Testimony is included from…
Curran, James R.
As early as the 1930s, the term Master Hearing Aid (MHA) described a device used in the fitting of hearing aids. In its original form, the MHA was a desktop system that allowed for simulated or actual adjustment of hearing aid components that resulted in a changed hearing aid response. Over the years the MHA saw many embodiments and contributed to a number of rationales for the fitting of hearing aids. During these same years, the MHA was viewed by many as an inappropriate means of demonstrating hearing aids; the audio quality of the desktop systems was often superior to that of the hearing aids themselves. These opinions and the evolution of the MHA have molded the modern perception of hearing aids and the techniques used in the fitting of hearing aids. This article reports on a history of the MHA and its influence on the fitting of hearing aids. PMID:23686682
... Consumer Products Hearing Aids How to get Hearing Aids Share Tweet Linkedin Pin it More sharing options ... my hearing aids? How do I get hearing aids? Before getting a hearing aid, you should consider ...
... elements that are needed for future preservation, duplication, and reference for audiovisual records... captioning information may be maintained in another file such as a database if the file number correlation...
Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor
The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.
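Presenting stimuli "at 0 dB signal-to-noise ratio," as in the training conditions above, means scaling the noise so its power equals the speech power. A generic sketch of that scaling (a stand-in tone replaces the study's actual speech recordings):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so speech power / noise power equals the requested
    SNR in dB, then return the mixture."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / 10 ** (snr_db / 10)
    return speech + noise * np.sqrt(target_p_noise / p_noise)

# A 1-s, 220 Hz tone at 16 kHz stands in for a speech recording.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(0).standard_normal(sr)
mixed = mix_at_snr(speech, noise, snr_db=0.0)  # equal speech and noise power
```

At 0 dB the two powers match exactly; positive SNR values attenuate the noise, negative values boost it.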
Spires, Norman S.
Article comments on the present needs of teachers in training where audiovisual matters, including radio broadcasting, are concerned and outlines the way in which such training takes place at Southlands College and the objectives sought. (Author)
Wu, Jinglong; Yang, Jiajia; Yu, Yinghua; Li, Qi; Nakamura, Naoya; Shen, Yong; Ohta, Yasuyuki; Yu, Shengyuan; Abe, Koji
The human brain can anatomically combine task-relevant information from different sensory pathways to form a unified perception; this process is called multisensory integration. The aim of the present study was to test whether the multisensory integration abilities of patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD) differed from those of normal aged controls (NC). A total of 64 subjects were divided into three groups: NC individuals (n = 24), MCI patients (n = 19), and probable AD patients (n = 21). All of the subjects were asked to perform three separate audiovisual integration tasks and were instructed to press the response key associated with the auditory, visual, or audiovisual stimuli in the three tasks. The accuracy and response time (RT) of each task were measured, and the RTs were analyzed using cumulative distribution functions to observe the audiovisual integration. Our results suggest that the mean RT of patients with AD was significantly longer than those of patients with MCI and NC individuals. Interestingly, we found that patients with both MCI and AD exhibited adequate audiovisual integration, and a greater peak (time bin with the highest percentage of benefit) and broader temporal window (time duration of benefit) of multisensory enhancement were observed. However, the onset time and peak benefit of audiovisual integration in MCI and AD patients occurred significantly later than did those of the NC. This finding indicates that the cognitive functional deficits of patients with MCI and AD contribute to the differences in performance enhancements of audiovisual integration compared with NC.
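The peak and temporal window of enhancement described in this abstract come from comparing RT cumulative distribution functions. A schematic sketch with synthetic RTs; the benefit here is defined as the AV CDF minus the race-model sum of the unisensory CDFs, one common convention, which may differ in detail from the paper's exact analysis:

```python
import numpy as np

def cdf(rt, t):
    """Empirical cumulative probability P(RT <= t)."""
    rt = np.asarray(rt)
    return np.mean(rt[:, None] <= t, axis=0)

def benefit(rt_a, rt_v, rt_av, t):
    """Percentage-point benefit of AV responses over the race-model bound."""
    bound = np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)
    return 100 * (cdf(rt_av, t) - bound)

rng = np.random.default_rng(3)
t = np.arange(150, 601, 10)                # 10-ms time bins
b = benefit(rng.normal(380, 50, 300),      # auditory RTs (synthetic)
            rng.normal(400, 50, 300),      # visual RTs (synthetic)
            rng.normal(330, 45, 300),      # audiovisual RTs (synthetic)
            t)
peak_bin = t[np.argmax(b)]                 # time bin with the highest benefit
window = 10 * np.sum(b > 0)                # duration (ms) of positive benefit
```

A later `peak_bin` and a broader `window`, relative to controls, are exactly the delayed-but-broadened integration profile the abstract reports for the MCI and AD groups.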
Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa
To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. Second-language (L2) learners of American Sign Language (ASL) performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitantly increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing, and possibly lipreading, during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals.
Kleinke, C L; Spangler, A S
Sixty chronic back-pain patients were administered the audiovisual taxonomy of pain behavior during their first and last weeks in an inpatient multidisciplinary pain clinic. Audiovisual total score provided a useful index of pain behavior with a suitable frequency and reliability, while offering unique variance as a measure of treatment outcome. Patients' pain behaviors upon admission to the pain program were positively correlated with the following background variables: receiving worker's compensation, pounds overweight, and number of back surgeries. Patients' pain behaviors upon completion of the pain program were significantly correlated with their preferences for pain treatment modalities. High levels of pain behavior correlated with a preference for treatments of ice and heat. Low levels of pain behavior correlated with a preference for physical therapy, social work, lectures, and relaxation. It was suggested that treatment outcome in a multidisciplinary pain clinic is more immediately related to patients' coping styles and their choice of pain treatment modalities than to their demographics and personalities.
Lewkowicz, David J.; Flom, Ross
Binding is key in multisensory perception. This study investigated the audio-visual temporal binding window in 4-, 5-, and 6-year-old children (total N=120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked whether the voice and face went together (Experiment 1) or whether the desynchronized videos differed from the synchronized one (Experiment 2). Four-year-olds detected the 666 ms asynchrony, 5-year-olds detected the 666 and 500 ms asynchrony, and 6-year-olds detected all asynchronies. These results show that the audio-visual temporal binding window narrows slowly during early childhood and that it is still wider at six years of age than in older children and adults. PMID:23888869
Prabhakar, A R; Marwah, N; Raju, O S
Pain is not the sole reason for fear of dentistry. Anxiety, or the fear of the unknown during dental treatment, is a major factor and has long been a major concern for dentists. Therefore, the main aim of this study was to evaluate and compare two distraction techniques, viz., audio distraction and audiovisual distraction, in the management of anxious pediatric dental patients. Sixty children aged 4-8 years were divided into three groups. Each child had four dental visits: a screening visit, a prophylaxis visit, a cavity preparation and restoration visit, and an extraction visit. Each child's anxiety level at each visit was assessed using a combination of four measures: Venham's picture test, Venham's rating of clinical anxiety, pulse rate, and oxygen saturation. The values obtained were tabulated and subjected to statistical analysis. It was concluded that the audiovisual distraction technique was more effective than the audio distraction technique in managing anxious pediatric dental patients.
Molholm, Sophie; Sehatpour, Pejman; Mehta, Ashesh D; Shpaner, Marina; Gomez-Ramirez, Manuel; Ortigue, Stephanie; Dyke, Jonathan P; Schwartz, Theodore H; Foxe, John J
Intracranial recordings from three human subjects provide the first direct electrophysiological evidence for audio-visual multisensory processing in the human superior parietal lobule (SPL). Auditory and visual sensory inputs project to the same highly localized region of the parietal cortex with auditory inputs arriving considerably earlier (30 ms) than visual inputs (75 ms). Multisensory integration processes in this region were assessed by comparing the response to simultaneous audio-visual stimulation with the algebraic sum of responses to the constituent auditory and visual unisensory stimulus conditions. Significant integration effects were seen with almost identical morphology across the three subjects, beginning between 120 and 160 ms. These results are discussed in the context of the role of SPL in supramodal spatial attention and sensory-motor transformations.
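The integration measure used here, comparing the bimodal response with the algebraic sum of the unisensory responses, amounts to a simple waveform subtraction. A sketch with fabricated Gaussian evoked responses (all shapes and latencies below are illustrative, not the recorded data):

```python
import numpy as np

def integration_effect(resp_a, resp_v, resp_av):
    """Superadditivity index: AV response minus the algebraic sum (A + V).
    Deviations from zero indicate nonlinear multisensory integration."""
    return np.asarray(resp_av) - (np.asarray(resp_a) + np.asarray(resp_v))

# Fabricated evoked responses, one sample per ms over 0-300 ms
t = np.arange(300.0)
resp_a = np.exp(-0.5 * ((t - 130) / 20) ** 2)        # auditory, earlier arrival
resp_v = np.exp(-0.5 * ((t - 175) / 20) ** 2)        # visual, later arrival
extra = 0.3 * np.exp(-0.5 * ((t - 140) / 15) ** 2)   # nonlinear interaction term
resp_av = resp_a + resp_v + extra                    # simulated bimodal response

effect = integration_effect(resp_a, resp_v, resp_av)
```

If the bimodal response were exactly the sum of the unisensory responses, `effect` would be zero everywhere; here it recovers the injected interaction term.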
Meyur, R; Mitra, B; Adhikari, A; Mitra, D; Biswas, S; Sadhu, A
Nowadays medical teachers use different audiovisual (AV) aids in their classes to make the subject more interesting and understandable. To assess the impact of three common lecture delivery methods, viz., blackboard (BB), transparency with overhead projector (OHP), and PowerPoint presentation (PP), a questionnaire-based study was carried out among first-year MBBS students of R.G. Kar Medical College, Kolkata. One hundred forty students of the 2010-2011 academic session were exposed to the different teaching aids (BB, OHP, and PP) for ten months. They were taught Anatomy by different teachers who used all three AV aids in their lectures. They were then asked to respond to a questionnaire regarding the three AV aids. The students preferred blackboard teaching over OHP, and the result was statistically significant (p < 0.0001). BB teaching was also preferred over PP presentations (p < 0.02). In comparison with OHP, students preferred PP, though the difference was not statistically significant (p < 0.10). Most students still prefer blackboard teaching to modern AV aids like OHP and PP. For better understanding of a subject and improvement of student performance, a teacher should match lectures with the preferred AV aids and use them prudently.
Morís Fernández, Luis; Visser, Maya; Ventura-Campos, Noelia; Ávila, César; Soto-Faraco, Salvador
The interplay between attention and multisensory integration has proven to be a difficult question to tackle. There are almost as many studies showing that multisensory integration occurs independently from the focus of attention as studies implying that attention has a profound effect on integration. Addressing the neural expression of multisensory integration for attended vs. unattended stimuli can help disentangle this apparent contradiction. In the present study, we examine whether selective attention to sound pitch influences the expression of audiovisual integration in both behavior and neural activity. Participants were asked to attend to one of two auditory speech streams while watching a pair of talking lips that could be congruent or incongruent with the attended speech stream. We measured behavioral and neural responses (fMRI) to multisensory stimuli under attended and unattended conditions while physical stimulation was kept constant. Our results indicate that participants recognized words more accurately from an auditory stream that was both attended and audiovisually (AV) congruent, thus reflecting a benefit due to AV integration. On the other hand, no enhancement was found for AV congruency when it was unattended. Furthermore, the fMRI results indicated that activity in the superior temporal sulcus (an area known to be related to multisensory integration) was contingent on attention as well as on audiovisual congruency. This attentional modulation extended beyond heteromodal areas to affect processing in areas classically recognized as unisensory, such as the superior temporal gyrus or the extrastriate cortex, and to non-sensory areas such as the motor cortex. Interestingly, attention to audiovisual incongruence triggered responses in brain areas related to conflict processing (i.e., the anterior cingulate cortex and the anterior insula). Based on these results, we hypothesize that AV speech integration can take place automatically only when both the auditory and the visual streams are attended.
Calabrese, Leonard H.; Kelley, Dennis
This article discusses the onset and progression of AIDS, its importance as a public health issue, and reducing the risk of AIDS transmission among athletes and those who work with them, including team physicians and athletic trainers. (IAH)
Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki
Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.
Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W; Van der Smagt, M J
A factor that is often not considered in multisensory research is the distance from which information is presented. Interestingly, various studies have shown that the distance at which information is presented can modulate the strength of multisensory interactions. In addition, our everyday multisensory experience in near and far space is rather asymmetrical in terms of retinal image size and stimulus intensity. This asymmetry is the result of the relation between the stimulus-observer distance and its retinal image size and intensity: an object that is further away is generally smaller on the retina as compared to the same object when it is presented nearer. Similarly, auditory intensity decreases as the distance from the observer increases. We investigated how each of these factors alone, and their combination, affected audiovisual integration. Unimodal and bimodal stimuli were presented in near and far space, with and without controlling for distance-dependent changes in retinal image size and intensity. Audiovisual integration was enhanced for stimuli that were presented in far space as compared to near space, but only when the stimuli were not corrected for visual angle and intensity. The same decrease in intensity and retinal size in near space did not enhance audiovisual integration, indicating that these results cannot be explained by changes in stimulus efficacy or an increase in distance alone, but rather by an interaction between these factors. The results are discussed in the context of multisensory experience and spatial uncertainty, and underline the importance of studying multisensory integration in the depth space.
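The distance-dependent corrections described above follow from elementary geometry and the inverse-square law. A small sketch (function names and example values are ours, not the authors'):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle (degrees) subtended by an object of a given physical size."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

def matched_far_size(size_m, near_m, far_m):
    """Scale an object so its visual angle at `far_m` equals that at `near_m`;
    the size/distance ratio is preserved, so the retinal image size matches."""
    return size_m * far_m / near_m

def sound_level_gain_db(near_m, far_m):
    """Free-field gain (dB) needed at `far_m` to match the intensity at
    `near_m` under the inverse-square law (about 6 dB per doubling of
    distance)."""
    return 20 * math.log10(far_m / near_m)
```

Applying `matched_far_size` and `sound_level_gain_db` to the far-space stimuli corresponds to the "corrected for visual angle and intensity" conditions in the study.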
Pons, Ferran; Lewkowicz, David J
We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we tested a group of monolingual Spanish- and Catalan-learning 8-month-old infants with a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video where the audio stream now preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. Results indicated that in both experiments, infants detected a 666 and a 500 ms asynchrony. That is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared to previous results from infant studies with isolated audiovisual syllables, these results show that infants are more sensitive to A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infant response to linguistic input at this age.
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor saying a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
Butler, Andrew J; James, Thomas W; James, Karin Harman
Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.
García-Pérez, Miguel A; Alcalá-Quintana, Rocío
Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.
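The psychometric synchrony function at the core of such models can be illustrated with a toy Gaussian version: the perceived audiovisual latency difference (offset plus a processing bias, corrupted by sensory noise) must fall inside a decision window for a "synchronous" response. This is a deliberately simplified sketch, not the independent-channels model the authors actually fitted, and all parameter values are invented:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def p_synchronous(soa_ms, tau=30.0, sigma=80.0, delta=150.0):
    """Probability of a 'synchronous' judgment at audiovisual offset `soa_ms`:
    the perceived latency difference (soa + bias tau, Gaussian noise sigma)
    must land inside the decision window [-delta, +delta]."""
    mu = soa_ms + tau
    return phi((delta - mu) / sigma) - phi((-delta - mu) / sigma)
```

Because the sensory bias `tau` shifts the whole function, a model of this family can capture the asymmetric synchrony-judgment data mentioned in the abstract, with separate parameters mediating sensory (tau, sigma) and decisional (delta) effects.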
Cui, Guoqiang; Gopalan, Siddharth; Yamamoto, Tokihiro; Berger, Jonathan; Maxim, Peter G; Keall, Paul J
A respiratory training system based on audiovisual biofeedback has been implemented at our institution. It is intended to improve patients' respiratory regularity during four-dimensional (4D) computed tomography (CT) image acquisition. The purpose is to help eliminate the artifacts in 4D-CT images caused by irregular breathing, as well as improve delivery efficiency during treatment, where respiratory irregularity is a concern. This article describes the commissioning and quality assurance (QA) procedures developed for this peripheral respiratory training system, the Stanford Respiratory Training (START) system. Using the Varian real-time position management system for the respiratory signal input, the START software was commissioned and able to acquire sample respiratory traces, create a patient-specific guiding waveform, and generate audiovisual signals for improving respiratory regularity. Routine QA tests that include hardware maintenance, visual guiding-waveform creation, auditory sounds synchronization, and feedback assessment, have been developed for the START system. The QA procedures developed here for the START system could be easily adapted to other respiratory training systems based on audiovisual biofeedback.
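The "patient-specific guiding waveform" step can be approximated by time-normalizing recorded breathing cycles and averaging them. A sketch of that idea only (not the actual START implementation, whose details are not given here):

```python
import numpy as np

def guiding_waveform(cycles, n_points=100):
    """Build a guiding waveform by resampling each recorded breathing cycle
    to `n_points` samples (time normalization) and averaging across cycles."""
    resampled = [
        np.interp(np.linspace(0, len(c) - 1, n_points), np.arange(len(c)), c)
        for c in cycles
    ]
    return np.mean(resampled, axis=0)

# Fabricated respiratory cycles (displacement vs. sample index), with
# slightly different durations as in real breathing
t1 = np.sin(np.linspace(0, 2 * np.pi, 90))
t2 = np.sin(np.linspace(0, 2 * np.pi, 110))
wave = guiding_waveform([t1, t2])
```

The resulting average cycle could then be looped and displayed to the patient as the visual guide, with audio cues synchronized to its inhale/exhale turning points.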
Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun
The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651
Marchant, Jennifer L; Driver, Jon
Understanding how the brain extracts and combines temporal structure (rhythm) information from events presented to different senses remains unresolved. Many neuroimaging beat perception studies have focused on the auditory domain and show the presence of a highly regular beat (isochrony) in "auditory" stimulus streams enhances neural responses in a distributed brain network and affects perceptual performance. Here, we acquired functional magnetic resonance imaging (fMRI) measurements of brain activity while healthy human participants performed a visual task on isochronous versus randomly timed "visual" streams, with or without concurrent task-irrelevant sounds. We found that visual detection of higher intensity oddball targets was better for isochronous than randomly timed streams, extending previous auditory findings to vision. The impact of isochrony on visual target sensitivity correlated positively with fMRI signal changes not only in visual cortex but also in auditory sensory cortex during audiovisual presentations. Visual isochrony activated a similar timing-related brain network to that previously found primarily in auditory beat perception work. Finally, activity in multisensory left posterior superior temporal sulcus increased specifically during concurrent isochronous audiovisual presentations. These results indicate that regular isochronous timing can modulate visual processing and this can also involve multisensory audiovisual brain mechanisms.
Kopp, Franziska; Dietrich, Claudia
Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071
Ohki, Takefumi; Gunji, Atsuko; Takei, Yuichi; Takahashi, Hidetoshi; Kaneko, Yuu; Kita, Yosuke; Hironaga, Naruhito; Tobimatsu, Shozo; Kamio, Yoko; Hanakawa, Takashi; Inagaki, Masumi; Hiraki, Kazuo
Though recent studies have elucidated the earliest mechanisms of processing in multisensory integration, our understanding of how multisensory integration of more sustained and complicated stimuli is implemented in higher-level association cortices is lacking. In this study, we used magnetoencephalography (MEG) to determine how neural oscillations alter local and global connectivity during multisensory integration processing. We acquired MEG data from 15 healthy volunteers performing an audio-visual speech matching task. We selected regions of interest (ROIs) using whole brain time-frequency analyses (power spectrum density and wavelet transform), then applied phase amplitude coupling (PAC) and imaginary coherence measurements to them. We identified prominent delta band power in the temporal pole (TP), and a remarkable PAC between delta band phase and beta band amplitude. Furthermore, imaginary coherence analysis demonstrated that the temporal pole and well-known multisensory areas (e.g., posterior parietal cortex and post-central areas) are coordinated through delta-phase coherence. Thus, our results suggest that modulation of connectivity within the local network, and of that between the local and global network, is important for audio-visual speech integration. In short, these neural oscillatory mechanisms within and between higher-level association cortices provide new insights into the brain mechanism underlying audio-visual integration. PMID:27897244
Studies of the audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance perception are tested against several multisensory models, including a modified causal inference model that incorporates predictions of estimate distributions. In our study, the audiovisual perception of distance was overall better explained by Bayesian causal inference than by other traditional models, such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m. PMID:27959919
Kim, Jongwan; Wang, Jing; Wedell, Douglas H; Shinkareva, Svetlana V
Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli. PMID:27598534
Bigliassi, Marcelo; Silva, Vinícius B; Karageorghis, Costas I; Bird, Jonathan M; Santos, Priscila C; Altimari, Leandro R
Motivational audiovisual stimuli such as music and video have been widely used in the realm of exercise and sport as a means by which to increase situational motivation and enhance performance. The present study addressed the mechanisms that underlie the effects of motivational stimuli on psychophysiological responses and exercise performance. Twenty-two participants completed fatiguing isometric handgrip-squeezing tasks under two experimental conditions (motivational audiovisual condition and neutral audiovisual condition) and a control condition. Electrical activity in the brain and working muscles was analyzed by use of electroencephalography and electromyography, respectively. Participants were asked to squeeze the dynamometer maximally for 30 s. A single-item motivation scale was administered after each squeeze. Results indicated that task performance and situational motivation were superior under the influence of motivational stimuli when compared to the other two conditions (~20% and ~25%, respectively). The motivational stimulus downregulated the predominance of low-frequency waves (theta) in the right frontal regions of the cortex (F8), and upregulated high-frequency waves (beta) in the central areas (C3 and C4). It is suggested that motivational sensory cues serve to readjust electrical activity in the brain; a mechanism by which the detrimental effects of fatigue on the efferent control of working muscles are ameliorated.
Nowak, Geraldine D.
National Library of Medicine (NLM) Literature Searches are selected computer-generated bibliographies produced by the Library's Medical Literature Analysis and Retrieval System (MEDLARS). Selection is made on the basis of a significant current interest of the subject matter to a substantial audience of health professionals. Each Literature Search…
Nowak, Geraldine D.
National Library of Medicine (NLM) Literature Searches are selected computer-generated bibliographies produced by the Library's Medical Literature Analysis and Retrieval System (MEDLARS). Selection is made on the basis of a significant current interest of the subject matter to a substantial audience of health professionals. Each Literature Search…
Mathew, Nalliveettil George; Alidmat, Ali Odeh Hammoud
A resourceful English language teacher equipped with eclecticism is desirable in the English as a foreign language classroom. The challenges of classroom instruction increase when prescribed English as a Foreign Language (EFL) course books (textbooks) contain too many interactive language proficiency activities. Most importantly, it has…
Wooten, Donald B.
Examines the increase in AIDS patients in rural California, which is greater than that in urban areas, including AIDS population projections through 1991. Describes differences between AIDS populations in rural and urban areas and relates these to state expenditure patterns and differential needs. (DHP)
Slesnick, Irwin L.
Focuses on public education about the acquired immune deficiency syndrome (AIDS) epidemic. Discusses the problems of a second epidemic of fear and anxiety. Presents several questions for classroom discussion and analysis of the public fear of AIDS. Gives some statistics highlighting misinformation about AIDS. (CW)
Stroke: First aid. By Mayo Clinic Staff. A stroke occurs when there's bleeding into your brain or when normal blood flow to ... next several hours. Seek immediate medical assistance. A stroke is a true emergency. The sooner treatment is ...
Tilaro, Angie; Rossett, Allison
Explains how to create job aids that employees will be motivated to use, based on a review of pertinent literature and interviews with professionals. Topics addressed include linking motivation with job aids; Keller's ARCS (Attention, Relevance, Confidence, Satisfaction) model of motivation; and design strategies for job aids based on Keller's…
HIV and AIDS (KidsHealth, For Teens). ... in human history. HIV causes a condition called acquired immunodeficiency syndrome, better known as AIDS. HIV destroys a type ...
Zhao, Bo; Bradbury, Katharine
This paper designs a new equalization-aid formula based on fiscal gaps of local communities. When states are in transition to a new local aid formula, the issue of whether and how to hold existing aid harmless poses a challenge. The authors show that some previous studies and the formulas derived from them give differential weights to existing and…
CPI's spinoff from miniaturized pacemaker circuitry is the new heart-assist device, the AID implantable automatic pulse generator. The AID pulse generator monitors the heart continuously, recognizes the onset of fibrillation, then administers a corrective electrical shock. The unit includes a minicomputer, a power source, and two electrodes that sense heart activity. An associated system was also developed: an external recorder to be worn by AID patients and a physician's console to display the data stored by the recorder. The system provides a record of fibrillation occurrences and the ensuing defibrillation.
Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available which can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiratory irregularity. The rationale of this thesis was to study the improvement in regularity of respiratory motion achieved by breathing coaching for lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory gated radiotherapy. It was also observed that duty cycles below 30% showed insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. Modeling the respiratory cycles, it was found that cosine and cosine-4 models had the best correlation with individual respiratory cycles. The overall respiratory motion probability distribution
Moody, K. W.
Intended primarily for language teachers in underfinanced school districts or underdeveloped countries where educational resources are scarce, this article suggests ways and means of using material resources as instructional realia. The author proposes several principles on which the use of audiovisual materials in the classroom should be based.…
What Is HIV/AIDS? Human Immunodeficiency Virus (HIV): HIV stands for human ... use the HIV Testing & Care Services Locator. Acquired Immunodeficiency Syndrome (AIDS): AIDS stands for acquired immunodeficiency syndrome. AIDS ...
HIV, AIDS, and the Future (Past Issues, Summer 2009). ... and your loved ones from HIV/AIDS. The AIDS Memorial Quilt: In 1987, a total of 1, ...
Grugel, Richard N. (Inventor)
Progress in hearing aids has come a long way. Yet despite such progress, hearing aids are not the perfect answer to many hearing problems. Some adult ears cannot accommodate tightly fitting hearing aids. Mouth movements such as chewing and talking, as well as athletic or other active endeavors, also lead to loosely fitting ear molds. It is well accepted that loosely fitting hearing aids are the cause of feedback noise. Since feedback noise is the most common complaint of hearing aid wearers, it has been the subject of various patents. Herein, a hearing aid assembly is provided that eliminates feedback noise. The assembly combines a hearing aid with a headset developed to constrict feedback noise.
Desantis, Andrea; Haggard, Patrick
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have a strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-presses generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
Banister, Richard E.
Two American history courses taught by different multimedia methods were compared. Each course was semi-automated in order to free the instructor's time for question and answer periods. One experimental group of junior college students took the course using FM radio, an illustrated syllabus, and student response sheets. Another group took the same…
Kaganovich, Natalya; Schumaker, Jennifer; Macias, Danielle; Gustafson, Dana
Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we…
Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias
Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…
Gilmore, N. J.; Beaulieu, R.; Steben, M.; Laverdière, M.
Acquired immunodeficiency syndrome, or AIDS, is a new illness that occurs in previously healthy individuals. It is characterized by immunodeficiency, opportunistic infections and unusual malignant diseases. Life-threatening single or multiple infections with viruses, mycobacteria, fungi or protozoa are common. A rare neoplasm, Kaposi's sarcoma, has developed in approximately one third of patients with AIDS. More than 800 cases of AIDS have been reported in North America, over 24 of them in Canada. The majority of patients are male homosexuals, although AIDS has also developed in abusers of intravenously administered drugs, Haitian immigrants, individuals with hemophilia, recipients of blood transfusions, prostitutes, and infants, spouses and partners of patients with AIDS. The cause of AIDS is unknown, but the features are consistent with an infectious process. Early diagnosis can be difficult owing to the nonspecific symptoms and signs of the infections and malignant diseases. Therefore, vigilance by physicians is of utmost importance. PMID:6342737
Gilmore, N.J.; Beaulieu, R.; Steben, M.; Laverdière, M.
Acquired immunodeficiency syndrome, or AIDS, is a new illness that occurs in previously healthy individuals. It is characterized by immunodeficiency, opportunistic infections and unusual malignant diseases. Life-threatening single or multiple infections with viruses, mycobacteria, fungi or protozoa are common. A rare neoplasm, Kaposi's sarcoma, has developed in approximately one third of patients with AIDS. More than 800 cases of AIDS have been reported in North America, over 24 of them in Canada. The majority of patients are male homosexuals, although AIDS has also developed in abusers of intravenously administered drugs, Haitian immigrants, individuals with hemophilia, recipients of blood transfusions, prostitutes, and infants, spouses and partners of patients with AIDS. The cause of AIDS is unknown, but the features are consistent with an infectious process. Early diagnosis can be difficult owing to the nonspecific symptoms and signs of the infections and malignant diseases. Therefore, vigilance by physicians is of the utmost importance. PMID:1544049
The insurance and medical aid industries reacted strongly in the 1980s to alarmist predictions of the likely impact of HIV upon employee benefits. Actuaries and accountants moved quickly to contain the risk, and most medical aid trustees quickly implemented a total exclusion of HIV treatment from their benefits. For more than a decade, it was argued that HIV/AIDS is a self-inflicted illness, often categorized with other STDs. In response, healthcare providers simply bypassed insurance restrictions and compensation limits by masking patient diagnoses to reflect pneumonia or other ambiguous, yet fully reimbursable, illnesses. Now, common sense has finally prevailed as a few managed healthcare programs are stepping forward to break the impasse. The largest such program is Aid for AIDS, run by Pharmaceutical Benefit Management Ltd. for schemes within the Medscheme Group. The Group built an entirely new, secure unit off-site from their normal branches to guarantee the confidentiality of patients' records and diagnoses, while treatment guidelines have been issued to every practicing physician in the country.
Ijsselmuiden, C; Evian, C; Matjilla, J; Steinberg, M; Schneider, H
The National AIDS Convention in South Africa (NACOSA) in October 1992 was the first real attempt to address HIV/AIDS. In Soweto, government, the African National Congress, nongovernmental organizations, and organized industry and labor representatives worked for 2 days to develop a national plan of action, but it did not result in a united effort to fight AIDS. The highest HIV infection rates in South Africa are among the KwaZulu in Natal, yet the Inkatha Freedom Party did not attend NACOSA. This episode exemplifies the key obstacles South Africa faces in preventing and controlling AIDS. Inequality of access to health care may explain why health workers did not diagnose the first AIDS case in blacks until 1985. Migrant labor, Bantu education, and uprooted communities affect the epidemiology of HIV infection. Further, political and social polarization between blacks and whites contributes to a mindset that AIDS is limited to the other race, which only diminishes the personal and collective sense of susceptibility and the volition and aptitude to act. The Department of National Health and Population Development's voluntary register of anonymously reported cases of AIDS specifies 1517 cumulative AIDS cases (October 1992), but this number is low. Seroprevalence studies show between 400,000 and 450,000 HIV positive cases. Public hospitals cannot give AIDS patients AZT and DDI. Few communities provided community-based care. Not all hospitals honor confidentiality and patients' need for autonomy. Even though HIV testing is not mandatory, it is required sometimes, e.g., HIV testing of immigrants. AIDS Training, Information and Counselling Centers are in urban areas, but not in poor areas where the need is most acute. The government just recently developed an AIDS education package for schools, but too many people consider it improper, so it is not being used. The poor quality education provided to blacks would make it useless anyhow. Lifting of the academic boycott will allow South African
Noguera, José M.; Correyero, Beatriz
After the consolidation of weblogs as interactive narratives and producers, audiovisual formats are gaining ground on the Web. Videos are spreading all over the Internet and establishing themselves as a new medium for political propaganda within social media, with tools as powerful as YouTube. This investigation proceeds in two stages: on one hand, we examine how these audiovisual formats enjoyed an enormous amount of attention in blogs during the Spanish pre-electoral campaign for the elections of March 2008. On the other hand, this article investigates the social impact of this phenomenon using data from a content analysis of the blog discussion related to these videos, centered on the most popular Spanish political blogs. We also study when audiovisual political messages (made by politicians or by users) are "born" and "die" on the Web, and by what kinds of rules they do so.
Doesburg, Sam M; Emberson, Lauren L; Rahi, Alan; Cameron, David; Ward, Lawrence M
Real-world speech perception relies on both auditory and visual information that fall within the tolerated range of temporal coherence. Subjects were presented with audiovisual recordings of speech that were offset by either 30 or 300 ms, leading to perceptually coherent or incoherent audiovisual speech, respectively. We provide electroencephalographic evidence of a phase-synchronous gamma-oscillatory network that is transiently activated by the perception of audiovisual speech asynchrony, showing both topological and time-course correspondence to networks reported in previous neuroimaging research. This finding addresses a major theoretical hurdle regarding the mechanism by which distributed networks serving a common function achieve transient functional integration. Moreover, this evidence illustrates an important dissociation between phase-synchronization and stimulus coherence, highlighting the functional nature of network-based synchronization.
Madsen, Sara M K; Moore, Brian C J
The signal processing and fitting methods used for hearing aids have mainly been designed to optimize the intelligibility of speech. Little attention has been paid to the effectiveness of hearing aids for listening to music. Perhaps as a consequence, many hearing-aid users complain that they are not satisfied with their hearing aids when listening to music. This issue inspired the Internet-based survey presented here. The survey was designed to identify the nature and prevalence of problems associated with listening to live and reproduced music with hearing aids. Responses from 523 hearing-aid users to 21 multiple-choice questions are presented and analyzed, and the relationships between responses to questions regarding music and questions concerned with information about the respondents, their hearing aids, and their hearing loss are described. Large proportions of the respondents reported that they found their hearing aids to be helpful for listening to both live and reproduced music, although less so for the former. The survey also identified problems such as distortion, acoustic feedback, insufficient or excessive gain, unbalanced frequency response, and reduced tone quality. The results indicate that the enjoyment of listening to music with hearing aids could be improved by an increase of the input and output dynamic range, extension of the low-frequency response, and improvement of feedback cancellation and automatic gain control systems. PMID:25361601
Kirkman, M B; Bell, S K
AIDS has created many challenges for those who provide care for AIDS patients. One major challenge has been the request of many public officials for healthcare professionals to abandon the traditional view of confidentiality and to reveal AIDS patients' names. This ethical dilemma is explored and some ethical theories are presented as possible resolutions. The conclusion presented is that healthcare professionals must recognize that the power of the healthcare system over an AIDS patient is immense. Therefore, healthcare professionals must make a commitment to protect the patient's right to privacy by preventing any unauthorized disclosure at all costs.
Ghazanfar, Asif A; Chandrasekaran, Chandramouli; Morrill, Ryan J
Audiovisual speech has a stereotypical rhythm that is between 2 and 7 Hz, and deviations from this frequency range in either modality reduce intelligibility. Understanding how audiovisual speech evolved requires investigating the origins of this rhythmic structure. One hypothesis is that the rhythm of speech evolved through the modification of some pre-existing cyclical jaw movements in a primate ancestor. We tested this hypothesis by investigating the temporal structure of lipsmacks and teeth-grinds of macaque monkeys and the neural responses to these facial gestures in the superior temporal sulcus (STS), a region implicated in the processing of audiovisual communication signals in both humans and monkeys. We found that both lipsmacks and teeth-grinds have consistent but distinct peak frequencies and that both fall well within the 2-7 Hz range of mouth movements associated with audiovisual speech. Single neurons and local field potentials of the STS of monkeys readily responded to such facial rhythms, but also responded just as robustly to yawns, a nonrhythmic but dynamic facial expression. All expressions elicited enhanced power in the delta (0-3Hz), theta (3-8Hz), alpha (8-14Hz) and gamma (> 60 Hz) frequency ranges, and suppressed power in the beta (20-40Hz) range. Thus, STS is sensitive to, but not selective for, rhythmic facial gestures. Taken together, these data provide support for the idea that audiovisual speech evolved (at least in part) from the rhythmic facial gestures of an ancestral primate and that the STS was sensitive to and thus 'prepared' for the advent of rhythmic audiovisual communication. PMID:20584185
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was also elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097
Talsma, Durk; Senkowski, Daniel; Woldorff, Marty G
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125-75 ms, by 75-25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared among each other and with the ERPs to the unisensory visual control stimuli, separately for when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when the auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been formed.
Lee, D; Pollock, S; Makhija, K; Keall, P; Greer, P; Arm, J; Hunter, P; Kim, T
Purpose: To investigate whether the breathing-guidance system, audiovisual (AV) biofeedback, improves tumor motion consistency for lung cancer patients, thereby minimizing respiratory-induced tumor motion variations across cancer imaging and radiotherapy procedures. This is the first study to investigate the impact of respiratory guidance on tumor motion. Methods: Tumor motion consistency was investigated in five lung cancer patients (age: 55 to 64), who underwent a training session to familiarize themselves with AV biofeedback, followed by two MRI sessions on different dates (pre and mid treatment). During the training session in a CT room, two patient-specific breathing patterns were obtained before (Breathing-Pattern-1) and after (Breathing-Pattern-2) training with AV biofeedback. In each MRI session, four MRI scans were performed to obtain 2D coronal and sagittal image datasets in free breathing (FB) and with AV biofeedback utilizing Breathing-Pattern-2. Tumor motion was extracted from image pixel values after per-dataset normalization and per-image Gaussian filtering. Tumor motion consistency in the superior-inferior (SI) direction was evaluated in terms of average tumor motion range and period. Results: Audiovisual biofeedback improved tumor motion consistency by 60% (p value = 0.019), from 1.0±0.6 mm (FB) to 0.4±0.4 mm (AV) in SI motion range, and by 86% (p value < 0.001), from 0.7±0.6 s (FB) to 0.1±0.2 s (AV) in period. Conclusion: This study demonstrated that audiovisual biofeedback improves both breathing pattern and tumor motion consistency for lung cancer patients. These results suggest that AV biofeedback has the potential to facilitate reproducible tumor motion towards achieving more accurate medical imaging and radiation therapy procedures.
Barja, Salesa; Muñoz, Carolina; Cancino, Natalia; Núñez, Alicia; Ubilla, Mario; Sylleros, Rodrigo; Riveros, Rodrigo; Rosas, Ricardo
Introduction. Children with neurological diseases that cause severe limitation of mobility have a poor quality of life (QoL). Objective. To study whether the QoL of these patients improves with the application of an audiovisual stimulation program. Patients and methods. Prospective study of nine children, six of them boys (mean age: 42.6 ± 28.6 months), with severe limitation of mobility and prolonged hospitalization. Two audiovisual stimulation programs were developed and, together with videos, delivered through a specially designed structure. The frequency was twice a day, for 10 minutes, over 20 days. During the first ten days stimulation was passive; during the second ten days it was guided by the observer. Biological, behavioral, and cognitive variables were recorded, and an adapted QoL survey was administered. Results. Three cases of spinal muscular atrophy, two of congenital muscular dystrophy, two of myopathy, and two with other diagnoses were included. Eight patients completed follow-up. At baseline they presented fair QoL (7.2 ± 1.7 points; median: 7.0; range: 6-10), which improved to good by the end of the program (9.4 ± 1.2 points; median: 9.0; range: 8-11), with an intra-individual difference of 2.1 ± 1.6 (median: 2.5; range: -1 to 4; 95% CI = 0.83-3.42; p = 0.006). Improvement in cognition and a favorable perception by caregivers were detected. There was no change in the biological or behavioral variables. Conclusion. Audiovisual stimulation can improve the quality of life of children with severe limitation of mobility.
Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7-8-year-olds, 10-11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether non-verbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2 kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs): 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition), while in the other half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of reaction times at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10-11-year-olds outperforming 7-8-year-olds at the 300-500 ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function, such as autism, specific language impairment, and dyslexia, may be compared. PMID:26569563
Shakula, A V; Emel'ianov, G A
The present study was designed to evaluate the effectiveness of audiovisual stimulation on the state of the eye accommodation system in patients experiencing eye strain with concomitant psychological disturbances. It was shown that a course of audiovisual stimulation (viewing a psychorelaxing film accompanied by appropriate music) results in positive dynamics of the objective accommodation parameters (5.9-21.9%) and of the subjective status (4.5-33.2%). Taken together, these findings allow this method to be regarded as a "relaxing preparation" within the integral complex of measures for the preservation of professional vision in this group of patients.
Rosen, Sydney; Simon, Jonathon; Vincent, Jeffrey R; MacLeod, William; Fox, Matthew; Thea, Donald M
If your company operates in a developing country, AIDS is your business. While Africa has received the most attention, AIDS is also spreading swiftly in other parts of the world. Russia and Ukraine had the fastest-growing epidemics last year, and many experts believe China and India will suffer the next tidal wave of infection. Why should executives be concerned about AIDS? Because it is destroying the twin rationales of globalization strategy (cheap labor and fast-growing markets) in countries where people are heavily affected by the epidemic. Fortunately, investments in programs that prevent infection and provide treatment for employees who have HIV/AIDS are profitable for many businesses; that is, they lead to savings that outweigh the programs' costs. Due to the long latency period between HIV infection and the onset of AIDS symptoms, a company is not likely to see any of the costs of HIV/AIDS until five to ten years after an employee is infected. But executives can calculate the present value of epidemic-related costs by using the discount rate to weigh each cost according to its expected timing. That allows companies to think about expenses on HIV/AIDS prevention and treatment programs as investments rather than merely as costs. The authors found that the annual cost of AIDS to six corporations in South Africa and Botswana ranged from 0.4% to 5.9% of the wage bill. All six companies would have earned positive returns on their investments if they had provided employees with free treatment for HIV/AIDS in the form of highly active antiretroviral therapy (HAART), according to the mathematical model the authors used. The annual reduction in the AIDS "tax" would have been as much as 40.4%. The authors' conclusion? Fighting AIDS not only helps those infected; it also makes good business sense.
He, Pengbo; Li, Qiang; Zhao, Ting; Liu, Xinguo; Dai, Zhongying; Ma, Yuanyuan
A synchrotron-based heavy-ion accelerator operates in pulse mode at a low repetition rate that is comparable to a patient's breathing rate. To overcome inefficiencies and interplay effects between the residual motion of the target and the scanned heavy-ion beam delivery process in conventional free breathing (FB)-based gating therapy, a novel respiratory guidance method was developed to help patients synchronize their breathing patterns with the synchrotron excitation patterns by performing short breath holds with the aid of a personalized audio-visual biofeedback (BFB) system. The purpose of this study was to evaluate the treatment precision, efficiency, and reproducibility of the respiratory guidance method in scanned heavy-ion beam delivery mode. Using 96 breathing traces from eight healthy volunteers who were asked to breathe freely and then guided to perform short breath holds with the aid of BFB, a series of dedicated four-dimensional dose calculations (4DDC) was performed on a geometric model developed under the assumption of a linear relationship between external surrogate and internal tumor motions. The outcome of the 4DDCs was quantified in terms of treatment time, dose-volume histograms (DVH), and a dose homogeneity index. Our results show that with the respiratory guidance method the treatment efficiency increased by a factor of 2.23-3.94 compared with FB gating, depending on the duty cycle settings. The magnitude of dose inhomogeneity for the respiratory guidance method was 7.5 times less than that of non-gated irradiation, and good reproducibility of breathing guidance among different fractions was achieved. Thus, our study indicates that the respiratory guidance method not only improved the overall treatment efficiency of respiratory-gated scanned heavy-ion beam delivery, but also offered lower dose uncertainty and better reproducibility among fractions.
Center for Population Options, Washington, DC.
The three fact sheets presented in this document address issues surrounding adolescent sexuality and sexually transmitted diseases (STDs), especially the Acquired Immune Deficiency Syndrome (AIDS). The first fact sheet, "Young Women and AIDS: A Worldwide Perspective," suggests that since open discussions of adolescent sexuality have long been…
Describes the varied kinds of student aid fraud found to be occurring within and outside colleges and universities, and examines implications for public policy on student aid programs. Discusses specific fraud cases and their outcomes, and makes suggestions for institutional action if student fraud is suspected. (MSE)
Describes the structure and modes of operation of the Bundessprachenamt's (BSprA: Federal Office of Languages of the Federal Republic of Germany) terminology data bank as an aid to translation. Analyzes advantages and disadvantages of each user mode, and discusses probable developments in the immediate future of machine-aided translation. (MES)
Recent evidence highlights several worrisome trends regarding aid pledges and disbursements, which have been exacerbated by the global financial crisis. First, while overall development assistance rose in 2008, after 2 years of decline, the share of all sector aid going to the education sector has remained virtually unchanged at about 12 percent…
Scholarly interest in Acquired Immune Deficiency Syndrome (AIDS) has spread throughout the humanities, attracting the attention of historians of medicine, political scientists, sociologists, public health scholars, and anthropologists. Most theorists hope their research will aid in policymaking or change understanding of the epidemic. (MSE)
First Aid: Diaper Rash. Diaper rash is a common skin condition in babies. ... rash is due to irritation caused by the diaper, but it can have other causes not related ...
Iowa Univ., Iowa City. Coll. of Education.
This report presents results of a project to revise the current 120-hour advanced nurse aide course to include all recommended minimum competencies. A three-page description of project objectives, activities, and outcomes is followed by a list of the competencies for the 75-hour nurse aide course for long-term care and for the 120-hour advanced…
House, Reese M.; Walker, Catherine M.
Compares the Acquired Immune Deficiency Syndrome (AIDS) epidemic to past epidemics, including social and political responses. Identifies populations at risk for human immunodeficiency virus (HIV) infection. Discusses current social and economic factors affecting AIDS education programs. Makes recommendations and identifies resources for starting…
Poehlmann, Karl Horst
AIDS is now said to threaten humanity as a modern-day scourge. However, there is a group of scientists which maintains that this scare is unwarranted. Some arguments in favour of the re-evaluation of the AIDS hypothesis are presented in this article. PMID:22556763
Lejeune, Genevieve, Ed.
This journal issue is devoted to the many problems faced by children with Acquired Immune Deficiency Syndrome (AIDS) who live in both developing and developed countries. Section 1 provides general information on the pandemic, defining AIDS and exploring the social aspects of the disease. It also addresses child health, child mortality, moral and…
Klees, Steven J.
The world faces pervasive poverty and inequality. Hundreds of billions of dollars in international aid have been given or loaned to developing countries though bilateral and multilateral mechanisms, at least, ostensibly, in order to do something about these problems. Has such aid helped? Debates around this question have been ongoing for decades,…
Pohl, Melvin I.
After defining HIV and the AIDS disease and outlining symptoms and means of infection, this fact sheet lists the ways alcohol and drugs are involved with the AIDS epidemic, noting that needle-sharing transmits the virus; that alcohol or mood-altering drugs like crack cocaine cause disinhibition, increase sex drive, encourage sex for drugs, and…
First Aid: Falls. With all the running, climbing, and exploring kids ...
CPI's human-implantable automatic implantable defibrillator (AID) is a heart assist system, derived from NASA space circuitry technology, that can prevent the erratic heart action known as arrhythmia. The implanted AID, consisting of a microcomputer, a power source, and two electrodes for sensing heart activity, recognizes the onset of ventricular fibrillation (VF) and delivers a corrective electrical countershock to restore rhythmic heartbeat.
Inertially Aided Robotics: Final Report for Contract No. DAAHO1-88-D-0057 (December 31, 1989). Contents include "Advantages of Inertially Aided Robotics" and figures such as Figure 1, "Robot Manipulator having Joint Sensor Based Control."
Huddleston, Thomas, Jr.; Batty, Burt F.
Student financial assistance services are becoming a major part of the institutional marketing plan as traditional college-age students decline in numbers and price competition among institutions increases. The effect of financial aid on enrollment and admissions processes is discussed along with the role of the financial aid officer. (Author/LBH)
Chambliss, Catherine; And Others
Since assuring quality health care delivery to patients suffering from Acquired Immunodeficiency Syndrome (AIDS) and those who test positive for Human Immunodeficiency Virus (HIV) is a priority, development of effective staff training methods is imperative. This pilot study assessed the effect on staff attitudes of a participative AIDS/HIV staff…
Describes the DACUM (Developing A CurriculUM) process and how it is used at Universal Technical Institute to determine what types of training aids to produce. Indicates that examining the employment needs of industry and educational needs of students enhances programs and promotes development of innovative aids. (JOW)
First Aid: Burns. Scald burns from hot water and other liquids are the most common burns in early childhood. Because burns range from mild ...
Merit aid, a discount to college costs contingent upon academic performance, is nothing new. Colleges and private organizations have long rewarded high-achieving, college-bound high school students with scholarships. While merit aid has a long history in the private sector, it has not played a major role in the public sector. At the state level,…
Rahmani, Fouad Lazhar
The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on models suggested in the field of AIDS mathematical modelling, as reported by Isham.
Huminer, D; Rosenfeld, J B; Pitlik, S D
A search of the medical literature published since 1950 disclosed 19 cases of probable AIDS reported before the start of the current epidemic. These cases retrospectively met the Centers for Disease Control's surveillance definition of the syndrome and had a clinical course suggestive of AIDS. The reports originated from North America, Western Europe, Africa, and the Middle East. The mean age of patients was 37 years, and the ratio of male to female patients was 1.7:1. Sixteen patients had opportunistic infection(s) without Kaposi's sarcoma. The remainder had disseminated Kaposi's sarcoma. The commonest opportunistic infection was Pneumocystis carinii pneumonia. Two patients were reported to be homosexual. Three others had been living in Africa, and one patient was born in Haiti. In two instances concurrent or subsequent opportunistic infection occurred in family members. All patients died 1 month to 6 years after the initial manifestation of disease. In view of the historical data, unrecognized cases of AIDS appear to have occurred sporadically in the pre-AIDS era.
Miller, R D
As part of a national effort to improve efficiency in court procedures, the American Bar Association has recommended, on the basis of a number of pilot studies, increased use of current audio-visual technology, such as telephone and live video communication, to eliminate delays caused by unavailability of participants in both civil and criminal procedures. Although these recommendations were made to facilitate court proceedings, and for the convenience of attorneys and judges, they also have the potential to save significant time for clinical expert witnesses. The author reviews the studies of telephone testimony conducted by the American Bar Association and other legal research groups, as well as the experience of one state forensic evaluation and treatment center. He also reviews the case law on the issue of remote testimony. He then presents data from a national survey of state attorneys general concerning the admissibility of testimony via audio-visual means, including video depositions. Finally, he concludes that the option to testify by telephone provides significant savings in precious clinical time for forensic clinicians in public facilities, and urges such clinicians to work actively to convince courts and/or legislatures in states that do not permit such testimony (currently the majority) to consider accepting it, to improve the effective use of scarce clinical resources in public facilities.
Nardini, Marko; Bales, Jennifer; Mareschal, Denis
In adults, decisions based on multisensory information can be faster and/or more accurate than those relying on a single sense. However, this finding varies significantly across development. Here we studied speeded responding to audio-visual targets, a key multisensory function whose development remains unclear. We found that when judging the locations of targets, children aged 4 to 12 years and adults had faster and less variable response times given auditory and visual information together compared with either alone. Comparison of response time distributions with model predictions indicated that children at all ages were integrating (pooling) sensory information to make decisions but that both the overall speed and the efficiency of sensory integration improved with age. The evidence for pooling comes from comparison with the predictions of Miller's seminal 'race model', as well as with a major recent extension of this model and a comparable 'pooling' (coactivation) model. The findings and analyses can reconcile results from previous audio-visual studies, in which infants showed speed gains exceeding race model predictions in a spatial orienting task (Neil et al., 2006) but children below 7 years did not in speeded reaction time tasks (e.g. Barutchu et al., 2009). Our results provide new evidence for early and sustained abilities to integrate visual and auditory signals for spatial localization from a young age.
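The race-model comparison described in this abstract has a standard numerical form: the empirical cumulative distribution function (CDF) of audiovisual reaction times is tested against Miller's bound, min(1, F_A(t) + F_V(t)). A minimal sketch of that test, not the authors' code (the function names and the synthetic reaction times below are illustrative):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical CDF of a sample of reaction times, evaluated at t_grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Times at which the audiovisual CDF exceeds Miller's race-model
    bound min(1, F_A(t) + F_V(t)); any such time is evidence for
    pooling (coactivation) rather than a race between channels."""
    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
    return t_grid[ecdf(rt_av, t_grid) > bound]
```

If the multisensory CDF never exceeds the bound, the observed speed-up is compatible with statistical facilitation alone; violations at any time point are the classic behavioral evidence for integration.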
Niederhuber, M.; Brugger, S.
A new audio-visual learning medium has been developed at the Department of Environmental Sciences at ETH Zurich (Switzerland) for use in geographical information sciences (GIS) courses. This new medium, presented in the form of Repetition Units, allows students to review and consolidate the most important learning concepts on an individual basis. The new material consists of: a) a short enhanced podcast (recorded and spoken slide show) with a maximum duration of 5 minutes, which focuses on only one important aspect of a lecture's theme; b) one or two relevant exercises, covering different cognitive levels of learning, with a maximum duration of 10 minutes; and c) solutions for the exercises. During a pilot phase in 2010, six Repetition Units were produced by the lecturers. Twenty more Repetition Units will be produced by our students during the fall semesters of 2011 and 2012. The project is accompanied by a 5-year study (2009-2013) that investigates learning success with the new material, focusing on the question of whether the new material helps to consolidate and refresh basic GIS knowledge; this will be analysed in longitudinal studies. Initial results indicate that the new medium helps to refresh knowledge, as the test groups scored higher than the control group. These results are encouraging and suggest that the new material, with its combination of short audio-visual podcasts and relevant exercises, helps to consolidate students' knowledge.
Xie, Zilong; Yi, Han-Gyol; Chandrasekaran, Bharath
Nonnative speech poses a challenge to speech perception, especially in challenging listening environments. Audiovisual (AV) cues are known to improve native speech perception in noise. The extent to which AV cues benefit nonnative speech perception in noise, however, is much less well understood. Here, we examined native American English-speaking and native Korean-speaking listeners' perception of English sentences produced by a native American English speaker and a native Korean speaker across a range of signal-to-noise ratios (SNRs; -4 to -20 dB) in audio-only and audiovisual conditions. We employed psychometric function analyses to characterize the pattern of AV benefit across SNRs. For native English speech, the largest AV benefit occurred at an intermediate SNR (-12 dB); for nonnative English speech, the largest AV benefit occurred at a higher SNR (-4 dB). The psychometric function analyses demonstrated that the AV benefit patterns differed between native and nonnative English speech. The nativeness of the listener exerted negligible effects on the AV benefit across SNRs. However, the nonnative listeners' ability to gain AV benefit from native English speech was related to their proficiency in English. These findings suggest that the native language backgrounds of both the speaker and the listener modulate the optimal use of AV cues in speech recognition.
Galbrun, Laurent; Calarco, Francesca M A
This paper examines the audio-visual interaction and perception of water features used over road traffic noise, including their semantic aural properties, as well as their categorization and evocation properties. The research focused on a wide range of small to medium sized water features that can be used in gardens and parks to promote peacefulness and relaxation. Paired comparisons highlighted the inter-dependence between uni-modal (audio-only or visual-only) and bi-modal (audio-visual) perception, indicating that equal attention should be given to the design of both stimuli. In general, natural looking features tended to increase preference scores (compared to audio-only paired comparison scores), while manmade looking features decreased them. Semantic descriptors showed significant correlations with preferences and were found to be more reliable design criteria than physical parameters. A principal component analysis identified three components within the nine semantic attributes tested: "emotional assessment," "sound quality," and "envelopment and temporal variation." The first two showed significant correlations with audio-only preferences, "emotional assessment" being the most important predictor of preferences, and its attributes naturalness, relaxation, and freshness also being significantly correlated with preferences. Categorization results indicated that natural stream sounds are easily identifiable (unlike waterfalls and fountains), while evocation results showed no unique relationship with preferences.
Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong
Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes.
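The perceptual sensitivity measure d' used in this abstract comes from signal detection theory: it is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch, not the authors' analysis code (the log-linear correction shown is one common convention, not necessarily theirs):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 to each count) keeps the rates
    away from 0 and 1, where the z-transform would be infinite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

A d' of 0 means targets and non-targets are indistinguishable; larger values mean better discrimination, independently of any response bias.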
Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru
A human being understands the objects in the environment by integrating information obtained through the senses of sight, hearing, and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses general grouping rules from Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets, and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object such as a bell and shakes it, or grasps an object such as a stick and beats a drum, in periodic or non-periodic motion, so that the object emits periodic or non-periodic events. To create a more realistic scenario, we placed another event source (a metronome) in the environment. As a result, we achieved a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) relating to robot motion (efferent signals).
Colomer Granero, Adrián; Fuentes-Hurtado, Félix; Naranjo Ornedo, Valery; Guixeres Provinciale, Jaime; Ausín, Jose M.; Alcañiz Raya, Mariano
This work focuses on finding the most discriminative or representative features for classifying commercials as having negative, neutral, or positive effectiveness based on the Ace Score index. For this purpose, an experiment involving forty-seven participants was carried out. In this experiment electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR), and respiration data were acquired while subjects watched 30 minutes of audiovisual content, composed of a submarine documentary and nine commercials (one of them the ad under evaluation). After signal pre-processing, four sets of features were extracted from the physiological signals using different state-of-the-art metrics. These features, computed in the time and frequency domains, are the inputs to several basic and advanced classifiers. An average of 89.76% of the instances were correctly classified according to the Ace Score index. The best results were obtained by a classifier combining AdaBoost and Random Forest with automatic feature selection. The selected features were those extracted from the GSR and HRV signals. These results are promising for audiovisual content evaluation by means of physiological signal processing. PMID:27471462
Golovin, M S; Golovin, M S; Aizman, R I
We studied the effect of an audiovisual stimulation training course on the physical development, functional state of the cardiovascular system, blood biochemical parameters, and hormonal status of athletes. The training course led to improvement of physical performance and the adaptive capacities of the circulatory system; increases in plasma levels of total protein, albumin, and glucose and in total antioxidant activity; and decreases in triglycerides, lipase, total bilirubin, calcium, and phosphorus. The concentrations of hormones (cortisol, thyrotropin, triiodothyronine, and thyroxine) also decreased under these conditions. In the control group, an increase in the concentrations of creatinine and uric acid and a tendency toward elevation of low-density lipoproteins and total antioxidant activity were observed in the absence of changes in cardiac function and physical performance; calcium and phosphorus concentrations decreased. The improvement in the functional state of the athletes was mainly associated with intensification of anabolic processes and suppression of catabolic reactions after audiovisual stimulation (in comparison with the control). Stimulation was followed by an increase in the number of correlations between biochemical and hormonal changes and the physical performance of athletes, which attested to better integration of processes at the intersystem level.
Watson, Rebecca; Latinus, Marianne; Charest, Ian; Crabbe, Frances; Belin, Pascal
The functional role of the superior temporal sulcus (STS) has been implicated in a number of studies, including those investigating face perception, voice perception, and face–voice integration. However, the nature of the STS preference for these ‘social stimuli’ remains unclear, as does the location within the STS for specific types of information processing. The aim of this study was to directly examine properties of the STS in terms of selective response to social stimuli. We used functional magnetic resonance imaging (fMRI) to scan participants whilst they were presented with auditory, visual, or audiovisual stimuli of people or objects, with the intention of localising areas preferring both faces and voices (i.e., ‘people-selective’ regions) and audiovisual regions designed to specifically integrate person-related information. Results highlighted a ‘people-selective, heteromodal’ region in the trunk of the right STS which was activated by both faces and voices, and a restricted portion of the right posterior STS (pSTS) with an integrative preference for information from people, as compared to objects. These results point towards the dedicated role of the STS as a ‘social-information processing’ centre. PMID:23988132
Hadjidimitriou, S; Zacharakis, A; Doulgeris, P; Panoulas, K; Hadjileontiadis, L; Panas, S
Sensorimotor activity in response to motion reflecting audiovisual stimulation is studied in this article. EEG recordings, and especially the Mu-rhythm over the sensorimotor cortex (C3, CZ, and C4 electrodes), were acquired and explored. An experiment was designed to provide auditory (Modest Mussorgsky's "Promenade" theme) and visual (a synchronized walking human figure) stimuli to advanced music students (AMS) and to non-musicians (NM) as a control group. EEG signals were analyzed using fractal dimension (FD) estimation (Higuchi's, Katz's, and Petrosian's algorithms) and statistical methods. Experimental results from the midline electrode (CZ) based on the Higuchi method showed significant differences between the AMS and NM groups, with the former displaying a substantial sensorimotor response during auditory stimulation and a stronger correlation with the acoustic stimulus than the latter. This observation was linked to mirror neuron system activity, a neurological mechanism that allows trained musicians to detect action-related meanings underlying the structural patterns in musical excerpts. In contrast, the responses of AMS and NM converged during audiovisual stimulation due to the dominant presence of human-like motion in the visual stimulus. These findings shed light upon aspects of music perception, exhibiting the potential of FD to respond to different states of cortical activity.
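Higuchi's estimator, one of the three FD algorithms named above, can be sketched as follows. The maximum delay kmax and the plain least-squares fit are conventional choices, not necessarily the study's exact settings:

```python
import math

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal x.

    For each delay k, the mean normalized curve length L(k) is
    computed over the k possible starting offsets; the FD is the
    least-squares slope of log L(k) against log(1/k)."""
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            count = (n - 1 - m) // k          # number of increments
            if count == 0:
                continue
            total = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                        for i in range(1, count + 1))
            # Higuchi's normalization factor
            lengths.append(total * (n - 1) / (count * k) / k)
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) versus log(1/k)
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    sxy = sum((a - mx) * (b - my) for a, b in zip(log_inv_k, log_len))
    sxx = sum((a - mx) ** 2 for a in log_inv_k)
    return sxy / sxx
```

A smooth ramp yields an FD near 1, while white noise yields an FD near 2, which is why the measure can discriminate states of cortical activity.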
Rüsseler, Jascha; Ye, Zheng; Gerth, Ivonne; Szycik, Gregor R; Münte, Thomas F
Developmental dyslexia is a specific deficit in reading and spelling that often persists into adulthood. In the present study, we used slow event-related fMRI and independent component analysis to identify brain networks involved in perception of audio-visual speech in a group of adult readers with dyslexia (RD) and a group of fluent readers (FR). Participants saw a video of a female speaker saying a disyllabic word. In the congruent condition, audio and video input were identical, whereas in the incongruent condition the two inputs differed. Participants had to respond to occasionally occurring animal names. The independent component analysis (ICA) identified several components that were differently modulated in FR and RD. Two of these components, including fusiform gyrus and occipital gyrus, showed less activation in RD compared to FR, possibly indicating a deficit in extracting the face information needed to integrate auditory and visual information in natural speech perception. A further component centered on the superior temporal sulcus (STS) also exhibited less activation in RD compared to FR. This finding is corroborated by the univariate analysis, which shows less activation in STS for RD compared to FR. These findings suggest a general impairment in recruitment of audiovisual processing areas in dyslexia during the perception of natural speech.
Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.
Advancement in brain computer interface (BCI) technology allows people to actively interact with the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduced the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and reinforce the feeling of control over the robot. Our results shed light on the possibility of improving control of a robot by combining multisensory feedback to a BCI user. PMID:24987350
Nonnative speech poses a challenge to speech perception, especially in challenging listening environments. Audiovisual (AV) cues are known to improve native speech perception in noise. The extent to which AV cues benefit nonnative speech perception in noise, however, is much less well-understood. Here, we examined native American English-speaking and native Korean-speaking listeners' perception of English sentences produced by a native American English speaker and a native Korean speaker across a range of signal-to-noise ratios (SNRs; −4 to −20 dB) in audio-only and audiovisual conditions. We employed psychometric function analyses to characterize the pattern of AV benefit across SNRs. For native English speech, the largest AV benefit occurred at an intermediate SNR (i.e., −12 dB), but for nonnative English speech, the largest AV benefit occurred at a higher SNR (−4 dB). The psychometric function analyses demonstrated that the AV benefit patterns were different between native and nonnative English speech. The nativeness of the listener exerted negligible effects on the AV benefit across SNRs. However, the nonnative listeners' ability to gain AV benefit in native English speech was related to their proficiency in English. These findings suggest that the native language background of both the speaker and listener clearly modulate the optimal use of AV cues in speech recognition. PMID:25474650
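Mixing speech with noise at a fixed SNR, as in the −4 to −20 dB conditions above, amounts to scaling the noise so that the power ratio hits the target. A minimal sketch, with plain lists standing in for real audio samples:

```python
import math

def mix_at_snr(signal, noise, snr_db):
    """Return signal + noise, with the noise gain chosen so that the
    mixture's signal-to-noise ratio equals snr_db."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(v * v for v in noise) / len(noise)
    # target noise power for the requested SNR (in dB)
    p_target = p_sig / (10 ** (snr_db / 10))
    gain = math.sqrt(p_target / p_noise)
    return [s + gain * v for s, v in zip(signal, noise)]
```

At −12 dB the noise carries roughly sixteen times the signal's power, which is why visual cues become so valuable at these levels.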
Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko
Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two alternatives. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasting in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV). Identification accuracy for these words produced by two talkers was also assessed. During the pretest, accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]
Vatakis, Argiro; Spence, Charles
This study investigated people's sensitivity to audiovisual asynchrony in briefly-presented speech and musical videos. A series of speech (letters and syllables) and guitar and piano music (single and double notes) video clips were presented randomly at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which stream (auditory or visual) appeared to have been presented first. The accuracy of participants' TOJ performance (measured in terms of the just noticeable difference; JND) was significantly better for the speech than for either the guitar or piano music video clips, suggesting that people are more sensitive to asynchrony for speech than for music stimuli. The visual stream had to lead the auditory stream for the point of subjective simultaneity (PSS) to be achieved in the piano music clips while auditory leads were typically required for the guitar music clips. The PSS values obtained for the speech stimuli varied substantially as a function of the particular speech sound presented. These results provide the first empirical evidence regarding people's sensitivity to audiovisual asynchrony for musical stimuli. Our results also demonstrate that people's sensitivity to asynchrony in speech stimuli is better than has been suggested on the basis of previous research using continuous speech streams as stimuli.
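The JND and PSS reported above are typically read off a psychometric function fitted to the proportion of "visual first" responses at each SOA. A minimal probit-fit sketch; the cumulative-Gaussian model and the 75% criterion are conventional assumptions, not details taken from the paper:

```python
from statistics import NormalDist

def fit_pss_jnd(soas, p_visual_first):
    """Fit a cumulative Gaussian to TOJ data by probit regression.

    Returns (PSS, JND): the PSS is the SOA at which 'visual first'
    is reported 50% of the time; the JND is taken as the distance
    from the 50% to the 75% point, i.e. ~0.674 * sigma."""
    nd = NormalDist()
    # probit transform (proportions clipped away from 0 and 1)
    z = [nd.inv_cdf(min(max(p, 0.01), 0.99)) for p in p_visual_first]
    n = len(soas)
    mx = sum(soas) / n
    mz = sum(z) / n
    slope = (sum((s - mx) * (v - mz) for s, v in zip(soas, z))
             / sum((s - mx) ** 2 for s in soas))
    intercept = mz - slope * mx
    pss = -intercept / slope        # SOA where the fitted p = 0.5
    sigma = 1.0 / slope             # spread of the fitted Gaussian
    jnd = nd.inv_cdf(0.75) * sigma
    return pss, jnd
```

A shallower fitted slope yields a larger JND, i.e. poorer sensitivity to asynchrony, which is how the speech-versus-music comparison above is quantified.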
Alm, Magnus; Behne, Dawn M; Wang, Yue; Eg, Ragnhild
Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and the speech target. Voiced consonants may be more auditorily salient than voiceless consonants, which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.
Dov, David; Talmon, Ronen; Cohen, Israel
In this paper, we address the problem of multiple view data fusion in the presence of noise and interferences. Recent studies have approached this problem using kernel methods, by relying particularly on a product of kernels constructed separately for each view. From a graph theory point of view, we analyze this fusion approach in a discrete setting. More specifically, based on a statistical model for the connectivity between data points, we propose an algorithm for the selection of the kernel bandwidth, a parameter, which, as we show, has important implications on the robustness of this fusion approach to interferences. Then, we consider the fusion of audio-visual speech signals measured by a single microphone and by a video camera pointed to the face of the speaker. Specifically, we address the task of voice activity detection, i.e., the detection of speech and non-speech segments, in the presence of structured interferences such as keyboard taps and office noise. We propose an algorithm for voice activity detection based on the audio-visual signal. Simulation results show that the proposed algorithm outperforms competing fusion and voice activity detection approaches. In addition, we demonstrate that a proper selection of the kernel bandwidth indeed leads to improved performance.
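The product-of-kernels fusion analyzed above can be illustrated with Gaussian affinities. The toy one-dimensional "views" and the bandwidth values here are purely illustrative:

```python
import math

def gaussian_kernel(points, sigma):
    """Pairwise Gaussian affinity matrix for a single view."""
    n = len(points)
    return [[math.exp(-(points[i] - points[j]) ** 2 / (2 * sigma ** 2))
             for j in range(n)] for i in range(n)]

def product_kernel(view_a, view_b, sigma_a, sigma_b):
    """Fuse two views by multiplying their kernels elementwise:
    two samples stay strongly connected only if they are close
    in BOTH views."""
    ka = gaussian_kernel(view_a, sigma_a)
    kb = gaussian_kernel(view_b, sigma_b)
    n = len(view_a)
    return [[ka[i][j] * kb[i][j] for j in range(n)] for i in range(n)]
```

Because affinities multiply, an interference visible in only one view (e.g. keyboard taps in the audio) cannot by itself create a strong connection; the bandwidth sigma controls how quickly affinity decays and is the parameter whose selection the paper addresses.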
Vatakis, Argiro; Spence, Charles
Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.
Since it tends to be significantly affected by HIV/AIDS, the tourism sector is a likely target for HIV/AIDS interventions in many countries. The tourist industry is at particular risk from the pandemic because of the mobility of the work force, the presence of sex tourists, and the heavy reliance of many countries upon tourism revenues. Indeed, tourism is one of the largest and fastest growing industries in many countries. Some people have speculated that potential tourists' fear of AIDS could discourage them from visiting certain countries, while others have even suggested that tourism should be discouraged because the industry contributes to the spread of HIV/AIDS. When traveling, tourists often take risks that they would not take at home. They tend to drink more, use drugs more, and be generally more adventurous while on holiday. Such adventures often include taking sexual risks. When tourists have sex with prostitutes, hotel staff, and others in the local population, a bridge can be created for HIV to cross back and forth between the tourist's home country and the tourist destination. The author reviews selected studies on the relationship between HIV/AIDS and tourism. Overall, the existing literature offers no definitive evidence that AIDS has had any lasting impact upon the tourism industry anywhere in the world. Rather, promoting a healthy tourism industry and HIV/AIDS prevention are likely complementary in many ways.
Elias, A M
In April 1991, the Ethnic Communities' Council of NSW was granted funding under the Community AIDS Prevention and Education Program through the Department of Community Services and Health, to produce a series of six 50-second AIDS radio triggers with a 10-second tag line for further information. The triggers are designed to disseminate culturally-sensitive information about HIV/AIDS in English, Italian, Greek, Spanish, Khmer, Turkish, Macedonian, Serbo-Croatian, Arabic, Cantonese, and Vietnamese, with the goal of increasing awareness and decreasing the degree of misinformation about HIV/AIDS among people of non-English-speaking backgrounds through radio and sound. The six triggers cover the denial that AIDS exists in the community, beliefs that words and feelings do not protect one from catching HIV, encouraging friends to be compassionate, compassion within the family, AIDS information for a young audience, and the provision of accurate and honest information on HIV/AIDS. The triggers are slated to be completed by the end of July 1991 and will be broadcast on all possible community, ethnic, and commercial radio networks across Australia. They will be available upon request in composite form with an information kit for use by health care professionals and community workers.
Perhaps more than any other disease in recent history, AIDS has taught a cruel and crucial lesson: the constraints on our response to this epidemic are as deep as our denial, as entrenched as the inequities that permeate our society, as circumscribed as our knowledge, and as unlimited as our compassion and our commitment to human rights. Elaborating on these themes, the final three articles in this Special Section on AIDS consider three widely divergent yet intimately connected topics: AIDS in Cuba, AIDS in Brazil, and global AIDS prevention in the 1990s. Together, they caution that if we persist in treating AIDS as a problem only of "others," no country will be spared the social and economic devastation that promises to be the cost of our contempt and our folly. Solidarity is not an option; it is a necessity. Without conscious recognition of the worldwide relationship between health, human rights, and social inequalities, our attempts to abate the spread of AIDS--and to ease the suffering that follows in its wake--most surely will fall short of our goals. Finally, as we mourn our dead, we must take to heart the words of Mother Jones, and "fight like hell for the living." This is the politics of survival.
Chasin, Marshall; Russo, Frank A.
Historically, the primary concern for hearing aid design and fitting is optimization for speech inputs. However, increasingly other types of inputs are being investigated and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues—both compression ratio and knee-points—and number of channels all can deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a “music program,” unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions. PMID:15497032
World political aspects and the example of Australia as a national political response to AIDS are presented. Global policy on AIDS is influenced by the fact that the AIDS epidemic is the 1st to be largely predictable, that long lag times occur between intervention and measurable events, and by the prompt, professional leadership of WHO, led by Dr. J. Mann. WHO began a Global Programme on AIDS in 1987, modelled on the responses of Canada and Australia. A world summit of Ministers of Health was convened in January 1988. These moves generated a response qualified by openness, cooperation, hope and common sense. The AIDS epidemic calls for unprecedented involvement of politicians: they must coordinate medical knowledge with community action, deal with public fear, exert strong, rational leadership and avoid quick, appealing counterproductive responses. 3 clear directions must be taken to deal with the epidemic: 1) strong research and education campaigns; 2) close contact with political colleagues, interest groups and the community; 3) a national strategy which enjoins diverse interest groups, with courage, rationality and compassion. In Australia, the AIDS response began with the unwitting infection of 3 infants by blood transfusion. A public information campaign emphasizing a penetrating TV ad campaign was instituted in 1987. Policy discussions were held in all parliamentary bodies. The AIDS epidemic demands rapid, creative responses, a break from traditions in health bureaucracy, continual scrutiny of funding procedures and administrative arrangements. In practical terms in Australia, this meant establishing a special AIDS branch within the Health Advancement Division of the Community Health Department. AIDS issues must remain depoliticized to defuse adversary politics and keep leaders in a united front.
Rozga, Agata; King, Tricia Z.; Vuduc, Richard W.; Robins, Diana L.
We examined facial electromyography (fEMG) activity to dynamic, audio-visual emotional displays in individuals with autism spectrum disorders (ASD) and typically developing (TD) individuals. Participants viewed clips of happy, angry, and fearful displays that contained both facial expression and affective prosody while surface electrodes measured…
This article attempts to offer an overview of the current changes that are being experienced in the management of audio-visual documentation and those that can be forecast in the future as a result of the migration from analogue to digital information. For this purpose the documentary chain will be used as a basis to analyse individually the tasks…
National Archives and Records Service (GSA), Washington, DC. National Audiovisual Center.
The films and filmstrips listed in this catalog are Federal records, since they document the functions and operations of Federal agencies. This is the first edition of the sales catalog for the National Audiovisual Center. It contains films categorized under 18 broad headings: agriculture, automotive, aviation, business, education and culture,…
National Education Association, Washington, DC.
This is a series of working papers aimed at audiovisual specialists. The keynote address, committee reports, and conference summary concern learning space and educational media in instructional programs. Reports deal with a behavioral analysis approach to curriculum and space considerations, sources of information and research on learning space,…
Wilbiks, Jonathan M. P.; Dyson, Benjamin J.
Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790
Wood, Milton E.; Gerlach, Vernon S.
A technique was developed for providing transfer-of-training from a form of audiovisual pretraining to an instrument flight task. The continuous flight task was broken into discrete categories of flight; each category combined an instrument configuration with a return-to-criterion aircraft control response. Three methods of sequencing categories…
The material presented here is the result of a review of the Technical Development Plan of the National Library of Medicine, made with the object of describing the role of audiovisual materials in medical education, research and service, and particularly in the continuing education of physicians and allied health personnel. A historical background…
Vazquez-Cano, Esteban; Fombona, Javier; Fernandez, Alberto
This article analyzes a system of virtual attendance, called "AVIP" (AudioVisual over Internet Protocol), at the Spanish Open University (UNED) in Spain. UNED, the largest open university in Europe, is the pioneer in distance education in Spain. It currently has more than 300,000 students, 1,300 teachers, and 6,000 tutors all over the…
Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás
Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes.
... follows: (a) USIA audiovisual records prepared for dissemination abroad that NARA determines may have copyright protection or may contain copyrighted material are provided to you if you seek the release of such materials in the United States once NARA has: (1)...
This article realized the Russian way of theological media education literacy and hermeneutic analysis of specific examples of Soviet anti-religious audiovisual media texts: a study of the process of interpretation of these media texts, cultural and historical factors influencing the views of the media agency/authors. The hermeneutic analysis…
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset.
Shockey, Carolyn, Ed.
A catalog of audio and visual materials for teaching courses on, or illustrating, all aspects of audiovisual instruction was developed, with broad coverage of those areas of interest pertinent to the field of instructional communications. The listings should be of value to the college instructor in the area of instructional materials, as well as…
Flom, Ross; Bahrick, Lorraine E.
This research examined the effects of bimodal audiovisual and unimodal visual stimulation on infants' memory for the visual orientation of a moving toy hammer following a 5-min, 2-week, or 1-month retention interval. According to the intersensory redundancy hypothesis (L. E. Bahrick & R. Lickliter, 2000; L. E. Bahrick, R. Lickliter, & R. Flom,…
Medrad utilized NASA's Apollo technology to develop a new device, the AID implantable automatic pulse generator, which monitors the heart continuously, recognizes the onset of ventricular fibrillation, and delivers a corrective electrical shock. The AID pulse generator is, in effect, a miniaturized version of the defibrillator used by emergency squads and hospitals to restore rhythmic heartbeat after fibrillation, but has the unique advantage of being permanently available to the patient at risk. Once implanted, it needs no specially trained personnel or additional equipment. The AID system consists of a microcomputer, a power source, and two electrodes that sense heart activity.
Cohen, B.A.; Pomeranz, S.; Rabinowitz, J.G.; Rosen, M.J.; Train, J.S.; Norton, K.I.; Mendelson, D.S.
Fifty-two patients with pulmonary complications of acquired immunodeficiency syndrome (AIDS) were studied over a 3-year period. The vast majority of the patients were homosexual; however, a significant number were intravenous drug abusers. Thirteen different organisms were noted, of which Pneumocystis carinii was by far the most common. Five patients had neoplasia. Most patients had initial abnormal chest films; however, eight patients subsequently shown to have Pneumocystis carinii pneumonia had normal chest films. A significant overlap in chest radiographic findings was noted among patients with different or multiple organisms. Lung biopsy should be an early consideration for all patients with a clinical history consistent with the pulmonary complications of AIDS. Of the 52 patients, 41 had died by the time this report was completed.
... known as AIDS . HIV destroys a type of defense cell in the body called a CD4 helper ... are part of the body's immune system , the defense system that fights infections. When HIV destroys these ...
Issues in Science and Technology, 1987
Contains excerpts from a special study on the AIDS epidemic by the Institute of Medicine and National Academy of Sciences. Presents an overview of the problem, outlines educational needs and public health measures, and identifies future research needs. (ML)
Convulsions - first aid - series—Procedure, part 1 (//medlineplus.gov/ency/presentations/100212.htm). Overview: When a seizure occurs, the main goal is to protect the ...
Hearing aids often develop malfunctions that are not detected by the wearer. This is particularly true when the wearers are school-age children. Studies of selected groups showed that from 30 to more than 50 percent of school children were not getting adequate benefit from their hearing aids because of unrecognized malfunctions, usually low or dead batteries. This can be serious because hearing impairment retards a child's educational progress. NASA technology incorporated in the Hearing Aid Malfunction Detection Unit (HAMDU), the device pictured, is expected to provide an effective countermeasure to the children's hearing aid problem. A patent license has been awarded to a minority-owned firm, Hopkins International Company, a subsidiary of H. H. Aerospace Design Co., Inc., Elmford, New York. The company plans early commercial availability of its version of the device.
... with each breath, has a pale or bluish color around the mouth, drools or has difficulty swallowing ...
Explores the conceptual components of a computer program designed to enhance creative thinking and reviews software that aims to stimulate creative thinking. Discusses BRAIN and ORACLE, programs intended to aid in creative problem solving. (JOW)
... HIV/AIDS ... related co-infections, such as hepatitis, malaria, and tuberculosis. Treatment of HIV Infection: In the early 1980s ...
... difficulty swallowing, becomes tired easily. Think Prevention! Frequent hand washing and avoiding contact with people who have respiratory ...
... that causes the disease AIDS. HIV Hurts the Immune System People who are HIV positive have been tested ... to everyone in the world. When the person's immune system has weakened and more of the blood's T ...
Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo
The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although the brain network is supposed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, such as single linkage distance, to analyze the connected component of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audio-visual speech cue) or unimodal speech cues with counterpart irrelevant noise (auditory white-noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationship, similar connected components were observed in bimodal and unimodal speech conditions during filtration. However, during speech perception by congruent audiovisual stimuli, the tighter couplings of left anterior temporal gyrus-anterior insula component and right premotor-visual components were observed than auditory or visual speech cue conditions, respectively. Interestingly, visual speech is perceived under white noise by tight negative coupling in the left inferior frontal region-right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus, right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, and can reflect efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
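The persistent-homology idea described above — tracking the connected components of a brain network across all possible correlation thresholds via single-linkage clustering — can be illustrated with a minimal sketch. The signals, coupling strength, and threshold below are invented for illustration; this is not the authors' pipeline.

```python
# Minimal sketch (assumption, not the study's code): connected components of
# a correlation network across thresholds via single-linkage clustering,
# i.e. a 0-dimensional persistent-homology filtration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
signals = rng.standard_normal((6, 200))      # 6 regions x 200 time points
signals[1] += 0.8 * signals[0]               # couple regions 0 and 1

corr = np.corrcoef(signals)
dist = 1.0 - corr                            # distance = 1 - correlation
np.fill_diagonal(dist, 0.0)
dist = (dist + dist.T) / 2                   # enforce exact symmetry

# Single-linkage merge heights give the filtration values at which
# connected components join (the "barcode" of 0-dim homology).
Z = linkage(squareform(dist, checks=False), method="single")
merge_heights = Z[:, 2]

# Components present at one chosen threshold of the filtration:
labels = fcluster(Z, t=0.5, criterion="distance")
print(len(set(labels)), "components at threshold 0.5")
```

Scanning `t` over all merge heights recovers the full hierarchy of components rather than a single arbitrary threshold, which is the point of the persistent framework.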
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
have recommended that Africans infected with HIV be treated with an antibiotic/sulfa drug combination known as cotrimoxazole in order to prevent...response is the subject of much debate. An estimated 500,000 African AIDS patients were being treated with antiretroviral drugs in mid-2005, up from 150,000...whether drugs can be made widely accessible without costly health infrastructure improvements. U.S. concern over AIDS in Africa grew in the 1980s, as the
Strike Planning Aid (ESPA) V-14; 5.4. Tactical Environmental Ship Routing (TESR) V-24; 5.5. Chaff Prediction and Planning System (CHAPPS) V-29 ... chapter four covers TDAs from TESS: NAVSAR, a program for search and rescue (SAR) at sea; ESPA, the Environmental Strike Planning Aid; TESR, the Tactical Environmental Ship Routing aid ... In 5.1, we give a brief history of TESS.
the body loses its ability to fight off other infections like the flu or pneumonia that it could normally handle with ease. These infections are... children orphaned by AIDS in sub-Saharan Africa. Failing states are not able to provide care for these orphans, so they are lured into crime or...four major program elements funded by USAID are primary prevention, caring for children affected by AIDS, home- and community-based care, and treatment
Pipino, Marica; Boldrini, Elena; Cristani, Alessandro
The latest AIDS congress (Barcelona) reminded the world of this dramatic situation. The data presented are remarkable: 5 million people were newly infected in 2001, and 68 million people could die of AIDS in the next 20 years, the largest share of them living in the South of the world. There are two different kinds of AIDS: the AIDS of the rich (2% of those infected), who have access to the modern therapies that have changed the course of the disease, now manageable outside the hospital, and the AIDS of the poor, without therapies or a future. The political and economic effort of Western governments, of the global anti-AIDS fund, and of non-governmental organizations is currently unable to answer this emergency adequately. The lack of sensitivity among Western doctors and the inflexible position of the Catholic Church on contraception complicate the situation further. It is to be hoped that this position can be overcome using one of the Catholic Church's own precious concepts: the distinction between "simpliciter" and "secundum quid", which would permit the use of condoms in cases of absolute need.
Department of Education, Washington, DC. Student Financial Assistance.
This publication explains what federal student financial aid is and what types of student aid are available. Two introductory sections present: federal student aid at-a-glance (what it is, who gets it, and how to get it) and finding out about student aid. The first section presents general information on the following subjects: student…
National Association of Pediatric Nurse Associates and Practitioners, Cherry Hill, NJ.
This brochure is designed to help parents answer the questions that their children may ask them about Acquired Immune Deficiency Syndrome (AIDS) and the Human Immunodeficiency Virus (HIV), the virus that causes AIDS. It provides basic information about AIDS and HIV, as well as sources for further information, such as the National AIDS Hotline. It…
Discusses the history of first aid training provisions in the United Kingdom with respect to the outdoor industry, what to look for in a first aid training provider, an experiential model of first aid training, and the current National Governing Body requirements for first aid training for various types of coaches and instructors. (TD)
Kalichman, Seth C.
This book focuses on AIDS education and answers 350 commonly asked questions about Human Immunodeficiency Virus (HIV) and Acquired Immune Deficiency Syndrome (AIDS) taken from questions addressed to two major urban AIDS hotlines (Milwaukee, Wisconsin, and Houston, Texas). Chapter 1, "HIV - The Virus That Causes AIDS," discusses: the HIV…
A former member of the faculty at San Fernando Valley State College describes the experience of adding methodology, production, and educational philosophy to a course which had been strictly "hardware oriented." (LS)
Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.
Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can act as a motivating factor that makes them active and reflective in their learning, intellectually engaged in a recursive process. This project was implemented in high school physics laboratory classes, resulting in 22 videos that are treated as audiovisual reports and analysed in terms of two components: theoretical and experimental. This kind of project allows students to spontaneously use features such as music, pictures, dramatization, and animation, even though the didactic laboratory is not generally a place where aesthetic and cultural dimensions are developed. This may be because digital media are more readily legitimated as cultural tools than as teaching strategies.
Gerson, Sarah A; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine
In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.
Hsiao, Jhih-Yun; Chen, Yi-Chuan; Spence, Charles; Yeh, Su-Ling
Bistable figures provide a fascinating window through which to explore human visual awareness. Here we demonstrate for the first time that the semantic context provided by a background auditory soundtrack (the voice of a young or old female) can modulate an observer's predominant percept while watching the bistable "my wife or my mother-in-law" figure (Experiment 1). The possibility of a response-bias account-that participants simply reported the percept that happened to be congruent with the soundtrack that they were listening to-was excluded in Experiment 2. We further demonstrate that this crossmodal semantic effect was additive with the manipulation of participants' visual fixation (Experiment 3), while it interacted with participants' voluntary attention (Experiment 4). These results indicate that audiovisual semantic congruency constrains the visual processing that gives rise to the conscious perception of bistable visual figures. Crossmodal semantic context therefore provides an important mechanism contributing to the emergence of visual awareness.
Rüsseler, J; Gerth, I; Heldmann, M; Münte, T F
The present study used event-related brain potentials (ERPs) to investigate audiovisual integration processes in the perception of natural speech in a group of German adult developmental dyslexic readers. Twelve dyslexic and twelve non-dyslexic adults viewed short videos of a male German speaker. Disyllabic German nouns served as stimulus material. The auditory and the visual stimulus streams were segregated to create four conditions: in the congruent condition, the visually articulated word (i.e., the lip movements of the utterance) and the auditory word were identical. In the incongruent condition, the auditory and the visual word were different. Furthermore, on half of the trials, white noise (45 dB SPL) was superimposed on the auditory trace. Subjects had to say aloud the word they understood after they viewed the video. Behaviorally, dyslexic readers committed more errors than normal readers in the noise conditions, and this effect was particularly present for congruent trials. ERPs showed a distinct N170 component at temporo-parietal electrodes that was smaller in amplitude for dyslexic readers. Both normal and dyslexic readers showed a clear effect of noise at centro-parietal electrodes between 300 and 600 ms. An analysis of error trials reflecting audiovisual integration (verbal responses in the incongruent noise condition that are a mix of the visual and the auditory word) revealed more positive ERPs for dyslexic readers at temporo-parietal electrodes 200-500 ms poststimulus. For normal readers, no such effect was present. These findings are discussed as reflecting increased effort in dyslexics under circumstances of distorted acoustic input. The superimposition of noise leads dyslexics to rely more on the integration of auditory and visual input (lip reading). Furthermore, the smaller N170 amplitudes indicate deficits in the processing of moving faces in dyslexic adults.
Bridwell, David A.; Roth, Cullen; Gupta, Cota Navin; Calhoun, Vince D.
Cortical responses to complex natural stimuli can be isolated by examining the relationship between neural measures obtained while multiple individuals view the same stimuli. These inter-subject correlations (ISCs) emerge from similarities in individuals' cortical responses to the shared audiovisual inputs, which may be related to their emergent cognitive and perceptual experience. Within the present study, our goal is to examine the utility of using ISCs for predicting which audiovisual clips individuals viewed, and to examine the relationship between neural responses to natural stimuli and subjective reports. The ability to predict which clips individuals viewed depends on the relationship of the EEG response across subjects and the way in which this information is aggregated. We conceived of three approaches for aggregating responses, i.e. three assignment algorithms, which we evaluated in Experiment 1A. The aggregate correlations algorithm generated the highest assignment accuracy (70.83%; chance = 33.33%) and was selected as the assignment algorithm for the larger sample of individuals and clips within Experiment 1B. The overall assignment accuracy was 33.46% within Experiment 1B (chance = 6.25%), with accuracies ranging from 52.9% (Silver Linings Playbook) to 11.75% (Seinfeld) within individual clips. ISCs were significantly greater than zero for 15 out of 16 clips, and fluctuations within the delta frequency band (i.e. 0-4 Hz) primarily contributed to response similarities across subjects. Interestingly, there was insufficient evidence to indicate that individuals with greater similarities in clip preference demonstrate greater similarities in cortical responses, suggesting a lack of association between ISC and clip preference. Overall these results demonstrate the utility of using ISCs for prediction, and further characterize the relationship between ISC magnitudes and subjective reports.
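The aggregate-correlations assignment idea — match a held-out subject's response to the clip whose group-average response it correlates with most strongly — can be sketched as follows. All data, dimensions, and noise levels are invented; this is a toy illustration under those assumptions, not the study's EEG analysis.

```python
# Toy sketch (invented data) of correlation-based clip assignment:
# each clip evokes a shared component plus subject-specific noise, and a
# test response is assigned to the best-correlated group-average template.
import numpy as np

rng = np.random.default_rng(1)
n_clips, n_subjects, n_samples = 4, 8, 500

shared = rng.standard_normal((n_clips, n_samples))        # clip-locked signal
responses = shared[:, None, :] + 1.5 * rng.standard_normal(
    (n_clips, n_subjects, n_samples))                     # per-subject noise

def assign_clip(test_response, group_means):
    """Return the index of the clip template most correlated with the response."""
    rs = [np.corrcoef(test_response, m)[0, 1] for m in group_means]
    return int(np.argmax(rs))

# Leave-one-subject-out evaluation for subject 0.
correct = 0
for clip in range(n_clips):
    test = responses[clip, 0]
    templates = responses[:, 1:, :].mean(axis=1)          # average over others
    correct += assign_clip(test, templates) == clip

accuracy = correct / n_clips
print(f"assignment accuracy: {accuracy:.2f}")
```

With a strong shared component the assignment is easy; in real EEG the shared fraction of variance is far smaller, which is why the reported accuracies sit well below 100% while still beating chance.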
Lee, D.; Greer, P. B.; Arm, J.; Keall, P.; Kim, T.
The purpose of this study was to test the hypothesis that audiovisual (AV) biofeedback can improve image quality and reduce scan time for respiratory-gated 3D thoracic MRI. For five healthy human subjects, respiratory motion guidance in MR scans was provided using an AV biofeedback system utilizing real-time respiratory motion signals. To investigate the improvement of respiratory-gated 3D MR images between free breathing (FB) and AV biofeedback (AV), each subject underwent two imaging sessions. Respiratory-related motion artifacts and imaging time were qualitatively evaluated, in addition to the reproducibility of external (abdominal) motion. 3D MR images acquired with AV biofeedback showed more anatomic information, such as a clear distinction of the diaphragm and lung lobes and sharper organ boundaries. The scan time was reduced from 401±215 s in FB to 334±94 s in AV (p-value 0.36). The root mean square variation of the displacement and period of the abdominal motion was reduced from 0.4±0.22 cm and 2.8±2.5 s in FB to 0.1±0.15 cm and 0.9±1.3 s in AV (p-value of displacement <0.01; p-value of period 0.12). This study demonstrated that audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI. These results suggest that AV biofeedback has the potential to be a useful motion management tool in medical imaging and radiation therapy procedures.
Rosen, Jamie; Albin, Stephanie; Sicherer, Scott H
Studies reveal deficits in caregivers' ability to prevent and treat food-allergic reactions with epinephrine and a consumer preference for validated educational materials in audiovisual formats. This study was designed to create brief, validated educational videos on food allergen avoidance and emergency management of anaphylaxis for caregivers of children with food allergy. The study used a stepwise iterative process including creation of a needs assessment survey consisting of 25 queries administered to caregivers and food allergy experts to identify curriculum content. Preliminary videos were drafted, reviewed, and revised based on knowledge and satisfaction surveys given to another cohort of caregivers and health care professionals. The final materials were tested for validation of their educational impact and user satisfaction using pre- and post-knowledge tests and satisfaction surveys administered to a convenience sample of 50 caretakers who had not participated in the development stages. The needs assessment identified topics of importance including treatment of allergic reactions and food allergen avoidance. Caregivers in the final validation included mothers (76%), fathers (22%), and other caregivers (2%). Race/ethnicity was white (66%), black (12%), Asian (12%), Hispanic (8%), and other (2%). Knowledge tests (maximum score = 18) increased from a mean score of 12.4 preprogram to 16.7 postprogram (p < 0.0001). On a 7-point Likert scale, all satisfaction categories remained above a favorable mean score of 6, indicating participants were overall very satisfied, learned a lot, and found the materials to be informative, straightforward, helpful, and interesting. This web-based audiovisual curriculum on food allergy improved knowledge scores and was well received.
Adams, Temitope F; Wongchai, Chatchawal; Chaidee, Anchalee; Pfeiffer, Wolfgang
Plant essential oils have been suggested as a promising alternative to the established mosquito repellent DEET (N,N-diethyl-meta-toluamide). Searching for an assay with generally available equipment, we designed a new audiovisual assay of repellent activity against mosquitoes "Singing in the Tube," testing single mosquitoes in Drosophila cultivation tubes. Statistics with regression analysis should compensate for limitations of simple hardware. The assay was established with female Culex pipiens mosquitoes in 60 experiments, 120-h audio recording, and 2580 estimations of the distance between mosquito sitting position and the chemical. Correlations between parameters of sitting position, flight activity pattern, and flight tone spectrum were analyzed. Regression analysis of psycho-acoustic data of audio files (dB[A]) used a squared and modified sinus function determining wing beat frequency WBF ± SD (357 ± 47 Hz). Application of logistic regression defined the repelling velocity constant. The repelling velocity constant showed a decreasing order of efficiency of plant essential oils: rosemary (Rosmarinus officinalis), eucalyptus (Eucalyptus globulus), lavender (Lavandula angustifolia), citronella (Cymbopogon nardus), tea tree (Melaleuca alternifolia), clove (Syzygium aromaticum), lemon (Citrus limon), patchouli (Pogostemon cablin), DEET, cedar wood (Cedrus atlantica). In conclusion, we suggest (1) disease vector control (e.g., impregnation of bed nets) by eight plant essential oils with repelling velocity superior to DEET, (2) simple mosquito repellency testing in Drosophila cultivation tubes, (3) automated approaches and room surveillance by generally available audio equipment (dB[A]: ISO standard 226), and (4) quantification of repellent activity by parameters of the audiovisual assay defined by correlation and regression analyses.
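As a toy illustration of one quantity measured in such an audio assay — the wing-beat frequency, reported above as 357 ± 47 Hz — the dominant spectral peak of a recorded trace can be estimated with an FFT. The synthetic tone, sample rate, and noise level below are assumptions for the sketch, not the authors' recordings or their sinus-function regression.

```python
# Illustrative sketch (synthetic data): estimate a mosquito's wing-beat
# frequency as the dominant peak of the real-input FFT of an audio trace.
import numpy as np

fs = 8000                                  # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)              # 2 s of audio
rng = np.random.default_rng(2)
tone = np.sin(2 * np.pi * 357 * t) + 0.3 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(t.size, 1 / fs)    # bin spacing = fs / N = 0.5 Hz
wbf = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"estimated wing-beat frequency: {wbf:.1f} Hz")
```

A 2 s window gives 0.5 Hz resolution, comfortably finer than the ±47 Hz spread reported for Culex pipiens, so a simple peak-pick suffices for this kind of screening.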
Krieger, N; Margo, G
Around the world, more and more women--principally poor women of color--are being diagnosed with and are dying of AIDS, the acquired immune deficiency syndrome. Yet, effective and appropriate prevention programs for women are sorely missing from the global program to control AIDS. To help us understand why this gap exists, and what we must do to close it, the three articles in this issue focus on women and AIDS. Examining the situation in such countries as Zimbabwe and South Africa, as well as in other economically underdeveloped and developed regions, the authors argue that women with the least control over their bodies and their lives are at greatest risk of acquiring AIDS. For example, the high rate of infection among women in Africa cannot be understood apart from the legacy of colonialism (including land expropriation and the forced introduction of a migrant labor system) and the insidious combination of traditional and European patriarchal values. Only by recognizing the socioeconomic and cultural determinants of both disease and sexual behavior, and only by incorporating these insights into our AIDS prevention programs, will we be able to curb the spread of this lethal disease.
Numerous cultural practices and attitudes in Africa represent formidable obstacles to the prevention of the further spread of acquired immunodeficiency syndrome (AIDS). Polygamy and concubinage are still widely practiced throughout Africa. In fact, sexual promiscuity on the part of males is traditionally viewed as positive--a reflection of male supremacy and male sexual prowess. The disintegration of the rural African family, brought about by urbanization, the migrant labor system, and poverty, has resulted in widespread premarital promiscuity. Contraceptive practices are perceived by many as a white conspiracy aimed at limiting the growth of the black population and thereby diminishing its political power. Condom use is particularly in disfavor. Thus, AIDS prevention campaigns urging Africans to restrict the number of sexual partners and to use condoms are unlikely to be successful. Another problem is that most Africans cannot believe that AIDS is sexually linked in that the disease does not affect the sex organs as is the case with other sexually transmitted diseases. The degree to which African governments are able to allocate resources to AIDS education will determine whether the epidemic can be controlled. Even with a massive outpouring of resources, it may be difficult to arouse public alarm about AIDS since Africans are so acclimated to living with calamities of every kind.
trade and commerce. 04 Education: learning theory, training techniques, instructional media and technology, teaching aids and methods, testing, library ...
Boshell, J; Gacharná, M G; García, M; Jaramillo, L S; Márquez, G; Fergusson, M M; González, S; Prada, E Y; de Rangel, R; de Cabas, R
Between January 1984 and December 1987 a total of 178 AIDS cases were reported to the Colombian Ministry of Health. The location of these cases suggests that the human immunodeficiency virus (HIV) is widely distributed in Colombia. Most of those afflicted (97%) have been adult males. HIV seroprevalence studies of selected population groups revealed the highest antibody prevalence (5.65% in females, 22.5% in males) among individuals involved in high-risk behaviors who participated in a free AIDS testing program. High prevalences (from 0.6% to 3.9% in females, and 14.6% to 15.9% in males) were also found in patients (primarily female prostitutes and male homosexuals) attending clinics for sexually transmitted diseases in several urban areas. The number of AIDS cases in Colombia has doubled or tripled annually since reporting began in 1984, a pattern similar to that observed worldwide.
Pacheco, James Edward
The breacher's training aid described in this report was designed to simulate features of magazine and steel-plate doors. The training aid enables breachers to practice using their breaching tools on components that they may encounter when attempting to enter a facility. Two types of fixtures were designed and built: (1) a large fixture incorporates simulated hinges, hasps, lock shrouds, and pins, and (2) a small fixture simulates the cross section of magazine and steel-plate doors. The small fixture consists of steel plates on either side of a structural member, such as an I-beam. The report contains detailed descriptions and photographs of the training aids, assembly instructions, and drawings.
Morales Suárez-Valera, M M; Llopis González, A; Ballester Calabuig, M L
Between 1985 and 1989, 376 cases of tuberculosis (TB) were diagnosed at the La Fe hospital in Valencia; 36 of these patients also had AIDS. We carried out a comparative study of the 340 TB-only cases and the 36 AIDS+TB cases, describing the social and occupational conditions of both groups, their hospitalization, associated pathologies, risk factors, the characteristics of the disease in each group, the diagnostic methods, and the treatment of each case.